There’s a famous anecdote about AI, a sort of cautionary tale. It’s about tanks, and it’s probably not true. But it is relevant to the ongoing debate about the use of AI algorithms in hiring, or in parole, and whether they will entrench racism and sexism with a veneer of objectivity. The latest is an AI trained to detect the “best” interview candidates from their facial expressions and use of language.
Anyway, the story goes that a military AI was trained to detect tanks in photographs. It got shown lots of pictures, some with tanks in, some without, and it was told which was which. It worked out features that were common to the tank-containing pics, and then, when given a new picture from the same source, would use that info to say “yes, tank”, or “no, no tank”, as appropriate.
But apparently, when the AI was given pictures from a new source, it failed utterly. And it turned out that the AI had worked out that the photos with tanks in had been taken on sunny days, and the others on cloudy ones. So it was just classifying well-lit pics as “yes, tank”. When new pictures, taken by other sources which hadn’t been photographing sunbathing tanks, were used, the system broke down.
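If you want to see how easily this happens, here's a toy version in Python. Everything in it is made up for illustration: synthetic 32-by-32 "photos", a bright patch standing in for a tank, and an off-the-shelf scikit-learn classifier. None of it comes from the original story; it's just a sketch of the failure mode.

```python
# A toy reconstruction of the "sunny tanks" confound, on synthetic data.
# All sizes, brightness values and shapes are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photo(tank, sunny):
    # Overall brightness is the confound: sunny photos are brighter.
    base = 0.7 if sunny else 0.3
    img = rng.normal(base, 0.1, (32, 32))
    if tank:
        img[12:20, 10:22] += 0.05  # a faint "tank" shape
    return img.clip(0, 1).ravel()

labels = np.array([0, 1] * 200)

# Training source: every tank photo is sunny, every non-tank photo cloudy.
X_train = np.array([make_photo(bool(t), sunny=bool(t)) for t in labels])

# New source: sunshine no longer tracks tanks.
X_test = np.array([make_photo(bool(t), sunny=bool(rng.integers(2)))
                   for t in labels])

clf = LogisticRegression(max_iter=1000).fit(X_train, labels)
print("same-source accuracy:", clf.score(X_train, labels))  # ~1.0
print("new-source accuracy:", clf.score(X_test, labels))    # near chance
```

Because sunshine and tanks line up perfectly in the training data, the cheapest rule that separates the classes is simply "bright picture means tank", and that's the rule the model finds. It only falls apart when pictures arrive from a source where the sun and the tanks have parted company.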
The AI blogger Gwern has tried to trace the story back, and it transpires there are multiple versions of it: sometimes the task is tank vs no tank, sometimes it's identifying Soviet vs American tanks; sometimes the confounding factor is sunny days, sometimes it's the time of day, or that the film had been developed differently. Versions go back at least to the 1980s and possibly to the 1960s. Sometimes it's Soviet tanks in forests, sometimes it's Desert Storm.
There's another story about an AI set the task of telling husky dogs from wolves. All the wolves in its training data were photographed on snow, so the AI learnt to call any animal photographed against snow a wolf. Unlike the tank tale, this one really happened, though the AI was deliberately badly trained, on deliberately badly chosen training data: it was a test. When it was trained properly, it worked much better.
This story is used to make the point that any AI is only as good as the data you train it on, and that it is impossible to know how good that data actually is.
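The wolf experiment, as it happens, comes from research on explaining classifier decisions, where the giveaway was that highlighting what the model "looked at" showed snow, not animal. Here's a toy reconstruction in the same spirit, again with every number and shape invented for the purpose; the occlusion check at the end is a crude stand-in for the real explanation methods.

```python
# A toy reconstruction of the husky/wolf confound, plus a crude
# occlusion check. All values here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_photo(wolf, snow):
    bg = 0.9 if snow else 0.4  # snowy vs grassy background
    img = rng.normal(bg, 0.05, (32, 32))
    # The animal itself barely differs between the two classes.
    img[10:22, 10:22] = rng.normal(0.55 if wolf else 0.5, 0.05, (12, 12))
    return img.clip(0, 1)

# Deliberately bad training data: every wolf on snow, every husky on grass.
labels = np.array([0, 1] * 200)
X = np.array([make_photo(bool(w), snow=bool(w)).ravel() for w in labels])
clf = LogisticRegression(max_iter=1000).fit(X, labels)

# Occlusion check: grey out the animal. If the "wolf" probability barely
# moves, the model is reading the background, not the animal.
photo = make_photo(wolf=True, snow=True)
occluded = photo.copy()
occluded[10:22, 10:22] = 0.5
for name, im in [("with animal", photo), ("animal greyed out", occluded)]:
    print(name, clf.predict_proba(im.reshape(1, -1))[0, 1])
```

If greying out the animal barely changes the verdict, the model was never looking at the animal in the first place, which is the whole point of the story.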