There’s a famous anecdote about AI, a sort of cautionary tale. It’s about tanks, and it’s probably not true. But it is relevant to the ongoing debate about the use of AI algorithms in hiring, or in parole, and whether they will entrench racism and sexism with a veneer of objectivity. The latest is an AI trained to detect the “best” interview candidates from their facial expressions and use of language.
Anyway, the story goes that a military AI was trained to detect tanks in photographs. It got shown lots of pictures, some with tanks in, some without, and it was told which was which. It worked out features that were common to the tank-containing pics, and then, when given a new picture from the same source, would use that info to say “yes, tank”, or “no, no tank”, as appropriate.
But apparently, when the AI was given pictures from a new source, it failed utterly. And it turned out that the AI had worked out that the photos with tanks in had been taken on sunny days, and the others on cloudy ones. So it was just classifying well-lit pics as “yes, tank”. When new pictures, taken by other sources which hadn’t been photographing sunbathing tanks, were used, the system broke down.
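The sunny-day failure is just a spurious correlation, and you can reproduce the shape of it in a few lines. The sketch below is purely illustrative – not any real tank detector – and reduces each "photo" to a single made-up brightness number. Because every tank photo in the flawed training set is sunny, a classifier that thresholds on brightness scores perfectly on that data without learning anything about tanks, then collapses on photos from a new source:

```python
import random

# Toy illustration of the tank anecdote (hypothetical data, not a real system).
# A "photo" is just its average brightness, which here depends only on the
# weather -- never on whether a tank is present.

random.seed(0)

def make_photo(tank, sunny):
    base = 180 if sunny else 60  # sunny photos are bright, cloudy ones dark
    return {"brightness": base + random.uniform(-20, 20), "tank": tank}

# Flawed training set: every tank photo is sunny, every non-tank photo cloudy.
train = [make_photo(True, True) for _ in range(100)] + \
        [make_photo(False, False) for _ in range(100)]

# "Train": put a brightness threshold halfway between the two class means.
mean = lambda xs: sum(xs) / len(xs)
tank_mean = mean([p["brightness"] for p in train if p["tank"]])
other_mean = mean([p["brightness"] for p in train if not p["tank"]])
threshold = (tank_mean + other_mean) / 2

classify = lambda p: p["brightness"] > threshold

# Perfect on photos from the same flawed source...
accuracy = mean([classify(p) == p["tank"] for p in train])  # 1.0

# ...but on a new source, where tanks also appear on cloudy days and empty
# fields on sunny ones, every single answer is wrong.
new_source = [make_photo(True, False) for _ in range(100)] + \
             [make_photo(False, True) for _ in range(100)]
new_accuracy = mean([classify(p) == p["tank"] for p in new_source])  # 0.0
```

The classifier is doing exactly what it was asked to do – separating the training examples – which is why the problem only shows up when the data source changes.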
The AI blogger Gwern has tried to trace the story back, and it transpires there are multiple iterations of it: sometimes it’s saying tank vs no tank, sometimes it’s identifying Soviet vs American tanks; sometimes it’s ‘sunny days’ that’s the confounding factor, sometimes it’s the time of day, or that the film had been developed differently. Versions go back at least to the 1980s and possibly to the 1960s. Sometimes it’s Soviet tanks in forests, sometimes it’s Desert Storm.
There's another story about an AI set the task of telling husky dogs from wolves. All the wolves in its training data were photographed on snow, so the AI learnt to call any animal photographed against snow a wolf. In this real story, the AI was deliberately badly trained, on deliberately badly chosen training data – it was a test. But when it was trained properly, it worked much better.
These stories are used to make the point that any AI is only as good as the data you train it on – and that it is impossible to know how good the data you're training it on actually is.
While this may be something of an oversimplification, it’s essentially true. The reason why AI is useful – or machine learning software that uses neural networks, which is usually what people mean by AI – is that it can work through absolutely vast amounts of data and find patterns and correlations that humans can’t. No human could go through that much information.
This capacity has profound impacts. In science, the huge amounts of data thrown up by, for instance, biomedical research, or astronomy, can be analysed to reveal previously unexpected links: genome-wide association studies found that multiple sclerosis is, in fact, a disease of the auto-immune system, like rheumatoid arthritis, even though it presents as a neurodegenerative disease like Parkinson's or Alzheimer's.
But the trouble is that – pretty much by definition – it is impossible for a human to check those datasets. If a human could check them, you wouldn’t need the AI. And if a dataset is, for some reason, imperfect, then the AI will learn things from it that you won’t want it to. It won’t learn to say there’s a tank in every picture of a sunny day – modern image-recognition software is cleverer than that, and trained on much wider datasets, and there are ways around that sort of problem anyway – but it may have analogous, but more subtle and perhaps more insidious, problems.
That’s fundamentally the worry about AI being racist, or sexist. You might train your AI on some dataset of people who previously performed well at a job, or people who have or haven’t reoffended after release from prison, and those datasets contain large amounts of information about each person: years of previous experience, say, or number of previous offences – but also age, sex and ethnicity.
And if the training data tended to favour people of a certain sex or race, then the AI may learn to preferentially pick those people as well. It doesn’t even matter if the data isn’t the product of people being racist or sexist themselves. The training data may be full of people who really did do well at their job, or not reoffend – analogously to the training data correctly labelling whether or not a picture is of a tank.
Then it may be that, for societal reasons, women or black people do, on average, less well on those criteria. Just as the apocryphal AI was able to categorise the tank pictures by whether the pic was taken in sunshine, the hiring-algorithm AI might categorise potential hires by whether they are white or male.
Even if you don’t tell the AI people’s sex or race, it may not help, because it may be able to work it out to a high degree of accuracy from proxies – postcodes, for instance, or first names.
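This proxy problem can also be shown with a toy example. In the hypothetical sketch below (made-up records, made-up postcodes), a protected "group" attribute is deleted from the data, as a naive fairness fix would do – yet because group and postcode are correlated, the deleted attribute can be reconstructed from the postcode alone with 90% accuracy:

```python
from collections import Counter

# Hypothetical records: group A mostly lives in postcode "N1", group B in "S2".
records = (
    [{"postcode": "N1", "group": "A"}] * 90 + [{"postcode": "S2", "group": "A"}] * 10 +
    [{"postcode": "S2", "group": "B"}] * 90 + [{"postcode": "N1", "group": "B"}] * 10
)

# "Remove" the protected attribute, keeping only the innocuous-looking field.
features = [{"postcode": r["postcode"]} for r in records]

# Reconstruct the group from the postcode alone: for each postcode, predict
# whichever group was most common there in the original data.
majority = {}
for pc in {"N1", "S2"}:
    counts = Counter(r["group"] for r in records if r["postcode"] == pc)
    majority[pc] = counts.most_common(1)[0][0]

recovered = [majority[f["postcode"]] for f in features]
accuracy = sum(g == r["group"] for g, r in zip(recovered, records)) / len(records)
# 90% of the "deleted" attribute comes straight back out of the proxy.
```

Any model trained on the postcode field therefore has, in effect, been given the protected attribute – whether or not anyone intended that.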
The job-interview AI apparently looks at facial expressions, listens to the words the interviewee uses, and compares them to 25,000 “pieces of facial and linguistic information” taken from people who have gone on to be good at a job. (Incidentally, the description that the company’s CTO gives of the analysis of verbal skills – “do you use passive or active words? Do you talk about ‘I’ or ‘we’?” – is deeply suspect from a linguistic point of view. The idea that people who say “I” a lot are narcissistic is a myth, and people talk an awful lot of crap about “the passive”.)
But, again, the AI can only be as good as the training data. It may be that “people who turned out to be good at their jobs” were more likely to have certain facial expressions or turns of phrase, and it may well be that those facial expressions or turns of phrase are more common among certain ethnic groups. And “people who turned out to be good at their jobs” are, of course, people who got hired in the first place. It is almost impossible to remove hidden biases. (I would be intrigued to know how, for instance, autistic people, or disabled people, would do on this facial-expressions stuff.)
That is not a reason to throw out the whole idea of AI in hiring, or other areas. Algorithms – even simple ones – have been shown to do a better job of predicting outcomes than humans in a wide variety of areas. The AIs might be biased, but only because the humans they are replacing were biased too.
For instance, Amazon’s famously sexist hiring algorithm, scrapped last year, only undervalued female applicants because the human hiring decisions it was trained on had been systematically undervaluing female applicants. Getting rid of the AI does not get rid of the bias; in that case, the AI made the bias explicit, and Amazon now knows that its hiring practices were skewed. But it is a reason to be extremely wary of throwing AI into the mix and saying: well, now we’re unbiased, look, an algorithm did it, so it must be OK. And it’s doubly the case where the AI algorithm is proprietary – so you can’t look into its guts and see where it goes wrong.
That’s key. I said at the beginning that it is impossible to know how good the data you’re training an AI on actually is, and that’s true. But there are things you can do to try to find out. There are researchers working on something called “provably correct software systems”, which are somewhat misnamed – you can’t ever prove that software is correct – but you can go in and check parts of the data, or the weighting of the nodes in its network, which can increase your confidence.
If an AI is owned by a private company which won’t let you go and check the data or the algorithms, though (as is the case with the hiring one, and some of the parole ones), it becomes very hard to be confident. So for the time being, it’s worth being very, very wary of anyone who says their fancy AI can tell you who’s going to be good at their job. You can probably trust it to identify tanks, though.