
Should we trust AI to hire and fire? Algorithms are just as biased as the humans who create them

An AI has been trained to detect the “best” interview candidates from their facial expressions and use of language



October 3, 2019

There’s a famous anecdote about AI, a sort of cautionary tale. It’s about tanks, and it’s probably not true. But it is relevant to the ongoing debate about the use of AI algorithms in hiring, or in parole, and whether they will entrench racism and sexism with a veneer of objectivity. The latest is an AI trained to detect the “best” interview candidates from their facial expressions and use of language.

Anyway, the story goes that a military AI was trained to detect tanks in photographs. It got shown lots of pictures, some with tanks in, some without, and it was told which was which. It worked out features that were common to the tank-containing pics, and then, when given a new picture from the same source, would use that info to say “yes, tank”, or “no, no tank”, as appropriate.

But apparently, when the AI was given pictures from a new source, it failed utterly. And it turned out that the AI had worked out that the photos with tanks in had been taken on sunny days, and the others on cloudy ones. So it was just classifying well-lit pics as “yes, tank”. When new pictures, taken by other sources which hadn’t been photographing sunbathing tanks, were used, the system broke down.

The AI blogger Gwern has tried to trace the story back, and it transpires there are multiple iterations of it: sometimes it’s saying tank vs no tank, sometimes it’s identifying Soviet vs American tanks; sometimes it’s ‘sunny days’ that’s the confounding factor, sometimes it’s the time of day, or that the film had been developed differently. Versions go back at least to the 1980s and possibly to the 1960s. Sometimes it’s Soviet tanks in forests, sometimes it’s Desert Storm.

There's another story about an AI set the task of telling husky dogs from wolves. All the wolves in its training data were photographed on snow, so the AI learnt to call any animal photographed against snow a wolf. In this real story, the AI was deliberately badly trained, on deliberately badly chosen training data – it was a test. But when it was trained properly, it worked much better.

Both stories make the same point: any AI is only as good as the data you train it on, and it is impossible to know how good the data you're training it on actually is.
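If you want a concrete feel for how easily that happens, here's a toy sketch in Python. Everything in it is invented for illustration – the "brightness" and "tank shape" features, the numbers, all of it – and it bears no relation to any real tank detector:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training photos: whenever there's a tank, the day happens to be sunny,
# so "brightness" tracks the label almost perfectly. "tank_shape" is the
# genuine signal, but it's noisy.
tank = rng.integers(0, 2, n)                 # 1 = tank present
brightness = tank + rng.normal(0, 0.1, n)    # the accidental confound
tank_shape = tank + rng.normal(0, 2.0, n)    # weak real signal
X_train = np.column_stack([brightness, tank_shape])

model = LogisticRegression(max_iter=1000).fit(X_train, tank)

# Photos from a new source: brightness now has nothing to do with tanks.
tank_new = rng.integers(0, 2, n)
brightness_new = rng.normal(0.5, 0.5, n)     # confound broken
tank_shape_new = tank_new + rng.normal(0, 2.0, n)
X_new = np.column_stack([brightness_new, tank_shape_new])

print("accuracy on training-style photos:", model.score(X_train, tank))
print("accuracy on photos from a new source:", model.score(X_new, tank_new))
```

The classifier looks almost perfect on its own training photos, because the confound does all the work; as soon as the confound is broken, it falls back towards guessing.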

While this may be something of an oversimplification, it’s essentially true. The reason why AI is useful – or machine learning software that uses neural networks, which is usually what people mean by AI – is that it can work through absolutely vast amounts of data and find patterns and correlations that humans can’t. No human could go through that much information.

This capacity has profound impacts. In science, the huge amounts of data thrown up by, for instance, biomedical research or astronomy can be analysed to reveal previously unexpected links: genome-wide association studies found that multiple sclerosis is, in fact, an autoimmune disease, like rheumatoid arthritis, even though it presents as a neurodegenerative disease like Parkinson's or Alzheimer's.

But the trouble is that – pretty much by definition – it is impossible for a human to check those datasets. If a human could check them, you wouldn’t need the AI. And if a dataset is, for some reason, imperfect, then the AI will learn things from it that you won’t want it to. It won’t learn to say there’s a tank in every picture of a sunny day – modern image-recognition software is cleverer than that, and trained on much wider datasets, and there are ways around that sort of problem anyway – but it may have analogous, but more subtle and perhaps more insidious, problems.

That's fundamentally the worry about AI being racist, or sexist. You might train your AI on a dataset of people who previously performed well at a job, or people who have or haven't reoffended after release from prison, and those datasets contain large amounts of information about each person – years of previous experience, say, or number of previous offences – but also age, sex and ethnicity.

And if the training data tended to favour people of a certain sex or race, then the AI may learn to preferentially pick those people as well. It doesn’t even matter if the data isn’t the product of people being racist or sexist themselves. The training data may be full of people who really did do well at their job, or not reoffend – analogously to the training data correctly labelling whether or not a picture is of a tank.

It may then turn out that, for societal reasons, women or black people do, on average, less well on those criteria. Just as the apocryphal AI was able to categorise the tank pictures by whether they were taken in sunshine, the hiring-algorithm AI might categorise potential hires by whether they are white or male.

Even if you don't tell the AI people's sex or race, that may not help, because it can often work them out to a high degree of accuracy from proxies – postcodes, for instance, or first names.
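Here's an equally artificial sketch of that proxy problem. The model below is never shown the protected attribute, yet it can recover it from a "neutral" feature; the postcode distributions are made up purely to illustrate the mechanism, not drawn from any real system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# The protected attribute we deliberately withhold from the model.
group = rng.integers(0, 2, n)

# Postcode areas 0-9: the two groups are unevenly spread across them,
# the way residential patterns often are in real data.
p_a = [.30, .25, .15, .10, .05, .05, .04, .03, .02, .01]
p_b = list(reversed(p_a))
postcode_area = np.where(group == 1,
                         rng.choice(10, n, p=p_a),
                         rng.choice(10, n, p=p_b))

years_experience = rng.normal(5, 2, n)       # an innocuous-looking feature

# The model never sees `group` -- only the "neutral" features.
X = np.column_stack([postcode_area, years_experience])
X_tr, X_te, g_tr, g_te = train_test_split(X, group, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, g_tr)
print("accuracy recovering the withheld attribute:", clf.score(X_te, g_te))
```

The score comes out well above the 50% you'd get by guessing: the "neutral" features encode the attribute anyway, so leaving the sensitive column out of the spreadsheet doesn't leave it out of the model.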

The job-interview AI apparently looks at facial expressions, and listens to the words the interviewee uses, and compares them to 25,000 “pieces of facial and linguistic information” taken from people who have gone on to be good at a job. (Incidentally, the description that the company’s CTO gives of the analysis of verbal skills – “do you use passive or active words? Do you talk about ‘I’ or ‘we’?” – is deeply suspect from a linguistic point of view. The idea that people who say “I” a lot are narcissistic is a myth, and people talk an awful lot of crap about “the passive”.)

But, again, the AI can only be as good as the training data. It may be that “people who turned out to be good at their jobs” were more likely to have certain facial expressions or turns of phrase, and it may well be that those facial expressions or turns of phrase are more common among certain ethnic groups. And “people who turned out to be good at their jobs” are, of course, people who got hired in the first place. It is almost impossible to remove hidden biases. (I would be intrigued to know how, for instance, autistic people, or disabled people, would do on this facial-expressions stuff.)

That is not a reason to throw out the whole idea of AI in hiring, or in other areas. Algorithms – even simple ones – have been shown to do a better job of predicting outcomes than humans in a wide variety of areas. The AI might be biased, but only because the humans it is replacing were biased too.

For instance, Amazon’s famously sexist hiring algorithm, scrapped last year, only undervalued female applicants because the human hiring decisions it was trained on had systematically undervalued them. Getting rid of the AI doesn’t get rid of the bias; in that case, the AI made the bias explicit, and Amazon now knows that its hiring practices were skewed. But it is a reason to be extremely wary of throwing AI into the mix and saying: well, now we’re unbiased, look, an algorithm did it, so it must be OK. And that’s doubly the case where the algorithm is proprietary, so you can’t look into its guts and see where it goes wrong.

That’s key. I said at the beginning that it is impossible to know how good the data you’re training an AI on actually is, and that’s true. But there are things you can do to try to find out. There are researchers working on something called “provably correct software systems”, which are somewhat misnamed – you can’t ever prove that software is correct – but you can go in and check parts of the data, or the weighting of the nodes in its network, which can increase your confidence.
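As a toy example of the kind of audit that access makes possible, here's a sketch that fits a deliberately transparent linear model to invented hiring data and reads off which features carry the weight. Nothing in it reflects any real system; the "proxy" and "skill" features are stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000

group = rng.integers(0, 2, n)                # protected attribute
proxy = group + rng.normal(0, 0.3, n)        # e.g. a postcode-derived score
skill = rng.normal(0, 1, n)                  # genuinely job-relevant signal

# "Past hiring outcomes" that were themselves partly driven by group membership.
hired = (0.5 * skill + 1.5 * group + rng.normal(0, 1, n)) > 0.75

X = StandardScaler().fit_transform(np.column_stack([proxy, skill]))
model = LogisticRegression().fit(X, hired)

# With access to the model, you can at least see what it is leaning on.
for name, coef in zip(["proxy", "skill"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

If the proxy's weight rivals or dwarfs the skill weight, that's a red flag – and it's exactly the kind of thing you cannot spot from outside a proprietary black box.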

If an AI is owned by a private company which won’t let you go and check the data or the algorithms, though (as is the case with the hiring one, and some of the parole ones), it becomes very hard to be confident. So for the time being, it’s worth being very, very wary of anyone who says their fancy AI can tell you who’s going to be good at their job. You can probably trust it to identify tanks, though.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.

