April 8, 2021

After achieving her childhood dream of going to MIT, Joy Buolamwini tried to build an art project called “The Aspire Mirror”. The device she designed would greet her every morning by projecting a different, inspiring image onto her reflection. A fearless lion’s face, for example. The trouble was, the off-the-shelf facial recognition program Buolamwini was using didn’t even recognise her face as a face. To get the Aspire Mirror to work, she had to don a white plastic mask.

That was the beginning of her voyage of discovery into the biases of computer code, one which would — as a new Netflix documentary, Coded Bias, shows — lead her to testify before lawmakers in the U.S. Congress. Facial recognition technology (FRT), Buolamwini proved, is often poor at identifying black faces and female faces — largely because it is “trained” on data sets with far more images of white men. The bias may be inadvertent, but, as Buolamwini points out, it has very real consequences.


Still, in a biased world, do you really want FRT to perform well in recognising your face? If you’re trying to unlock your phone, probably yes. If you want to project a fearless lion’s face onto your reflection in the morning, I suppose so. But if you’re walking past a Metropolitan Police FRT van, possibly not. In Coded Bias, Buolamwini’s story is interwoven with that of Big Brother Watch, as the organisation challenges police use of FRT. In a sequence filmed on London’s streets, a 14-year-old black schoolboy is stopped and searched by plain-clothes officers after his face is wrongly matched by FRT with a suspect on their database.

Previous surveys in the UK have found that ethnic minority respondents were less keen for the police to have unfettered use of FRT. Not just because the technology is less likely to wrongly tag a white schoolboy, though that is true. Ethnic minorities’ experience of bias — not their knowledge of the technology — makes them less likely to see the enhancement of police powers as a good thing. More accurate technology, trained on more diverse databases, won’t help that lack of trust in the police.

“What is the purpose of identification?” asks apartheid historian Patric Tariq Mellet when Buolamwini visits South Africa. Mellet displays the racial classification papers that allocated South Africans to categories — based not on who they felt themselves to be, but on what the state decided they were. The purpose of this identification was clear: to control where the carrier could go, what they could do, even whom they could marry.

With facial recognition technology, there is no need to carry papers. Your face is your ID card. You could be sorted by any system, not just by visible characteristics like skin colour or sex, but by linking your face to all the unseen information sitting in databases. Not only your credit history, but your internet search history. Not just your home postcode, but your mood as expressed in walking speed.

This is where the visible bias of the inadequately trained FRT systems and the invisible bias of algorithms, trained on data from an unequal past, come together. It’s relatively easy to object that a program has misclassified your face or failed to recognise you as a person. It’s much harder to discover that you’ve been wrongly tagged with a poor credit history or tendencies towards political extremism.

In any case, “wrongly” becomes meaningless when algorithms apply population-scale predictions to individuals. The best predictor of your likelihood of turning to crime is your postcode. Is it fair to tag everyone from the same neighbourhood as future criminals? The best predictor of your future educational attainment is your school’s past performance. Is it fair for an algorithm to allocate exam marks on that basis, as the A-level algorithm did last year?

Big data links together disparate sources of information to profile each of us. Systems using FRT can attach each data profile to a specific physical person in the real world. This is clearly a gift to anyone with power, political or economic. But it also threatens to transform our personal relationships.

In a beautifully simple and telling sequence filmed in China, a skateboarding young woman explains how she uses the ubiquitous facial recognition systems all the time. She uses her face to buy groceries, because her face is linked with her bank account. It’s also linked to her Social Credit score, which combines data from official records with “bad behaviour” like “making false reports” online. People with a low score may be denied travel on trains and aeroplanes.

Unlike most profiling in the West — where you may never know that you are paying more for flights, or that you are on a police watch list for domestic extremism — China’s Social Credit scores are public. Your trustworthiness is not a personal quality to be discovered by trial and error, but a numerical value calculated by an algorithm and available for anyone to see.

This is a good thing, says our Chinese woman. When she meets somebody new, she doesn’t have to use her own judgment to decide whether to trust him. She can save time by checking his Social Credit score before deciding whether to be his friend. But trust is not a substance to be quantified, like a bank balance. It’s a relationship. We trust our friends and family because we have built up bonds of mutual commitment, of empathy and intimacy, by getting to know them, by sharing our lives with them, by opening ourselves to each other and taking the risk of being let down or betrayed.

Where in China the algorithmic sorting is compulsory, in the democratic West choices remain. Of course, applicants for welfare, and arrested suspects seeking bail, can’t opt out of systems run by algorithms — systems in which the stakes are high and the algorithms are encoded with prejudice. But because we live in democracies, we can object to how these algorithms are used. There is no reason, in principle, why algorithmic relationships between institutions and individuals should not also be subject to democratic oversight, and in some cases they are.

But this brings us to the hardest, most important question: why don’t we object more often? Why are we so relaxed about being identified and sorted by machines? Most people in the UK do not think that police use of FRT should be banned. The idea that we should all have digital “Vaccine Passports” or “Immunity Certificates” to resume public life is welcomed by many. In spite of repeated scandals about how our data is collected and used, most of us continue to use social media to interact with our friends.

Of course, when unfair outcomes emerge from machines programmed to learn from the past, like the Amazon hiring algorithm that discriminated against female applicants because past employees tended to be men, we object. But the general principle that machines should be able to recognise us and sort us into categories that will determine our options in life does not seem to bother most of us. In many ways, we still put our faith in machines to be fairer than humans, less biased, more objective.

Is this fatalism or ignorance of the true extent of the power relationships embedded in the algorithms? Or do we, like the Chinese interviewee in Coded Bias, simply not want to exercise our own judgment about the people we meet? Perhaps we like the ways that technology mitigates the riskiness of unmediated human life. Instead of taking responsibility for hiring this person and rejecting that one, why not write some code that can choose your next employee?

We tend to think of algorithms as tools in the hands of the powerful, guided by super-intelligent people to achieve their sinister ends. But although the technology has powerful effects on the lives of individuals, it also veils the weakness of those who use it. They lack the courage to exercise judgment. They lack a clear vision of a future to steer towards. They automate relationships with the people they should be persuading, or inspiring, or helping, or leading. It’s not just the bias in these systems that should trouble us. It’s the rush to abandon human agency to them.