
How racist is your computer?

Iris scanner. Owen Humphreys/PA Archive/PA Images

August 1, 2017   3 mins

Computers can’t be racist. In fact, they can’t be accused of any sort of state of mind, for the simple reason that they don’t have minds.

But remember the first rule of information processing: garbage in, garbage out. Feed a computer the wrong information about your credit history, for instance, and it’s unlikely to give you the right credit rating.

As Laura Hudson points out in a salutary briefing for FiveThirtyEight, loan approvals are just one of the things that computers are ‘deciding’ these days:

“Rather than relying on human judgment alone, organizations are increasingly asking algorithms to weigh in on questions that have profound social ramifications, like whether to recruit someone for a job, give them a loan, identify them as a suspect in a crime, send them to prison or grant them parole.”

If the alternative is a parole board composed of potentially racist human beings, a prisoner may prefer to take his (or her) chances with a fully computerised points-based system. But if the points are being totted up on the basis of data produced by a structurally-biased criminal justice system, then the prisoner could be worse off:

“Black people are arrested more often than whites, even when they commit crimes at the same rates. Black people are also sentenced more harshly and are more likely to be searched or arrested during a traffic stop. That’s context that could be lost on an algorithm (or an engineer) taking those numbers at face value…”
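To see how a scoring system can absorb that context blindly, here is a minimal sketch in Python, with invented figures and placeholder groups ‘A’ and ‘B’ (none of this comes from Hudson’s piece): both groups offend at the same rate, but one is policed more heavily, and a score that takes arrest counts at face value duly rates it as riskier.

```python
# Purely illustrative: two groups offend at the same rate, but group B is
# policed more heavily, so more of its offences end up as recorded arrests.
# A naive score that treats arrest counts as ground truth then rates
# group B as higher risk. All figures here are invented.

TRUE_OFFENDING_RATE = 0.10                   # identical for both groups
ARREST_PROBABILITY = {"A": 0.3, "B": 0.7}    # share of offences that lead to arrest

def recorded_arrest_rate(group: str) -> float:
    """Arrests per person as they appear in the data."""
    return TRUE_OFFENDING_RATE * ARREST_PROBABILITY[group]

def naive_risk_score(group: str) -> float:
    """A points-based score that takes the recorded rate at face value."""
    return recorded_arrest_rate(group) * 100

for group in ("A", "B"):
    print(group, round(naive_risk_score(group), 1))
# A 3.0
# B 7.0  -> group B looks more than twice as 'risky' despite identical behaviour
```

The bias here sits entirely in the measurement; nothing in the score itself looks prejudiced, because it is just arithmetic on the numbers it was given.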

A flesh-and-blood parole board might be using information derived from the same source, but there’s at least a chance that its members will exercise some wisdom in their interpretation of the data. A computer, though, once programmed, is set in its ways:

“Biased data can create feedback loops that function like a sort of algorithmic confirmation bias, where the system finds what it expects to find rather than what is objectively there.”
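One way to picture that loop, again with wholly invented parameters, is as a toy simulation in which the score decides where scrutiny goes, the scrutiny generates new records, and the records feed the next round of scoring.

```python
# Purely illustrative feedback loop: attention follows the score, extra
# attention produces more recorded incidents, and those records push the
# score higher still. Underlying behaviour is identical in both groups.

scores = {"A": 3.0, "B": 7.0}   # starting scores, e.g. built on biased arrest data
TRUE_RATE = 0.10                # the same real offending rate for both groups

for year in range(5):
    total = sum(scores.values())
    for group in scores:
        scrutiny = scores[group] / total           # attention follows the score
        new_records = TRUE_RATE * scrutiny * 100   # more scrutiny, more records
        scores[group] += new_records               # records feed the next score
    print(year, {g: round(s, 1) for g, s in scores.items()})

# The gap between A and B widens every year: the system 'finds what it
# expects to find', not what is objectively there.
```

Nothing in the underlying behaviour changes from year to year; only the records do, and the score dutifully confirms its own starting assumptions.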

Compounding the issue is that people tend to believe what their silicon pals tell them. Indeed, a variation on ‘garbage in, garbage out’ is ‘garbage in, gospel out’.

For instance, if you were presented with ‘evidence’ supplied by a police force with a known history of racism, you’d be on your guard. But what if that same information were to be computerised, combined with other data sources, processed through a complex algorithm and presented in a different place and at a later time? The original context would be lost even if its ultimate effect remained the same. One might call this ‘data laundering’.

Is there anything we can do to counter this effect? Hudson outlines one approach:

“Advocates say the first step is to start demanding that the institutions using these tools make deliberate choices about the moral decisions embedded in their systems, rather than shifting responsibility to the faux neutrality of data and technology.”

But what does making “deliberate choices” mean in this context? Does it mean rewriting algorithms to counteract real (and perceived) biases? If so, the adjustment could itself be subject to bias.

For instance, imagine if testing in schools were to be handled through a national computerised examination system. Such a facility would generate huge amounts of useful data about the way that different groups of pupils learn new facts and skills. But what if the same data were to reveal persistent differences in the cognitive processes of boys and girls (for which a growing body of evidence already exists)? To some activists this might be regarded as ‘neurosexism’: evidence only of bias in the way that the sexes are treated in the classroom and elsewhere. They might therefore demand a tweak to the analytical software in order to ‘correct’ the offending data.

And therein lies a further danger: rather than trying to establish the existence or non-existence of original biases, we will, in a world where human judgment is increasingly mediated through algorithms, seek only to change the algorithm. This would amount to the laundering not just of data, but of reality itself.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.

