
Why the Matrix needs humans

Ruled by AI: The Matrix is 20 years old this year


April 26, 2019   4 mins

Unbelievably, The Matrix turned 20 this year. Released on 31 March 1999, it introduced an apparently humdrum world, which turns out to be an illusion. In reality, the Earth is ruled by all-powerful machines that have trapped humanity in a hyper-realistic simulation. The only reason the machines haven’t wiped us out is that they, er, need us as an energy source. One might have thought that these ultra-sophisticated artificial intelligences would have devised a more reliable way of generating power than from a bunch of dribbling, smelly, hairless apes with rebellious tendencies – but there you go.

There’s a parallel between the world of The Matrix and what I’m assuming is the actual real world. Of course, our own AI systems aren’t about to achieve sentience, let alone the ability to enslave us. But here’s the parallel: though AI can ‘learn’, the learning process is in most cases deeply dependent on human beings. I don’t just mean the clever researchers who create the hardware and develop the software – I also mean the people who supply the data that’s the basic fuel of the learning process.

Those people include you and me. When you fill in one of those ‘I am not a robot’ forms on a website – perhaps by deciphering a piece of distorted text or clicking on images that contain a specified object – you are creating labelled data that an AI system can learn from. By spotting commonalities in labelled data items, the system learns to identify unlabelled data items that contain the same patterns. And, of course, once it has learned to do that, it can do it again and again with super-human speed and attention to detail.
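For readers who like to see the mechanics, here is a minimal sketch of that labelling-and-learning loop in Python, using the scikit-learn library. The feature vectors, labels and the ‘traffic light’ task are purely illustrative stand-ins for the images a CAPTCHA asks us to tag:

```python
# Minimal, illustrative sketch of learning from human-labelled data.
# Assumes scikit-learn is installed; the numbers below are made-up feature
# vectors standing in for the pictures a person has labelled.
from sklearn.linear_model import LogisticRegression

# Human-supplied labels: 1 = "contains a traffic light", 0 = "does not".
labelled_features = [
    [0.9, 0.1, 0.8],   # labelled by a person as containing a traffic light
    [0.8, 0.2, 0.7],
    [0.1, 0.9, 0.2],   # labelled by a person as not containing one
    [0.2, 0.8, 0.1],
]
human_labels = [1, 1, 0, 0]

# The system "spots commonalities" in the labelled items...
model = LogisticRegression()
model.fit(labelled_features, human_labels)

# ...then classifies new, unlabelled items that show the same patterns.
unlabelled = [[0.85, 0.15, 0.75], [0.15, 0.85, 0.2]]
print(model.predict(unlabelled))   # e.g. [1 0]
```

The point of the sketch is simply that nothing happens until the human labels arrive: the model has no opinion about traffic lights until somebody has told it, item by item, what a traffic light looks like.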

However, every time we want an AI system to learn a new pattern, it needs a fresh source of human-labelled data – this is the ‘energy source’ that powers AI’s continued progress.

In an important piece for Bloomberg, Matt Day, Giles Turner and Natalia Drozdiak write about the workers who help Alexa – the digital assistant who ‘lives’ in Amazon’s smart speaker products – to get better at her job:

“Amazon.com Inc. employs thousands of people around the world to help improve the Alexa digital assistant powering its line of Echo speakers. The team listens to voice recordings captured in Echo owners’ homes and offices. The recordings are transcribed, annotated and then fed back into the software as part of an effort to eliminate gaps in Alexa’s understanding of human speech and help it better respond to commands.”

This is the sort of thing Alexa’s human assistants do for her:

“One worker in Boston said he mined accumulated voice data for specific utterances such as ‘Taylor Swift’ and annotated them to indicate the searcher meant the musical artist.”

It’s not that Amazon and other companies in the field are denying this. Indeed, the whole point of a smart speaker is that it’s listening to us, ready to act upon our desires.

Is it a problem that Amazon’s human workforce may be listening in too? The article includes a statement from Amazon assuring us that they “only annotate an extremely small sample of Alexa voice recordings in order [to] improve the customer experience” and that “employees do not have direct access to information that can identify the person or account as part of this workflow”. (But see this follow-up report from Matt Day and his Bloomberg colleagues.)

A quick flick through UnHerd’s tech coverage should make it clear that we’re not exactly uncritical of the big tech companies. However, on this issue, I don’t think we should panic – yet. Day tells us that 78 million smart speakers were sold last year alone. To the extent that the tech companies can afford to pay people to listen in, it will be to gain a competitive edge in improving the technology itself. I doubt they’re interested in our private gossip. We give away enough of that to Facebook anyway.

If the AI companies are downplaying the human side of their operations, it’s to make the AI side seem more impressive than it actually is so far. It’s an uncomfortable irony that this most futuristic of technologies is dependent on so much mundane human labour.

As with so much else in the tech industry, costs are kept down by outsourcing much of the work to low-wage countries. However, this leaves companies with a problem when they want to train up AI systems in languages that are only spoken in high-wage countries.

Writing for The Verge, Angela Chen looks at the situation in Finland – a country whose beautiful but difficult language is spoken pretty much exclusively by Finns. Where can you find enough people both willing and able to do the laborious work of teaching an AI how to process spoken or written Finnish? Chen reports that one start-up has hit upon a solution: a joint venture with the authorities to get the country’s prisoners to do the work. Apparently, it’s preferred to traditional prison work – which includes metal-smithing – because it doesn’t involve equipment that can be used as a weapon. 

It’s also a great example of the domain dependency of AI – however well it learns one task, it is extremely limited in applying that skill to another, and so must turn once more to human tutors. Far from us finding ourselves trapped in a computer-generated simulation, it is AI that lives within the ‘human matrix’ – the various bubbles of language and thought through which we perceive and conceptualise the real world.

Of course, we shouldn’t forget that the army of human workers currently being used to fuel the development of AI is ultimately working towards its own obsolescence – a state in which computers will be able to recognise anything we can recognise and understand anything we can understand.

This will mean a world of surveillance unlimited by the availability of human eyes and ears. Potentially, anything we say or do within sight or hearing of a machine will be looked at, listened to and, in some way, interpreted. Just how much power that will give the machines (and their controllers), and what they will do with it, remains to be seen.

 


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.

peterfranklin_
