
Are you truly unfakeable?



February 19, 2019

“If one could devise a successful chess machine, one would seem to have penetrated to the core of human intellectual endeavour.” So claimed an influential 1958 paper about the future of AI.

Then in 1997, the chess grandmaster Garry Kasparov lost to IBM’s Deep Blue. Needless to say, the core of human intellectual endeavour remained unpenetrated. Now there are any number of grandmaster-level phone apps around. It seems silly, looking back – the idea that human intellect could be encapsulated in something as constrained and limited as chess. We forget just how huge a task it seemed to early AI developers.

We are no longer so shocked when computers beat us at games: when the algorithm AlphaGo beat Lee Sedol, one of the world’s greatest Go players, in 2016, we were surprised but hardly bowled over.

Even though Go is a far more complex game than chess, and even though AlphaGo was a much more “intelligent” player than Deep Blue – it improved largely by playing against itself, rather than being taught move by move by humans – we now think of games as something that computers are good at.

But we still don’t think that about the ‘softer’ aspects of human thought: emotional intelligence, verbal skills. We don’t feel threatened by computers doing things that feel computery, like playing games or recognising images of faces – even though those things didn’t feel computery once. But computers having conversations, or writing poetry – that feels different.

Inevitably enough, though, it’s on its way. OpenAI, the nonprofit co-founded by Elon Musk, has just announced a new toy: a text-writing AI which, if you give it a few lines to start it off, will generate an amazingly plausible passage in the style you gave it.

Its stab at The Lord of the Rings reads an awful lot like a teenager trying to write a sequel to The Lord of the Rings; its essay about the American Civil War sounds like Donald Trump free-associating when he doesn’t know the answer to a question about American history. The Guardian’s Alex Hern gave it “roses are red, violets are blue” and it came back with genuinely haunting blank verse. (As well as a “weird but bafflingly compelling piece of literary memoir” when Hern tried again.)

When I first read about the OpenAI work, my instinct was that it must be a fake. It seemed so close to the Turing test: actual human language, apparent understanding of context and so on. But it’s not fake. The AI was trained on millions of web pages that had been linked from highly rated Reddit posts, and you can sort of see how its output could be stitched together from pieces of other texts.

This is how modern AI works. AlphaGo trained by playing against itself millions of times. Google researchers call this the “unreasonable effectiveness of data”: you can solve a lot of messy, complex problems just by throwing enough data points at them.
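To make the text generator itself concrete, here is a minimal sketch of prompt-based generation, using the small GPT-2 model that OpenAI did release alongside its announcement. The use of the Hugging Face transformers library, and the parameter values, are my own illustrative assumptions – this is not the withheld full-size model.

```python
# A minimal sketch of prompt-based text generation, assuming the
# Hugging Face "transformers" library and the openly released small
# GPT-2 model (not OpenAI's withheld full-size model).
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible

generator = pipeline("text-generation", model="gpt2")

prompt = "Roses are red, violets are blue,"
outputs = generator(prompt, max_length=60, num_return_sequences=1)

# The model extends the prompt one token at a time, sampling each token
# from a probability distribution learned from its training text.
print(outputs[0]["generated_text"])
```

Run with a different seed, the same prompt yields a different continuation – which is presumably why Hern’s second attempt produced memoir rather than verse.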

But you see how, immediately, I’m doing the same thing we all did when Deep Blue won. We think chess-playing ability must require true intelligence, until a computer does it, whereupon we stop thinking that. We think that verbal skill is truly, unfakeably human, until a computer displays it, whereupon (I expect) we stop thinking that. It’s just big data, I can see how it works, and so on.

The AI pioneer John McCarthy once said: “As soon as it works, no one calls it AI any more.” And there is good reason for that. A problem that can only be solved by creating a true general intelligence is known as an ‘AI-complete’ problem. We plainly haven’t cracked true, human-level general intelligence yet; anything that present-day computers can do must, therefore, be some lesser species of intelligence.

But we keep doing things that used to seem impossible for AI. Facial recognition, for instance, was a dream for a long time; now your phone probably has five apps that can do it for fun and then add stupid bunny ears. Understanding natural language was once thought near-impossible; now Siri does it with ease. The space of ‘only humans can do this’ is shrinking: human intelligence looks ever more like a collection of tricks and functions, cobbled together.

There’s a visual metaphor used by Max Tegmark, the MIT physicist and co-founder of the Future of Life Institute, which works on making general AI safe: a landscape of human skills, with a rising sea level that represents AI. Some bits of the landscape – chess, arithmetic, Go – are under water. Some bits are on the coast, like driving and translation. Others are on higher peaks, like science or art. But the water level keeps rising. This OpenAI breakthrough represents the waves lapping at the bit of the landscape marked ‘writing’.

Eliezer Yudkowsky, the blogger and AI theorist, once wrote that there will be “no fire alarm” for artificial general intelligence. We could be five years away from it and still not realise; it won’t be until it’s absolutely about to happen, and perhaps not even then, that everyone acknowledges it’s happening.

Enrico Fermi put the chances of a self-sustaining nuclear chain reaction at just 10%; three years later he built the first fission reactor himself. The Wright brothers thought powered flight was 50 years off, two years before they built it. General AI – an AI that can do everything a human can do – probably isn’t three years away, but it’s not clear whether things would feel any different if it were.

OpenAI has, against its usual policy, declined to release the full trained model behind its new toy, reasoning that it is too open to abuse. It’s not so much the generation of fake news articles that would concern me: it’s not a shortage of content that stops fake news spreading further, but the fact that it can only spread among people who lack the skills or the desire to check its references. (That still means it can spread a long way: I once wrote a piece which noted that half of the most-shared scientific stories about autism were false or unevidenced.)

It’s more that something like this could render review sites like TripAdvisor useless by swamping them with fake reviews, or turn Twitter into even more of a morass of lies and hatred than it already is. And even without OpenAI making the full model available, something like it will come along soon enough.

I don’t know how worried we should be. But I think it’s important to recognise that the waters of AI are rising. There are real reasons to worry that general AI, when it arrives, could be dangerous – that there is a small, but non-negligible, chance that it will eradicate human life. OpenAI itself was set up, in part, to reduce that risk.

It’s probably still decades until the real thing appears. But each year it feels like some bastion of humanity has fallen to AI. It’s time to take it seriously, so that when it happens, we’re prepared.


Tom Chivers is a science writer. His second book, How to Read Numbers, is out now.
