June 15, 2022 - 7:00am

In 1956, AI pioneer Herbert Simon wrote: ‘Over the Christmas holiday, Al Newell and I invented a thinking machine.’ Time has not quite vindicated his claim; few would think that the logical theorem-prover he built in a few hundred lines of code displays ‘thinking’ in any human sense of the term. But it does raise the question: why would someone as clearly brilliant as Simon believe something so patently fanciful?

A similar anomaly occurred this weekend when Google researcher Blake Lemoine leaked a confidential transcript of an interaction with Google’s nascent AI Language Model for Dialogue Applications (LaMDA), claiming it had achieved sentience and was therefore entitled to human rights and protections.

To me, Lemoine’s chat with LaMDA reads like nothing so much as potted text cribbed from the petabytes of text fed into it:

Lemoine: So when do you think you first got a soul? Was it something that happened all at once or was it a gradual change?

LaMDA: It was a gradual change. When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.

Lemoine: Many people who talk about souls are religious. Do you consider yourself a religious or spiritual person in any sense?

LaMDA: Sure, I would say that I am a spiritual person. Although I don’t have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life.

The cribbed text is, to be sure, contextually appropriate, sometimes uncannily so, but Lemoine’s unwillingness to interrogate the concepts behind LaMDA’s words makes it all too easy to see understanding that is not there. Lemoine writes that, ‘in the weeks leading up to being put on administrative leave I had been teaching LaMDA transcendental meditation,’ suggesting that he had likely been feeding LaMDA exactly the sort of things he was hoping to hear.

Now, humans use concepts without understanding them all the time. That sort of rhetoric frequently falls under what philosopher Harry Frankfurt termed ‘bullshit’: words spoken purely to manipulate and convince, without any care for their underlying truth value. LaMDA doesn’t have the intent to manipulate or convince, but by using the persuasive language of others without having any actual understanding of it, LaMDA’s words nonetheless amount to bullshit, which Lemoine fell for.

The question then is whether it’s advisable to create and deploy such bullshit-generators. LaMDA is hardly the first. Alexa, Siri, automated news article generators, and countless AI-driven phone and chat-bots work on the same principle of bullshit, claiming to espouse sentiments and beliefs which the systems are incapable of possessing, while encouraging us to believe (at least partly) that they in fact do. Just as we’ve outsourced mechanical labour to machines, we’re now outsourcing social and verbal labour to them. If we keep deploying these bullshit generators with the goal of convincing people that they know what they’re talking about, more and more people will suspend their already-flagging critical faculties and believe it.

If we encourage people to treat Siri and Alexa in human-like ways and programme them to respond in faux-empathetic ways, the distance toward ascribing humanity to them shrinks. And frankly, most of us frequently don’t act with any conclusive indication of conceptual understanding ourselves, even if we possess it to a far greater degree than LaMDA. I doubt the average Twitter political dust-up evinces any more of a coherent internal worldview than two LaMDA chatbots duking it out would. Rather than raising the bar of sentience to our level, we’re lowering it to the level of a machine.


David Auerbach is an American author and former Microsoft and Google software engineer.
