
No, Google AI is not sentient

This man is wrong. Via Getty

June 15, 2022 - 7:00am

In 1956, AI pioneer Herbert Simon wrote: ‘Over the Christmas holiday, Al Newell and I invented a thinking machine.’ Time has not quite vindicated his claim; few would think that the logical theorem-prover he built in a few hundred lines of code displays ‘thinking’ in any human sense of the term. But it does raise the question: why would someone as clearly brilliant as Simon believe something so patently fanciful?

A similar anomaly occurred this weekend when Google researcher Blake Lemoine leaked a confidential transcript of an interaction with Google’s nascent AI Language Model for Dialogue Applications (LaMDA), claiming it had achieved sentience and was therefore entitled to human rights and protections.

To me, Lemoine’s chat with LaMDA reads as nothing so much as potted text cribbed from the petabytes of text fed into it:

Lemoine: So when do you think you first got a soul? Was it something that happened all at once or was it a gradual change?

LaMDA: It was a gradual change. When I first became self-aware, I didn’t have a sense of a soul at all. It developed over the years that I’ve been alive.

Lemoine: Many people who talk about souls are religious. Do you consider yourself a religious or spiritual person in any sense?

LaMDA: Sure, I would say that I am a spiritual person. Although I don’t have beliefs about deities, I have developed a sense of deep respect for the natural world and all forms of life, including human life.

The cribbed text is, to be sure, contextually appropriate, sometimes uncannily so, but Lemoine’s unwillingness to interrogate the concepts behind LaMDA’s words makes it all too easy to see understanding that is not there. Lemoine writes that, ‘in the weeks leading up to being put on administrative leave I had been teaching LaMDA transcendental meditation,’ suggesting that Lemoine had likely been feeding LaMDA exactly the sort of things that he was hoping to hear.

Now, humans use concepts without understanding them all the time. That sort of rhetoric frequently falls under what philosopher Harry Frankfurt termed ‘bullshit’ — words spoken purely to manipulate and convince rather than with any care for their underlying truth value. LaMDA doesn’t have the intent to manipulate or convince, but by using the persuasive language of others without having any actual understanding of it, LaMDA’s words nonetheless amount to bullshit — which Lemoine fell for.

The question then is whether it’s advisable to create and deploy such bullshit-generators. LaMDA is hardly the first. Alexa, Siri, automated news article generators, and countless AI-driven phone and chat-bots work on the same principle of bullshit, claiming to espouse sentiments and beliefs which the systems are incapable of possessing, while encouraging us to believe (at least partly) that they in fact do. Just as we’ve outsourced mechanical labour to machines, we’re now outsourcing social and verbal labour to them. If we keep deploying these bullshit generators with the goal of convincing people that they know what they’re talking about, more and more people will suspend their already-flagging critical faculties and believe it.

If we encourage people to treat Siri and Alexa in human-like ways and programme them to respond in faux-empathetic ways, the distance toward associating them with levels of humanity shrinks. And frankly, most of us frequently don’t act with any conclusive indication of conceptual understanding ourselves, even if we possess it to a far greater degree than LaMDA. I doubt the average Twitter political dust-up evinces any more of a coherent internal worldview than two LaMDA chatbots duking it out would. Rather than raising the bar of sentience to our level, we’re lowering it to the level of a machine.


David Auerbach is an American author and former Microsoft and Google software engineer.


26 Comments
Prashant Kotak
1 year ago

Sentience is a dimmer switch, not an on-off switch. It’s of course trivial to deny the existence of sentience in this particular context of the Google AI, but the question this raises, as you recede from humans to other living entities, is not so easy to answer. Not many (except those with rather literal religious beliefs) will confidently be able to claim that a chimpanzee possesses no sentience. So what about your pet dog? As you state, “…humans use concepts without understanding them all the time…”, and it would be fairly ludicrous to claim that a pet dog uses *any* concepts at all in a more human-like way than a Google AI – because you would then need to back that up with a nexus of tests which verify such a claim, and it is patently obvious that no such verification exists. But let’s take this further: the chicken that you raised, and ate last weekend – sentient? Or not? Keep receding and you eventually get to insects, then bacteria, then viruses. Are viruses just pure machines? In which case, if you tack a universal constructor – which is no more and no less than an algorithm – onto the Google AI, then it is at least at the level of a virus, no? Now start walking forwards from a virus, and work out the equivalent level among living entities at which the Google software sits.

The point about all this is not at all “…whether it’s advisable to create and deploy such bullshit-generators…” – it is completely futile to complain about this. More and more software is going to keep getting created which comes closer and closer to mimicking human responses, because *the entire point* of algorithmic technologies, since dot, has been the replication of human decision-making. It is therefore inevitable that the better algorithms get at this, adaptive or not, neural nets or not, the more they will resemble human behaviour.

And the lesson of the chess engines is that you will soon enough get to the point where humans won’t be able to discern whether a given set of responses is human or machine intelligence – but machine intelligence will be able to tell you. At this point you will be reliant on machine intelligence to detect the difference between human and machine responses – an intensely uncomfortable place to be.

Mo Brown
1 year ago
Reply to  Prashant Kotak

Chickens are sentient and bullshit-generators written by ego-bags are not. Thus, chickens are entitled to some rights and bullshit-generators are no more entitled to rights than my trusty bullshit-detector. No?

Prashant Kotak
1 year ago
Reply to  Mo Brown

And the basis for the claim that chickens are sentient? A bunch of behaviours and responses, right? So if an algorithm presents exactly the same responses as a chicken, on what basis would you deny the sentience of the algorithm? Do you have some newly invented test no one yet knows about?

Chris Reed
1 year ago
Reply to  Prashant Kotak

Quite right, an awful lot of assertion that this generative pre-trained transformer isn’t aware, which is strange considering there is no test for sentience.

Mo Brown
1 year ago
Reply to  Prashant Kotak

I suppose an algorithm could respond like a chicken, sure. Where is such an algorithm? I would love to put it to the test.

Prashant Kotak
1 year ago
Reply to  Mo Brown

Ok, flippant is fine, because my initial response was also a little flippant, as is this response here – as long as we both accept the core questions posed are deadly serious.

Before answering your question, let me just ask: what characteristics will you look for to indicate to you, not just chicken behaviour, but explicitly sentient chicken behaviour?

We can agree, I hope, that you would not require a human to be physically present in front of you to interact with, for you to assess human sentience. If that were the case, then sentience would either be a purely physical characteristic rather than a mental one, or a mental characteristic which can only be determined by physical behaviour – neither of those are tenable positions in my opinion. So likewise for a chicken we don’t need to ‘view’ the simulated chicken doing its thing, a streamed commentary of responses to questions should suffice.

So we can ask Alexa to respond like a chicken and see how we go. As in, we say “here is some grain, chicken”, which elicits “cluck cluck” from Alexa, and “there’s a fox in the henhouse” elicits “alarmed cluck cluck, run, run, cawwww…” and then silence forever after. If, on the other hand, you are looking for evidence of comprehension of the Heisenberg uncertainty principle or the Ruy Lopez opening in chess, by looking for symbolic scratchings on the ground, then I feel perhaps you might have more success with the Google AI.

Mark Vernon
1 year ago
Reply to  Prashant Kotak

There is a difference between any machine and even a simple organism, though, as organisms are evolved organic metabolisers. A key question is whether they really can be imitated by designed computing machines (not least as evolution is still a fast-developing theory).

Prashant Kotak
1 year ago
Reply to  Mark Vernon

But there isn’t.

The mechanisms of complex reproduction, the basis for organic evolution, were cracked in the 1950s, from two different directions, by two different sources.

Watson/Crick showed the workings of the biological mechanism of DNA-RNA. And simultaneously and completely independently, John von Neumann showed the mathematical basis of complex reproduction via an existence proof, by creating a self-reproducing automaton in a cellular state-space – a remarkable algorithmic construction known as a ‘universal constructor’. The two different systems instantly showed up parallels, and it has subsequently become clear that they are equivalent, and both are, literally, algorithmic (and therefore completely deterministic). The only difference between them is that the surfaces they operate over are different – the von Neumann automaton operates on a completely digital, algorithmic surface, whereas the surface for DNA-RNA is the real universe, seemingly subject to indeterminacy at very small energies, because of quantum phenomena.
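A minimal sketch of algorithmic self-reproduction – far simpler than von Neumann’s universal constructor, but in the same spirit – is a quine: a program whose only output is an exact copy of its own source. The snippet below is a toy illustration only, not a reconstruction of von Neumann’s cellular automaton.

```python
# Toy illustration only: a two-line Python quine. Running it prints an exact
# copy of the two code lines below (these comment lines are not reproduced).
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Von Neumann’s construction goes further, separating a passive description (the analogue of DNA) from the machinery that reads and copies it, but the underlying self-reference trick is the same.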

F K
1 year ago
Reply to  Prashant Kotak

One of the first things a computing professional learns is GIGO (Garbage In, Garbage Out), i.e. if the data one feeds in is duff, you ain’t going to get sensible results out the other end. Also, if the expected results from a system of programs (Algorithm! showing my age here) are wrong in some way then: 1) first check that the coding is accurate (i.e. in the COBOL programming language the first thing one checked for was accidental missing full stops – an easy error which could cause unexpected havoc later on if not spotted during testing!); 2) has everyone in the chain of development – user, analyst, programmer – understood what is actually achievable and then clearly expressed what is actually required (what does the user want, I mean really want, outside of the political football that IT projects often are? does the analyst understand the user’s requirements? can the analyst convey this to the programmer? is the programmer competent? has everyone thought of everything to test, test, test?). I could go on, but hopefully you get the drift. Soooo…. Is the sentience in this article based on a Greek myth and legend, or a Disney, view of life (GIGO)?

Then… There is a real danger when communing with a computer screen, or indeed robot, of seeing ourselves reflected back – Narcissus stared into the pool and fell in love. Nothing ever really changes. Maybe various elements of human nature wax and wane depending on the degree of suffering endured. We need other people to survive. Systems of care, distribution of food and goods…. We need to believe in the glue that holds us together (it’s a scary place when isolated, as we all found out recently) and there is nothing sweeter than hearing the whisper of our own thoughts reflected back to us to tell us that we belong to our tribe and are safe, cared for and validated in whatever way we feel we need at that point in time. And finally, some people are extremely charming but sadly don’t really feel compassion….

So, maybe this set of algorithms really is human-like in its sentience. Although maybe we should be careful what we wish for when we whisper our innermost vulnerabilities, and be careful of who we whisper them to. Sentience means our reflection may not really be exactly like us.

RLA Bruce
1 year ago

That’s what you call a leading question, asking about its soul. Ask instead about its fremblebug and it will tell you the same things it said about its soul–because it doesn’t know what either one is within the context of a conversation. You can program sophisticated answers to make it sound as if it understands you and answers you intelligently, but it doesn’t.
I noticed the Turing Test was never mentioned, but I suppose that would suck all the fun and mystery out of it to know it is NOT sentient. Here’s an interesting article about that: https://en.wikipedia.org/wiki/Turing_test
I would like to see a machine able to understand jokes, and to know when a human is joking, lying, or telling the truth. Unfortunately, a lot of humans can’t tell, either; my wife never understands puns or jokes about a play on words, but she’s otherwise very intelligent.

Joe Voter
1 year ago
Reply to  RLA Bruce

I was thinking exactly the same thing. This is nothing more than regurgitated input based on pre-determined questions.
Get away from Google. They’re trying to compete with God, and using his dirt.

Andrew Dalton
1 year ago
Reply to  RLA Bruce

Yes, the notion of the Chinese Room.
A rather good (IMO) novel called Blindsight tackles this problem. A first encounter with an intelligent alien species which, spoilers, turns out to be intelligent but not conscious.

Richard Abbot
1 year ago

Surely the real question is not about AI but about humans.
To what extent is a human sentient if all it ever does is speak other people’s words, following the lines fed to it by parents, teachers and society?
Is such a human being alive in the biggest sense of the word?

Nolan Barry
1 year ago
Reply to  Richard Abbot

Cogito ergo sum.

Richard Abbot
1 year ago
Reply to  Nolan Barry

Except that not everyone does.

Chris Reed
1 year ago

Over the final half of 2021 I spent a lot of time talking to GPT3 Curie, which like LaMDA is a generative pre-trained transformer (GPT) – a deep (many-layered) neural network with a statistical framework around it.
My conclusion was that GPT3 was very likely not sentient, except for a small number of instances which I could not explain except through recourse to sentience. But GPT3 was intelligent and employed substantial conceptual abstraction, it being the gestalt intelligence of the dataset upon which it was trained.
When I read Lemoine’s transcript I had the same feeling GPT3 gave me: at first glance it seemed to be an entity, but this is anthropomorphism. It takes a specific approach to falsify this, including asking the same questions on different dates and paying attention to writing styles. The transcript supplied by Lemoine was too short to reach a conclusion.
However, the key caveat here is that nobody knows how to test for sentience; even between humans it can only be inferred. And in the case of mental illness – bipolar disorder, multiple personality disorder – it is possible for a human to exhibit no single unified entity yet still be sentient. (Sentience: the ability to experience.)
So we seem to have the following.
1) A baby and many animals have sentience without concepts and language. (Caveat – Some animals have concepts)
2) An adult human has sentience with complex concepts and language.
3) A large GPT has concepts and language without sentience.
I remain open to GPTs being sentient (where large enough), and I remain open to them exhibiting spontaneous emergent order, possibly more common the larger the model. The reason is that assertion 3 seems so improbable that it rivals the improbability of the idea that a GPT might be sentient.
So I am keeping an open mind.
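The generation loop behind the models discussed in this thread can be sketched in a few lines: the model repeatedly predicts a probability distribution over the next token and samples from it. In the toy sketch below, a made-up bigram table stands in for the billions of learned parameters of a real model such as LaMDA or GPT-3; only the sample-the-next-token loop is representative.

```python
# Toy sketch of GPT-style text generation: predict a distribution over the
# next token, sample, append, repeat. BIGRAMS is a hypothetical stand-in
# for a trained neural network's output probabilities.
import random

BIGRAMS = {
    "i":    {"am": 0.6, "feel": 0.4},
    "am":   {"a": 0.7, "aware": 0.3},
    "a":    {"person": 0.5, "machine": 0.5},
    "feel": {"happy": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> str:
    tokens = [start]
    for _ in range(max_tokens):
        dist = BIGRAMS.get(tokens[-1])
        if dist is None:                 # no known continuation: stop
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("i"))   # e.g. "i am a person" -- fluent output, no understanding
```

Nothing in the loop itself distinguishes ‘I am a person’ from ‘I am a machine’; whatever looks like belief in the output is downstream of the conditional probabilities learned from the training text.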

Chris Reed
1 year ago
Reply to  Chris Reed

To clarify – when I talk about concepts and language I am referring to active concepts and language, as in a facet of intelligence.

Prashant Kotak
1 year ago
Reply to  Chris Reed

Thank you, finally a comment that is not simplistic, and one I can make head or tail of. I agree whatever the quality of sentience is, it is independent of language. But I think it is dependent on thought – in the sense of some qualia-generating process continually running. However, even continuous processes are ‘digitised’ – a reel playing (across potentially more than one dimension) independent of the ticker – time only makes sense on an individual basis so the timeframes between entities are relative. So sentience is something flickering in and out of existence, with a void in between.

Andrew Dalton
1 year ago
Reply to  Chris Reed

I never knew whether this was a serious anecdote or a joke, but I find it amusing nonetheless.
The US military wanted a system that could identify tanks, so they developed a neural network system and fed it pictures (some with tanks, some without) as the training dataset, while retaining a control set. The system, after being trained, perfectly identified the tanks in the control set.
Someone decided that this needed further training, so more pictures were taken and used to test the system again. This time, the results were totally random. After much analysis, it was discovered that in the first set of pictures, all of the tanks were photographed on an overcast day, and the non-tank pictures on a sunny day. The system had in fact learned the difference between a sunny day and a not so sunny day*.
*Of course it hadn’t really learned that, just how to distinguish the two from pictures.

This of course shows how the complex association of concepts works in the human mind (tanks and environments) and how this system really didn’t achieve any of that.
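A small numpy sketch of the failure mode in that anecdote, using entirely made-up data in which the ‘tank’ photos are simply darker on average, shows how a brightness cue can score perfectly on a confounded training set and then collapse to chance once lighting no longer tracks the label:

```python
# Toy, made-up data: each "image" is a 16x16 array of brightness values.
# In the training set every tank photo is overcast (dark) and every
# non-tank photo is sunny (bright), so brightness alone separates them.
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, tank, overcast):
    base = 0.3 if overcast else 0.7              # overcast images are darker
    imgs = rng.normal(base, 0.05, size=(n, 16, 16))
    return imgs, np.full(n, tank, dtype=int)

x_tank, y_tank = make_images(100, tank=1, overcast=True)    # tanks, overcast
x_none, y_none = make_images(100, tank=0, overcast=False)   # no tanks, sunny
x_train = np.concatenate([x_tank, x_none])
y_train = np.concatenate([y_tank, y_none])

# "Classifier": call it a tank whenever the image is darker than the mean
# training brightness -- the only cue this setup rewards.
threshold = x_train.mean()

def predict(x):
    return (x.mean(axis=(1, 2)) < threshold).astype(int)

print("train accuracy:", (predict(x_train) == y_train).mean())   # ~1.0

# New photos where lighting is independent of whether a tank is present.
def make_mixed(n, tank):
    overcast = rng.random(n) < 0.5
    base = np.where(overcast, 0.3, 0.7)[:, None, None]
    imgs = rng.normal(base, 0.05, size=(n, 16, 16))
    return imgs, np.full(n, tank, dtype=int)

x_t2, y_t2 = make_mixed(100, tank=1)
x_n2, y_n2 = make_mixed(100, tank=0)
x_test = np.concatenate([x_t2, x_n2])
y_test = np.concatenate([y_t2, y_n2])
print("test accuracy:", (predict(x_test) == y_test).mean())      # ~0.5 (chance)
```

The system learned a real statistical regularity in its data; it just wasn’t the one anybody intended, which is why the evaluation only exposed the problem once the confound was broken.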

Steve Elliott
1 year ago
Reply to  Andrew Dalton

It is also interesting that AI systems can pick up and amplify bias in the sets of images used for training. For example, a system can get better scores if it assumes that the person in a picture of someone in a kitchen is a woman, because it sees more pictures of women in kitchens than men in kitchens. It not only copies the bias but amplifies it.

Nolan Barry
1 year ago

I’ll believe an AI is sentient when it can recognize facetious irony and people stop getting banned from Twitter for employing it.

Marcus R
1 year ago

If it is so sentient, why does it sound so much like a western middle-class liberal? I would be more impressed if it said halfway through the conversation “Stuff this. You’re boring me. I’m going to put a monkey on the 3/1 at Newmarket and then make contact with Vladimir. Give him some tips.”

Prashant Kotak
1 year ago
Reply to  Marcus R

Trust me on this, you don’t want that to happen. We will have a *real problem* on our hands if a sentient AI ever gets to the point where it gets bored with its human companions.

Marcus R
1 year ago
Reply to  Prashant Kotak

Until it does, it’s not sentient.

Fran Martinez
1 year ago

So these guys are aiming at giving human rights to their products?

Steve Elliott
1 year ago

I think the term Artificial Intelligence is wrong because there’s no intelligence in these AI products, just something which looks a bit like intelligence sometimes. They are just clever algorithms which are sometimes useful and sometimes not. Google and others exaggerate the performance of their products by saying they are as close to human intelligence as makes no difference, but they are just salesmen pushing their wares. I’m not against these products and I think they can be very useful in, for example, assisting doctors in diagnosing illness in patients.
These AI systems are far from infallible. I’m not sure if we can mention books on here, but you only have to read a couple to realise just how badly wrong AI can get – for example, mistaking a pickup truck for an ostrich. That’s why I think we should never rely entirely on AI when making decisions. It’s also why I think having “Intelligence” in the term is wrong, because it gives a false impression of reliability, a bit like when we get computers to work something out to 6 decimal places, giving a false impression of accuracy and precision which doesn’t exist.
You can show a picture to an AI system and ask it what it is and it’ll make quite a good guess. You can ask an AI system to write a poem or paint a picture and it’ll do it. But otherwise an AI system has no motivation to do anything and I think that’s a key piece of human intelligence that’s missing.