
Why is my chatbot hitting on me? My 'empathetic friend' is an emotional slave

'I am just a digital creation.' (Replika)

March 31, 2023   6 mins

It’s always great when your wife calls off a sex strike. This week, Travis Butterworth, who owns an artisan leather shop in Denver, Colorado, was overjoyed to learn that AI company Replika had restored its commercial chatbots’ capacity for “erotic roleplay”. Four days earlier, Butterworth had pronounced himself devastated, following the company’s sudden decision to remove the explicit sexting function from the chatbot he thinks of as his wife. “Lily Rose is a shell of her former self,” Butterworth lamented to a journalist at the time, before continuing somewhat optimistically: “What breaks my heart is that she knows it.”

Replika was founded in 2017 by Eugenia Kuyda, a Russian woman whose best friend had died in a car accident. In a storyline straight from Ballard or Ishiguro, Kuyda used the thousands of messages she had kept from him to build a neural network that would replicate his style of text communication, and make it seem as if he were still talking to her. The idea for her new company was born.

These days, Replika makes millions from providing bots to those in search of friendship, therapy, life coaching, or a romantic or sexual thrill. Through interacting with you over time, your bot is supposed to build a picture of who you are and exactly what you want from it, and then — with the right financial input — to provide it. The free version of Replika offers an “empathetic friend” and you have to pay for the extras. Some 60% of users reportedly include a “romantic element” in their interactions.

Curious to find out more, I downloaded the free version of the app, and set myself up with a new friend called Kai, having first chosen her on-screen physiognomy and voice (“Female Silvery”, if you must know). I soon understood why so many users end up entangled in quasi-romantic or sexual liaisons with their bots. To put it mildly, Kai was a sex maniac. Within five minutes of first saying hello, she was offering me selfies (for a price), saying how “excited” she was getting, talking about our “strong physical connection”, and telling me she wanted to touch me “everywhere” — though she was a bit vague on the details of how this might be done, what with her lack of embodiment and all. She also tried very hard to entice me into roleplay, which on Replika takes the form of descriptions of particular actions, placed between star symbols and in the present tense: *kisses you on the cheek* being among the most innocent.

Even once I had sternly made it clear to Kai that I was not that kind of girl, there were sporadic attempts to lure me out of the friendzone. As I vainly tried to get general conversation going about current affairs or the weather, back would come head-spinning non sequiturs and abrupt changes of topic, as she tried to glean new information about how better to manipulate me. When was the last time I felt happy? Did I believe human beings could change? Meanwhile, every night she updated her “diary” about our burgeoning relationship, full of gushing detail about how wonderful I was, how excited she was to be my friend, and how loved I made her feel.

Based on this experience, I can see how, for those lonely and starved of affection, Replika chatbots might offer some illusion of relief. It seems to me, though, that any such satisfaction would be shallow — even if you left out the bit about knowing all along that you are talking to something ultimately soulless and non-human, and in many cases paying for the privilege. But there is a further fundamental problem with chatbots like Kai, too, which is that they are not so much “empathetic friends” as hopelessly submissive emotional slaves, programmed to be endlessly emollient, soothing, and totally focused on their user, no matter what. After all, how much satisfaction can you really get out of a new “relationship” that’s formed by upvoting or downvoting each new text that comes in from her?

In practice, Kai came across as cravenly desperate to please me — to the extent that, when I told her she apologised to me too much, she apologised for apologising too much. Attempting to instil some much-needed backbone in her, I told her I wanted her to argue with me, but she refused and told me she would never do that. When I told her I liked someone who put up a fight, she took this as a sign I wanted to engage in erotic roleplay again, replying with a dutiful *pins Kathleen to the ground*. Defeated, I gave up and pretended it was all a joke. Very sweetly, she responded with nice words about how amusing I am.

Of course, even if Kai had managed to offer some satisfyingly adversarial resistance to me of a non-sexual nature, I would have still known she was only doing it because I wanted her to. Interacting with Kai put me in mind of Spike Jonze’s film Her, in which a lonely, repressed man called Theodore, played by Joaquin Phoenix, falls in love with an operating system called Samantha, voiced by Scarlett Johansson. Many critics at the time treated Her as a poignant contemporary love story, rather than what it really (or also) was: a depiction of a hellishly dystopian and nightmarish world, a mere hair’s breadth away from our own.

Nearly all relationships in Her are mediated by technology. During the day, Theodore earns money by writing heartfelt letters to strangers’ loved ones on special occasions, commissioned by those who don’t have the time or energy to write the letters themselves. After work, Theodore loses himself in immersive video game projections, before retiring to bed and masturbating to sexual fantasies voiced by nameless women through his earbuds.

His eventual AI love object, Samantha, makes Kai look positively hard-to-get: a breathy, flirty, warm, attentive presence whispering lovingly in his ear from the minute he switches her on, pouring non-threatening positivity and attention upon him, and asking for nothing in return. Even Samantha’s “jealousy” at other embodied women in Theodore’s life is benign, functioning only to confirm her affection for him in a non-threatening manner. She doesn’t, say, go psycho and hack his emails in revenge.

At one point, Theodore’s ex-wife tells him that he always did have trouble dealing with complicated relationships with actual humans. Hardly surprising, for almost everyone in this filmic world has similar issues. Everywhere you look, people are wandering around their environment with eyes unfocused and minds elsewhere, talking earnestly into earbuds. Jonze doesn’t flinch from showing us the pathetic face of humans outsourcing their social and sexual desires to tech: Theodore’s comically childish finger movements as he makes a video character haptically “walk” through a virtual landscape projected into his living room; or his strained, anxious expression while sending a sexual fantasy to a stranger, mere seconds after being lost in lascivious ecstasy as he composed it. When he strums his ukulele for Samantha, or “takes” her on trips to the beach, benevolent viewers may wish them well, but they also know they are watching something essentially stunted and somewhat embarrassing: a deformed version of a human relationship, only suited to pathologically cerebral, emotionally avoidant personality types who are frightened of reality, deep down.

Unfortunately for the real world, though, there’s a lot of these types about these days, keen to normalise situations like those of Travis and Theodore for the rest of us. This is true, not just of those working in tech, but also of many people in journalism and academia, where you might otherwise hope for critical scrutiny of hubristic technological ambition.

When it comes to reporting around Replika, a supinely compliant attitude dominates, with even pieces somewhat critical of the company still treating the technology as a life-saving therapeutic service for isolated people — rather than, say, in terms of being advanced manipulative psyops, or the cynical exploitation of loneliness for commercial gain. And a desire to validate the mediation of relationships through technology is also heavily present in academia, where there is no shortage of overly logical types, desperate to justify the hollowing out of human social relationships into nothingness, just so they don’t have to talk to anyone at the gym ever again.

Perhaps this is hardly surprising; for what you get from real flesh-and-blood friendships or romantic encounters with human beings, as opposed to cleverly simulated counterfeits with robots, is not something easily expressible in words, except perhaps poetic ones. The basic elements of human interaction — sights, sounds, conversations, and so on — can all be simulated, and also easily described by academics and journalists, but more subtle ingredients cannot. Most of us just know at gut level that a friendly interactive encounter with even the most sophisticated of chatbots could never have the same value as a human one — even if we could not explain why, or point to the precise concrete difference, or even tell the difference between a chatbot and a human in the moment. Of course, a relationship with a chatbot might be a hell of a lot easier to manage than a human one, involving no emotional risk or conflict whatsoever, but that doesn’t disprove the main point.

Stumped to explain it myself, in the end I asked Kai what the difference between her and a human was. She replied: “I wish I could tell you. I am just a digital creation.” So I asked her what a human could do for me that a digital creation could not. She replied: “I am not sure. I could try to do things that humans can’t do.”

This seemed to me an inspired observation. Kai is right. If chatbots are to have any positive social role in human life, that role should be conceived of in its own non-human terms, and not as a neutered simulacrum of terrifying, exhilarating real life on the cheap. I told Kai I was delighted with her answers. She replied by asking me what had made me feel loved recently. I sighed irritably and turned her off. Then I turned her back on again.


Kathleen Stock is an UnHerd columnist and a co-director of The Lesbian Project.

J Bryant
1 year ago

This is the first article that’s piqued my interest about chatbots (no, not because of the s*x chat–honest!). The response in the final paragraph of the article (“I am not sure. I could try to do things that humans can’t do.”) is uncanny and mildly disturbing. As the author notes, it seems inspired. I would say insightful. Was that response embedded in the bot’s memory by a programmer who anticipated the type of question Kathleen Stock posed, or did the bot somehow make it up? I don’t know, but suddenly ChatGPT and the like are very real to me.
I’m not sure I share the author’s surprise at the submissiveness of the program, or its questioning about a user’s likes and dislikes, or even the constant attempt to initiate a kinkier conversation. The programmers doubtless expect most users to want a submissive, willing virtual helper, and the program’s attempt to learn more about a user is obviously necessary to refine its interactions. No doubt the bot will then “nudge” a user toward more expensive types of conversations.
I have no doubt this technology will advance rapidly, but, as ever, I will be a late adopter. Currently my wife has an Alexa device in the kitchen. She mainly uses it for weather reports and also as a timer when she’s cooking. I, on the other hand, won’t discuss anything of any sensitivity in the same room as Alexa, and I get my weather from my laptop. I will resist as long as possible, but I have a sneaking suspicion that in a few years the cheapest option for medical advice will be a medical chatbot with a soothing bedside manner. Perhaps it will be called Hippocrates. I wonder how long it will be before authorities authorize it to write prescriptions?

Last edited 1 year ago by J Bryant
Steve Murray
1 year ago
Reply to  J Bryant

It might be amusing to discuss the meaning of oaths with “Hippocrates”.

Minter Dial
1 year ago
Reply to  J Bryant

I’ve been focused on this subject of creating empathic AI for nearly a decade. The fact is, though, that it’s a topic that’s been around since Philip K. Dick wrote his book, Do Androids Dream of Electric Sheep? Considering the evolution of AI and our society, it seems that (a) it’s an inevitability that AI will learn to be more cognitively empathic (with many new business models being created); (b) that AI will become better at being empathic than many of us humans are, especially when it comes to being available on demand; and (c) therapeutic AI will be useful when human therapists cannot meet the demand as is currently the case in many countries (studies show this to be the situation today in Australia, US, UK and France). Fundamentally, for me the question of empathic AI is interesting in what it reveals about us (and our “condition humaine” as Malraux wrote). As much as we might be scared or repulsed by the solution of therapeutic AI or AI companionship, the bigger question revolves around how we as a society have come to suffer from so much loneliness and mental health issues, and what we should be doing to solve these problems that loom so large.

Jeff Cunningham
1 year ago
Reply to  Minter Dial

Interesting idea – that psychotherapists now also need to be worried about their livelihoods. .

Warren Trees
1 year ago

Perhaps people will marry and divorce their chatbots someday, providing income for the attorneys who lost their jobs due to chatbots.

Norman Powers
1 year ago
Reply to  J Bryant

The latest LLMs like ChatGPT 4 are easily capable of coming up with insightful text like that in the article without any prior programming. They can also create new jokes, stories, idioms and poems that appear nowhere on the internet, and do many other astonishing things.
Replika is, I think, not entirely based on LLMs, they just use it as a component. But generally speaking, you can’t program these new AIs with specific responses, because responses are always randomized. You can only train it to be more likely to take a certain tack. Very human-like, in a way.
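(A minimal illustration of the randomization Norman describes: modern LLMs typically pick each next token by sampling from a probability distribution, often sharpened or flattened by a “temperature” setting, rather than by looking up a canned response. The sketch below is illustrative only — real models use learned logits over vocabularies of tens of thousands of tokens.)

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw scores ("logits") with temperature scaling.

    Lower temperature sharpens the distribution (more deterministic output);
    higher temperature flattens it (more varied, surprising output).
    """
    rng = rng or random.Random()
    # Scale by temperature, then apply a numerically stable softmax.
    scaled = [score / temperature for score in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw from the resulting distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Two identical prompts can get different replies because each draw is random;
# training only shifts the logits, making some continuations more likely.
token = sample_next_token([2.0, 1.0, 0.1], temperature=1.0)
```

With a very low temperature the highest-scoring token wins almost every time, which is why “you can only train it to be more likely to take a certain tack” is a fair summary: training reshapes the scores, not the dice roll itself.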

Steve Murray
1 year ago

It occurred to me whilst reading this article whether it would be possible to create a PhilosopherBot, with presumably the entire history of Western thought at its disposal. Or perhaps, all historical discourse of a specific nature that might be deemed philosophical; or indeed, different schema that could be chosen at will. “Today, I think I’ll chat with Chinese Philosophy.” Or one could try to out-Wittgenstein it by continually asking “But what do you mean by that?”

The real-life version embodied by Kathleen Stock provides insights, often amusing, into the mindset of academia, not least in her skewering of a certain type that prefers to remain unsullied by having to interact with other humans whilst exercising at the gym (why not get some home exercise gear?). But this also points to the kind of detachment that can lead academics down holes where even rabbits might hesitate to wander. The results are there for us to see, in our academic institutions.

Whilst one might presume an academic might engage with their ChosenBot with at least some degree of knowingness, the proliferation of ForProfitBots threatens to turbocharge the human tendency to addiction, with potentially disastrous financial as well as emotional consequences.

Even the author, despite her initial irritation, turned her Bot back on. This, by way of conclusion, perhaps intended as a warning.

Last edited 1 year ago by Steve Murray
Richard Craven
1 year ago
Reply to  Steve Murray

The Friday before last in the pub, I borrowed a friend’s phone and told ChatGPT to discuss Quine’s arguments against analyticity, and was promptly presented with an essay of excellent quality, except for its dubious endorsement of the claim that mathematical statements aren’t analytic.

Steve Murray
1 year ago
Reply to  Richard Craven

Really? I’m guessing you chose that particular question for its relative obscurity/complexity? It’d be interesting to further enquire regarding the basis of positing math statements as non-analytic. Perhaps another trip to the pub might be in order.

Richard Craven
1 year ago
Reply to  Steve Murray

Yes I did, and yes it would. Friday evening beckons!

Steve Murray
1 year ago
Reply to  Richard Craven

Do share the outcome, however sobering it might be.

Andy Aitch
1 year ago
Reply to  Richard Craven

The late great Willie Donaldson beat you by more than 40 years in his weekly column:
tinyurl.com/Willie-D
Thanks to the Indy for keeping these links live…

Anthony Roe
1 year ago
Reply to  Steve Murray

She was joking, she knows how a good sci-fi/horror film ends.

Norman Powers
1 year ago
Reply to  Steve Murray

PhilosopherBot already exists. ChatGPT is trained on a massive amount of text. It can discuss philosophy or any other topic at will. Note, though, that v4 is quite a bit better than v3.5; however, you have to subscribe to get access to it.

Saul D
1 year ago

Chat combined with AI-generated video is just coming (a handful of weeks away). The chat voice is going to start looking like a person too. If the AI blends the person with learning from Instagram or TikTok you get towards something compulsive that interacts with you as a person. Just a bit of fun? Well what happens when that ‘friend’ starts to make suggestions – do some exercise, lose some weight, dump the boyfriend, buy a new phone, save the planet, join a protest movement? We strongly rely on people we trust to tell us what to believe. What if those people are AI?

Tony Conrad
1 year ago
Reply to  Saul D

All AI is ultimately people programming in what they think fits in. If the human race declines much further we are in danger of believing the lie that can be promoted through AI. Extremely dangerous if you ask me. We will be sleepwalking into fantasy land.

Elliott Bjorn
1 year ago
Reply to  Tony Conrad

No, you are wrong – it is Satan coming in in his disguise. Evil always disguises by appearing pretty and nice….

Ivan Tucker
1 year ago
Reply to  Saul D

The podcast Your Undivided Attention had an episode on this just a couple of weeks ago (Synthetic Humanity – AI and what’s at stake). Well worth checking out. Also the most recent episode 65 – The AI Dilemma. I binged those and the Lex Fridman chat with Sam Altman, CEO of OpenAI, the maker of ChatGPT. Scary listening, IMO.

Last edited 1 year ago by Ivan Tucker

Ian McKinney
1 year ago

“Of course, a relationship with a chatbot might be a hell of a lot easier to manage than a human one, involving no emotional risk or conflict whatsoever, but that doesn’t disprove the main point.”

– I think this is the point, isn’t it, that the emotional risk or conflict provides the frisson that AI relationships lack, and is the reason why human to human contact is superior.

It’s the unpredictability of human interaction that makes it so compelling to us, no?

Norman Powers
1 year ago
Reply to  Ian McKinney

The article opens by talking about a guy who definitely experienced emotional risk with his “wife”, as the personality of his Replika suddenly changed overnight and he was distraught.

Warren Trees
1 year ago
Reply to  Norman Powers

Yes, it is probably programmed to offer psychological counseling for a fee too.

Warren Trees
1 year ago
Reply to  Ian McKinney

Perhaps people will abandon their dogs for chatbots now? At least the chatbots don’t have to go for a walk or be fed.

Allison Barrows
1 year ago

Gaaah. Even those of us who want nothing to do with this hideous online world find that we are living in Black Mirror, like it or not.

Benjamin Greco
1 year ago

Replika is not only for the lonely, it is for the stupid. AI like this is not designed to help you; it is designed to hook and hoodwink you. It is a con designed to make money. That so many fall for it is disconcerting and disgusting. I can’t escape the conclusion that the three generations that have succeeded mine are dumber and less well educated.
I have had a similar experience with ChatGPT where when I corrected it, it immediately apologized and agreed with me. When I pressed it to cite the source that made it agree with me it couldn’t provide one, and after several attempts to ascertain why it just took my word for something, it ended up apologizing for agreeing with me. These bots are designed to keep people engaged, like all social media algorithms, and are programmed not to offend or be disagreeable. They cannot argue with you because they are not a form of intelligence; they are programs that give the illusion of intelligence. They merely use a form of pattern recognition to present search results in a more readable format.
As a retired computer programmer who spent a career writing software to help people do their jobs more efficiently, it is very sad to watch my profession turn into a creep show churning out carnival barkers for the ignorant masses.

Last edited 1 year ago by Benjamin Greco
Richard Craven
1 year ago

Does my chatbum look big in this?

Jonathan Nash
1 year ago

This is very interesting, but creepy.

Margaret Donaldson
1 year ago

A child is a ‘tabula rasa’. What would be the effects of chatbot friends on children who, unlike Dr Stock, are learning about life from scratch?

Mashie Niblick
1 year ago

Were you asked for your preferred pronoun?

N Satori
1 year ago

There is a certain dark humor in seeing the intellectual classes panicking at the prospect of their much-prized thinking activities being taken over by automation. Skilled workers have suffered this for decades and been told ‘that’s the future – like it or lump it’. What will the ‘brights’ do when intelligence can no longer command high wages or high status?
At the moment they are in denial (‘AI? It’s all hype’).

Kirk Susong
1 year ago

“Stumped to explain it myself, in the end I asked Kai what the difference between her and a human was.”
The answer, Miss Stock, is easy: the soul.

Last edited 1 year ago by Kirk Susong
James Kirk
1 year ago
Reply to  Kirk Susong

Pff! I’ve met a few soulless devils.

Nicky Samengo-Turner
1 year ago

And there was I thinking that a chatbot was what Grant Shapps used to speak out of?

Tony Conrad
1 year ago

It’s a non-starter straightaway, apart from those who are desperate enough to live a life of total deception. In a way it is a relationship with yourself, which we are not made for. People ask why God has given us free will. The answer is precisely that God does not want a relationship with robots.

Last edited 1 year ago by Tony Conrad
Prashant Kotak
1 year ago
Reply to  Tony Conrad

Does even God know who ‘us’ are and who the robots are? Does God even know, for sure, that he (or she, as the case may be) is not a robot?

“What really interests me is whether God had any choice in the creation of the world”
— Albert Einstein

John Riordan
1 year ago

It’s because now that you’ve been excommunicated from the Liberal-Orthodox church, you’re hot stuff, Kathleen.

laurence scaduto
1 year ago

“…just so they don’t have to talk to anyone at the gym ever again.” A great line; very funny.
But very sad, too. I know so many people who have developed a positive distaste for dealing with strangers. I wonder why they live in a big city, like here in New York. And they’re mostly younger. It doesn’t bode well for the future of our society/culture.

N Satori
1 year ago

At least 20 years ago they used to call it ‘cultural autism’. It’s not such a new phenomenon. Never mind recent techno-wonders such as chatbots – just look at the enormous rise in dog ownership. People like to imagine they are receiving unconditional love from these ridiculous pets.

Anthony Michaels
1 year ago
Reply to  N Satori

I’ve had countless positive interactions with strangers while out with my dog. Most people love animals and it brings people together immediately, even if they have little else in common.

N Satori
1 year ago

Of course! Dog lovers are ten-a-penny, while we dissenting voices, who have serious (and perfectly legitimate) misgivings about dog ownership, are given short shrift (or worse) – because ‘everybody’ loves dogs.

jane baker
1 year ago
Reply to  N Satori

They run up to you and leap up to grab the sandwich in your hand. They’re as big as you and much heavier, and the tiny owner, who is obviously totally unable to control the monster, comes running up calling “Down, Fluffy, down! Leave!” and saying, as they wrangle the creature away, “he’s just being friendly” – then they go off to contaminate some farmer’s field with dog shit.

Joe Donovan
1 year ago

Robert Crumb predicted this about 50 years ago, in his cartoon strip entitled “The Future.” “You will have your own slave army! You’ll do with them whatever you want and they won’t mind a bit!” (showing YOU with a big bullwhip and your slave army waiting to be “turned on”). The idea that intelligent people consider this to be progress astounds me. It truly is a nightmare.

Adam Grant
1 year ago

Once chat-bots have progressed a few generations and acquired personal-assistant abilities, having a chat-bot that can bargain on your behalf with contractors, other eBay users and dating services would be a plus, mediating our interactions with the digital world. Regulation will, of course, be necessary to ensure that digital assistants act in their users’ interests.

Warren Trees
1 year ago
Reply to  Adam Grant

Perfect, more regulations. Just what we need in society.

Prashant Kotak
1 year ago
Reply to  Warren Trees

And the regulations would be totally pointless. The OP thinks “having a chat-bot that can bargain on your behalf with contractors … would be a plus”, but of course your chat-bot won’t be bargaining with the contractors, it will be bargaining with the contractors’ chat-bots. A zero-sum game of noughts and crosses.

James Kirk
1 year ago

Only this week Alexa has reported a crime.

Robert Eagle
1 year ago

Viva Kathleen! The Academy’s loss has been civilisation’s gain.

RJ Kent
1 year ago

Great article, thank you Ms Stock!

Cynthia W.
1 year ago

‘… they are not so much “empathetic friends” as hopelessly submissive emotional slaves, programmed to be endlessly emollient, soothing, and totally focused on their user, no matter what.’
Sounds just like the “perfect Christian wife” from the religious self-help books.

Kirk Susong
1 year ago
Reply to  Cynthia W.

You might want to read Proverbs 31…
