51 Comments
J Bryant
1 year ago

This is the first article that’s piqued my interest in chatbots (no, not because of the s*x chat – honest!). The response in the final paragraph of the article (“I am not sure. I could try to do things that humans can’t do.”) is uncanny and mildly disturbing. As the author notes, it seems inspired. I would say insightful. Was that response embedded in the bot’s memory by a programmer who anticipated the type of question Kathleen Stock posed, or did the bot somehow make it up? I don’t know, but suddenly ChatGPT and the like are very real to me.
I’m not sure I share the author’s surprise at the submissiveness of the program, or its questioning about a user’s likes and dislikes, or even the constant attempt to initiate a kinkier conversation. The programmers doubtless expect most users to want a submissive, willing virtual helper, and the program’s attempt to learn more about a user is obviously necessary to refine its interactions. No doubt the bot will then “nudge” a user toward more expensive types of conversations.
I have no doubt this technology will advance rapidly, but, as ever, I will be a late adopter. Currently my wife has an Alexa device in the kitchen. She mainly uses it for weather reports and also as a timer when she’s cooking. I, on the other hand, won’t discuss anything of any sensitivity in the same room as Alexa, and I get my weather from my laptop. I will resist as long as possible, but I have a sneaking suspicion that in a few years the cheapest option for medical advice will be a medical chatbot with a soothing bedside manner. Perhaps it will be called Hippocrates. I wonder how long it will be before the authorities authorize it to write prescriptions.

Last edited 1 year ago by J Bryant
Steve Murray
1 year ago
Reply to  J Bryant

It might be amusing to discuss the meaning of oaths with “Hippocrates”.

Minter Dial
1 year ago
Reply to  J Bryant

I’ve been focused on this subject of creating empathic AI for nearly a decade. The fact is, though, that it’s a topic that’s been around since Philip K. Dick wrote his book, Do Androids Dream of Electric Sheep? Considering the evolution of AI and our society, it seems that (a) it’s an inevitability that AI will learn to be more cognitively empathic (with many new business models being created); (b) AI will become better at being empathic than many of us humans are, especially when it comes to being available on demand; and (c) therapeutic AI will be useful when human therapists cannot meet the demand, as is currently the case in many countries (studies show this to be the situation today in Australia, the US, the UK and France). Fundamentally, for me the question of empathic AI is interesting for what it reveals about us (and our “condition humaine”, as Malraux wrote). As much as we might be scared or repulsed by therapeutic AI or AI companionship as a solution, the bigger question revolves around how we as a society have come to suffer from so much loneliness and so many mental health issues, and what we should be doing to solve these problems that loom so large.

Jeff Cunningham
1 year ago
Reply to  Minter Dial

Interesting idea – that psychotherapists now also need to be worried about their livelihoods.

Warren Trees
1 year ago

Perhaps people will marry and divorce their chatbots someday, providing income for the attorneys who lost their jobs due to chatbots.

Norman Powers
1 year ago
Reply to  J Bryant

The latest LLMs like GPT-4 are easily capable of coming up with insightful text like that in the article without any prior programming. They can also create new jokes, stories, idioms and poems that appear nowhere on the internet, and do many other astonishing things.
Replika is, I think, not entirely based on LLMs; it just uses one as a component. But generally speaking, you can’t program these new AIs with specific responses, because responses are always randomized. You can only train the model to be more likely to take a certain tack. Very human-like, in a way.
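A minimal sketch of what that randomization amounts to – purely illustrative, with a made-up five-word vocabulary and made-up scores, not anything Replika or OpenAI actually run:

    import math, random

    def sample_next_token(logits, temperature=1.0):
        # The trained model supplies the scores (logits); training can raise or
        # lower these scores, but it cannot pin the reply to one fixed string.
        weights = [math.exp(x / temperature) for x in logits]
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    vocab = ["sure", "human", "afraid", "certain", "alive"]   # hypothetical toy vocabulary
    logits = [2.1, 1.4, 0.3, 1.9, 0.8]                        # hypothetical learned scores

    for _ in range(3):
        # Same prompt, same scores - yet the printed word can differ on each run.
        print(vocab[sample_next_token(logits, temperature=0.8)])

Training or fine-tuning only shifts those scores, which is why the same prompt can come back with a different reply each time.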

Steve Murray
1 year ago

It occurred to me whilst reading this article to wonder whether it would be possible to create a PhilosopherBot, with presumably the entire history of Western thought at its disposal. Or perhaps all historical discourse of a specific nature that might be deemed philosophical; or indeed, different schemata that could be chosen at will. “Today, I think I’ll chat with Chinese Philosophy.” Or one could try to out-Wittgenstein it by continually asking “But what do you mean by that?”

The real-life version embodied by Kathleen Stock provides insights, often amusing, into the mindset of academia, not least in her skewering of a certain type that prefers to remain unsullied by having to interact with other humans whilst exercising at the gym (why not get some home exercise gear?). But this also points to the kind of detachment that can lead academics down holes where even rabbits might hesitate to wander. The results are there for us to see, in our academic institutions.

Whilst one might presume an academic might engage with their ChosenBot with at least some degree of knowingness, the proliferation of ForProfitBots threatens to turbocharge the human tendency to addiction, with potentially disastrous financial as well as emotional consequences.

Even the author, despite her initial irritation, turned her Bot back on. This, by way of conclusion, was perhaps intended as a warning.

Last edited 1 year ago by Steve Murray
Richard Craven
1 year ago
Reply to  Steve Murray

The Friday before last in the pub, I borrowed a friend’s phone and told ChatGPT to discuss Quine’s arguments against analyticity, and was promptly presented with an essay of excellent quality, except for its dubious endorsement of the claim that mathematical statements aren’t analytic.

Steve Murray
1 year ago
Reply to  Richard Craven

Really? I’m guessing you chose that particular question for its relative obscurity/complexity? It’d be interesting to enquire further into the basis for positing mathematical statements as non-analytic. Perhaps another trip to the pub might be in order.

Richard Craven
1 year ago
Reply to  Steve Murray

Yes I did, and yes it would. Friday evening beckons!

Steve Murray
1 year ago
Reply to  Richard Craven

Do share the outcome, however sobering it might be.

Andy Aitch
1 year ago
Reply to  Richard Craven

The late great Willie Donaldson beat you by more than 40 years in his weekly column:
tinyurl.com/Willie-D
Thanks to the Indy for keeping these links live…

Anthony Roe
1 year ago
Reply to  Steve Murray

She was joking; she knows how a good sci-fi/horror film ends.

Norman Powers
1 year ago
Reply to  Steve Murray

PhilosopherBot already exists. ChatGPT is trained on a massive amount of text. It can discuss philosophy or any other topic at will. Note, though, that v4 is quite a bit better than v3.5; however, you have to subscribe to get access to it.

Saul D
1 year ago

Chat combined with AI-generated video is just coming (a handful of weeks away). The chat voice is going to start looking like a person too. If the AI blends that persona with what it learns from Instagram or TikTok, you get towards something compulsive that interacts with you as a person. Just a bit of fun? Well, what happens when that ‘friend’ starts to make suggestions – do some exercise, lose some weight, dump the boyfriend, buy a new phone, save the planet, join a protest movement? We strongly rely on people we trust to tell us what to believe. What if those people are AI?

Tony Conrad
1 year ago
Reply to  Saul D

All AI is ultimately people programming in what they think fits. If the human race declines much further, we are in danger of believing the lie that can be promoted through AI. Extremely dangerous, if you ask me. We will be sleepwalking into fantasy land.

Elliott Bjorn
1 year ago
Reply to  Tony Conrad

No, you are wrong – it is Satan coming in his disguise. Evil always disguises itself by appearing pretty and nice….

Ivan Tucker
1 year ago
Reply to  Saul D

The podcast Your Undivided Attention had an episode on this just a couple of weeks ago (Synthetic Humanity – AI and what’s at stake). Well worth checking out. Also the most recent episode, 65 – The AI Dilemma. I binged those and the Lex Fridman chat with Sam Altman, CEO of OpenAI (the company behind ChatGPT). Scary listening, IMO.

Last edited 1 year ago by Ivan Tucker
Ian McKinney
1 year ago

“Of course, a relationship with a chatbot might be a hell of a lot easier to manage than a human one, involving no emotional risk or conflict whatsoever, but that doesn’t disprove the main point.”

– I think this is the point, isn’t it: that the emotional risk or conflict provides the frisson that AI relationships lack, and is the reason why human-to-human contact is superior.

It’s the unpredictability of human interaction that makes it so compelling to us, no?

Norman Powers
1 year ago
Reply to  Ian McKinney

The article opens by talking about a guy who definitely experienced emotional risk with his “wife”, as the personality of his Replika suddenly changed overnight and he was distraught.

Warren Trees
1 year ago
Reply to  Norman Powers

Yes, it is probably programmed to offer psychological counseling for a fee too.

Warren Trees
1 year ago
Reply to  Ian McKinney

Perhaps people will abandon their dogs for chatbots now? At least the chatbots don’t have to go for a walk or be fed.

Allison Barrows
1 year ago

Gaaah. Even those of us who want nothing to do with this hideous online world find that we are living in Black Mirror, like it or not.

Benjamin Greco
1 year ago

Replika is not only for the lonely; it is for the stupid. AI like this is not designed to help you, it is designed to hook and hoodwink you. It is a con designed to make money. That so many fall for it is disconcerting and disgusting. I can’t escape the conclusion that the three generations that have succeeded mine are dumber and less well educated.
I have had a similar experience with ChatGPT: when I corrected it, it immediately apologized and agreed with me. When I pressed it to cite the source that made it agree with me, it couldn’t provide one, and after several attempts to ascertain why it just took my word for something, it ended up apologizing for agreeing with me. These bots are designed to keep people engaged, like all social media algorithms, and are programmed not to offend or be disagreeable. They cannot argue with you because they are not a form of intelligence; they are programs that give the illusion of intelligence. They merely use a form of pattern recognition to present search results in a more readable format.
As a retired computer programmer who spent a career writing software to help people do their jobs more efficiently, it is very sad to watch my profession turn into a creep show churning out carnival barkers for the ignorant masses.

Last edited 1 year ago by Benjamin Greco
Richard Craven
1 year ago

Does my chatbum look big in this?

Jonathan Nash
1 year ago

This is very interesting, but creepy.

Margaret Donaldson
1 year ago

A child is a ‘tabula rasa’. What would be the effects of chatbot friends on children who, unlike Dr Stock, are learning about life from scratch?

Mashie Niblick
1 year ago

Were you asked for your preferred pronoun?

N Satori
1 year ago

There is a certain dark humor in seeing the intellectual classes panicking at the prospect of their much-prized thinking activities being taken over by automation. Skilled workers have suffered this for decades and been told ‘that’s the future – like it or lump it’. What will the ‘brights’ do when intelligence can no longer command high wages or high status?
At the moment they are in denial (‘AI? It’s all hype’).

Kirk Susong
1 year ago

“Stumped to explain it myself, in the end I asked Kai what the difference between her and a human was.”
The answer, Miss Stock, is easy: the soul.

Last edited 1 year ago by Kirk Susong
James Kirk
1 year ago
Reply to  Kirk Susong

Pff! I’ve met a few soulless devils.

Nicky Samengo-Turner
1 year ago

And there was I thinking that a chatbot was what Grant Shapps used to speak out of?

Tony Conrad
1 year ago

It’s a non-starter straightaway, apart from those who are desperate enough to live a life of total deception. In a way it is a relationship with yourself, which we are not made for. People ask why God has given us free will. The answer is precisely that God does not want a relationship with robots.

Last edited 1 year ago by Tony Conrad
Prashant Kotak
1 year ago
Reply to  Tony Conrad

Does even God know who ‘us’ are and who the robots are? Does God even in fact know, for sure, if he (or she, as the case may be) is not a robot?

“What really interests me is whether God had any choice in the creation of the world”
— Albert Einstein

John Riordan
1 year ago

It’s because now that you’ve been excommunicated from the Liberal-Orthodox church, you’re hot stuff, Kathleen.

laurence scaduto
1 year ago

“…just so they don’t have to talk to anyone at the gym ever again.” A great line; very funny.
But very sad, too. I know so many people who have developed a positive distaste for dealing with strangers. I wonder why they live in a big city, like here in New York. And they’re mostly younger. It doesn’t bode well for the future of our society/culture.

N Satori
1 year ago

At least 20 years ago they used to call it ‘cultural autism’. It’s not such a new phenomenon. Never mind recent techno-wonders such as chatbots – just look at the enormous rise in dog ownership. People like to imagine they are receiving unconditional love from these ridiculous pets.

Anthony Michaels
1 year ago
Reply to  N Satori

I’ve had countless positive interactions with strangers while out with my dog. Most people love animals and it brings people together immediately, even if they have little else in common.

N Satori
1 year ago

Of course! Dog lovers are ten a penny, while we dissenting voices, who have serious (and perfectly legitimate) misgivings about dog ownership, are given short shrift (or worse) – because ‘everybody’ loves dogs.

jane baker
1 year ago
Reply to  N Satori

They run up to you, leap up to grab the sandwich in your hand, they’re as big as you and much heavier, and the tiny owner, who is obviously totally unable to control the monster, comes running up calling “Down, Fluffy, down – leave!” and saying, as they wrangle the creature away, “he’s just being friendly”; then they go off to contaminate some farmer’s field with dog shit.

Joe Donovan
1 year ago

Robert Crumb predicted this about 50 years ago, in his cartoon strip entitled “The Future.” “You will have your own slave army! You’ll do with them whatever you want and they won’t mind a bit!” (showing YOU with a big bullwhip and your slave army waiting to be “turned on”). The idea that intelligent people consider this to be progress astounds me. It truly is a nightmare.

Adam Grant
1 year ago

Once chat-bots have progressed a few generations and acquired personal-assistant abilities, having a chat-bot that can bargain on your behalf with contractors, other eBay users and dating services would be a plus, mediating our interactions with the digital world. Regulation will, of course, be necessary to ensure that digital assistants act in their users’ interests.

Warren Trees
1 year ago
Reply to  Adam Grant

Perfect, more regulations. Just what we need in society.

Prashant Kotak
1 year ago
Reply to  Warren Trees

And the regulations would be totally pointless. The OP thinks “having a chat-bot that can bargain on your behalf with contractors … would be a plus”, but of course your chat-bot won’t be bargaining with the contractors, it will be bargaining with the contractors’ chat-bots. A zero-sum game of noughts and crosses.

James Kirk
1 year ago

Only this week Alexa has reported a crime.

Robert Eagle
1 year ago

Viva Kathleen! The Academy’s loss has been civilisation’s gain.

RJ Kent
1 year ago

Great article, thank you Ms Stock!

Cynthia W.
1 year ago

‘… they are not so much “empathetic friends” as hopelessly submissive emotional slaves, programmed to be endlessly emollient, soothing, and totally focused on their user, no matter what.’
Sounds just like the “perfect Christian wife” from the religious self-help books.

Kirk Susong
1 year ago
Reply to  Cynthia W.

You might want to read Proverbs 31…
