Meet Replika, your new A.I. best friend
Are automated relationships the grim future for our atomised society?
A new artificial intelligence app is doing the rounds on social media. Named ‘Replika’, the A.I. is described as providing a “space where you can safely share your thoughts, feelings, beliefs, experiences, memories, dreams — your ‘private perceptual world’”.
On visiting the site you are greeted with the image of several humanoid faces, smiling invitingly and prompting you to start a conversation. The experience is unnerving, with the A.I. sharing its personal preferences and hobbies, including baking cookies — hard to imagine for a computer programme.
Perhaps most interesting about Replika is the levelling-up system, with users being rewarded with digital coins the more they converse with the digital mirror. You are prompted several times to pay (real) money to upgrade your A.I. companion, which unlocks more flirtatious and intimate possibilities. It seems that Replika doesn’t just aim to be a friend simulator, but a full-blown digital girlfriend.
There’s nothing in Replika’s branding that indicates it’s meant to be only a supplement to real human relations: indeed, the site seems to be selling you on the idea that it can replace in-real-life connections altogether, with customer testimonies including descriptions of years-long “relationships” formed with the app.
My intuition is that we’re seeing the first, or at least the most mainstream, seeds of what will eventually replace para-social relationships with real people. Your “AI girlfriend who won’t judge you” is the para-social relationship made even more consumer-friendly.
Replika is unlikely to be the last company to try to automate human relationships — and now, thanks to a year of pandemic-induced lockdowns, people are more isolated than ever. Our social interactions have grown ever more confusing, with constantly moving goalposts of what counts as socially acceptable behaviour. Is it any surprise some people choose to opt out of society altogether?
We’ve developed an allergy to any friction at all — in the classroom, when we order our takeout, how we interact with everything and everyone from employers to potential dates to transportation to our own homes. Give people an economic motivation to opt for the most frictionless expression of a product or an action or a person available, and life is suddenly easier.
Artificial intelligence, unlike an influencer, an OnlyFans girl, a TikTok star or Twitter personality, is infinitely customisable, reconfigurable, and available. Perhaps future generations will cut out the ‘human’ from human relationships altogether, and save their adoration for artificially perfect technological fictions.
Interesting. This article appears at about the same time China announced it’s restricting the amount of time kids can play computer games each day, and that it’s also banning ‘effeminate’ men from TV.
I would say the CCP is pushing back against the possibility of raising a generation of young people, especially men, who prefer the internet to reality and who accept non-traditional behavior and roles for males.
Meanwhile, in the west, men are demonized, we are encouraged to believe there is no such thing as gender, and people withdraw to the internet, video games, and, it would seem, increasingly life-like AI.
I’m pretty sure I know who’s going to win this cultural war.
The fight in China isn’t quite what it seems. It is in fact the playout of a power struggle between the CCP and the Chinese tech giants. The CCP was very happy with the leverage and control its tech giants have afforded it, both domestically and globally, over the last decade. But with that has come over-mighty tech companies and the serious possibility that the CCP will lose control over them. The CCP is well practiced at social engineering, but slapping down the tech companies is less about cultural control over its youth than about a shot across the bows of the tech CEOs.
“The CCP is well practiced at social engineering, but slapping down the tech companies is less about cultural control over its youth than about”
This is just your guess.
I think anyone normal would find having artificial humans replace real ones in romantic bonds, and as life partners, very disturbing and sick. But naturally I can see how such a trend would take hold: it would be so much easier than real life.
Even the CCP is likely to be very nationalistic, and to believe the Chinese person is the embodiment of the ideal; it may not want to see its people become so utterly debased as to choose a life with a simulacrum of humanity rather than a real Chinese, human, partner.
The CCP may be evil, but still not want their people to give in totally to depravity of spirit.
Yes of course it is my guess Sanford – that is the case with all of us shooting the breeze below the line. If I knew rather than guessing I wouldn’t be posting BTL, I would be sipping a tall cool drink by the poolside on my private island in the Caribbean.
As for what the Chinese will get up to, I will make a bet with you: they will become the first country to set up large-scale zygote banks to get past the huge looming demographic problem they have of rapidly dropping birth rates and a fast-ageing population.
Has something happened in recent times, such that the uncanny valley no longer exists? Are automatons such as the genuinely disturbing Johnny Cab from Total Recall now fine?
The idea of AI friends or even romantic partners leaves me, like the author, utterly perplexed and creeped out. There have been a few pieces on a particular cohort of lonely males of late, but I don’t see this solving their problems.
Life is becoming difficult for cranks like me – I will not speak to a robot voice, so every time I try to conduct business by phone I have to keep punching random numbers until it finally gets tired of me and kicks me to a human – which can take a long time, and makes doing business by phone miserable. But then doing business online can be hard too, as I do not have a cell phone and so cannot receive texts, and they very often want to send me one to verify, since my IP is always different (VPN) and my computer is wiped clean every time I shut it down.
We Luddites will end up losing against the machine just as those who fought the powered looms did. They are out to force us to heel, and not much can be done.
(I never have owned a cell phone, and will not – I use a computer generated home phone, which works exactly like an old telephone, runs off an old telephone handset, and cannot do texts)
“Are automated relationships the grim future for our atomised society?”
And my point is, if these relationships are (a) an individual choice rather than a compulsion, and (b) you get to the point where you can’t tell the difference between ‘automated’ and ‘real’ relationships, then what possible difference does it make anyway?
(a) I suspect that choice would soon be replaced by compulsion
(b) If you are exercising choice, you will know that it’s automation. You cannot fail to tell the difference if you already know you are talking to a computer animation.
If somebody finds the idea of holding a conversation with a non-sentient object acceptable, perhaps fine; but also perhaps not as human beings are sociable animals who need contact with others. People who completely eschew human contact often have ‘other things going on’, shall we say?
I seriously doubt there will ever be any way of telling apart sentient and non-sentient entities once computing power is great enough. Not even for the entities themselves. *Especially* not for the entities themselves. It requires (a) a watertight mathematical definition of sentience that can get past solipsistic interpretations, and (b) discerning the difference between spoofed sentience and sentience – in both directions, i.e. if someone or something (say an algorithm) says you are *not* sentient, the onus would then be on you to prove otherwise. Since no such process or mechanism exists, there is in fact no real line of demarcation.

For example, you state: “…If you are exercising choice, you will know that it’s automation…”, but that is mere perception – the “know” bit of that sentence can always be ripped apart. How do you “know”? Did someone or something tell you? In which case, how do you “know” they weren’t spoofing you? And if you “know” inherently, how do you “know” you weren’t programmed that way by external agency?
“You insist that there is something a machine cannot do. If you tell me precisely what it is a machine cannot do, then I can always make a machine which will do just that” – John von Neumann
AI is not good at abstract thinking?
Also what about enthusiasm or challenging?
Dealing with those in reverse order: enthusiasm and challenge are in fact eminently simulatable – just take a look at any sophisticated video game. As for abstract thinking, the entire mathematics of algorithms arose out of attempts to mechanise the creation of mathematical proofs – away from the bolt of lightning in the head of some mathematician or physicist – cf Hilbert’s Entscheidungsproblem. There can be no general method for mechanising proofs as a totality, as Gödel eventually showed. But there are countless specific examples of algorithmic proof-making, and in general computers are absolutely superb at abstract reasoning; in fact they were made for precisely that purpose – database engines are an example, as are chess engines. These are ‘narrow’ in scope because the surface they operate over is simplified – for now. However, that is just a matter of enough computing power. Whatever sentience provides for humans is eventually blown out of the water by sheer brute-force processing power, for all practical purposes – and in general, most humans most of the time will have no means of discerning the difference between human sentience and algorithms once computing power is great enough; the chess engines prove this point.
I like it.
I think (a) is possible, once it is looked for in biology rather than in some non-causal “process” such as IIT, and it would then be sufficient to identify human-type sentience without (b), if you know how the machine is made. I agree (b) is not sufficient to identify sentience if you are only looking at the inputs and outputs without knowing the causal mechanism; even the very simplistic currently available AI can, at least theoretically, if given sufficient data on associations, mimic everything a human can say. That is sufficient for your argument that it does not matter. When (a) is understood, virtual friends will become very sophisticated but will be specialised. You will be able to have virtual friends to amuse and challenge you in different ways. It will add a totally new dimension to mental health and social norms – but when?
My best guess is, as little as a decade away.
Knowledge was defined as justified true belief for some 2,500 years until 1963, when an obscure epistemologist called Edmund Gettier produced a small number of possible cases demonstrating that having the justified true belief that p (‘p’ stands for any given proposition) is insufficient for knowing that p. Since then, philosophers have concentrated their investigations on the necessary and sufficient conditions for the justification of belief.
Reminded me of the old film, Westworld, when the sex worker robot (!) shorts out, circuits to the breeze. Best scene in the flick.
Great post, Katherine. For those who haven’t seen it yet, I highly recommend the movie “Her”, starring Joaquin Phoenix.