Mary – I am amazed you left out sex robots. When the internet was just something weird nerds did (I was there in the early years, with my Amstrad and 5 1/4” floppy disk, no mouse – but DOS, haha), the very first thing on the internet was – pictures of nude women. It took 15 minutes for a picture to load on your screen – slowly, from the top of the head, it would appear line by line… haha… it was a trip…
Anyway – the postmodernist mission to destroy the family is about halfway there now, so soon it will be time for people to begin making their commitment to their Love Robot. Just a small ceremony on Meta, with their odd friends and their robots… cake, champagne – good stuff – Moët bought as an NFT from the Meta NFT store – the virtual bottle opened and served, and the NFT now just an empty to be thrown away after the guests leave. Maybe a couple of your ‘Furries’ friends will bring their partners – be cool, yet meaningful.
Back at the old Podpartment with the gaming console front and center as it should be – the headsets, the lower set, the different kinds of prescribed gaming drugs depending on what you have planned, the cold pizza and Diet Coke set out ready for break time if you are a traditionalist… AND your committed Love Robot/doll all charged up, and off you go – that is the future of robots…
You weren’t raised on Tomorrow’s World: The TV programme that got every one of its predictions about our technological future, utterly and horribly wrong.
Will love robot relationships be polygamous?
At your own risk. Hell hath no fury like a robot scorned.
At your own risk – hell hath no fury like a Japanese Tech CEO scorned, especially one with an Excel spreadsheet tracking every convulsion of your body in real time.
I have noticed that commenters on here often bring in postmodernism when it has no relevance, and show no understanding of what it is. What is pushing this relentless move towards mechanisation and dehumanisation? Is it woke lefty academics with their ‘postmodernism’? No, I don’t think so – for one thing, so-called postmodernism is very much old hat. I think it is more likely to be unrestrained global capitalism. The reference to Amazon gives that away.
It is not just the factory floor that places human beings at the beck and call of mechanisation. Increasingly we do what the machines tell us – in the car, crossing the road, delivering products, filing forms, fulfilling orders, choosing what to read or watch. We are the robots for the machine.
And if you want to see how spectacularly AI is moving into human fields, have a look at generative AI – models such as DALL-E or Stable Diffusion that produce pictures from phrases. What happens when AI starts to out-create us?
Exactly. Every government or corporate bureaucracy works tirelessly to effectively mechanize its employees and ensure total compliance with the policy set at the top. Policies become ever more detailed and prescriptive by the day. It’s driven by the need for efficiencies, but even more by the need to ensure compliance with law and with expanding ideological behaviour codes. The end goal is the department of motor vehicles. So never mind the robots attempting to simulate humans – the humans are all being turned into robots. Except for the tiny group of elites who pull the levers. And if anyone finds that distressing, have some more ‘Soma’ (alcohol, opioids, legalized marijuana…).
And what should concern us is not so much whether machines will become sentient (they won’t), as what the effects of ever greater mechanisation will be on humans.
I certainly hope and pray the author is correct, but in all honesty I think the author is on pretty shaky ground when claiming “they won’t” – though it would take a long and complicated essay to show why. The grand-daddy of the argument that human sentience is generated by processes fundamentally different from algorithmic ones, or even from any currently understood physics, is our most eminent mathematician-physicist and Nobel prize winner, Sir Roger Penrose. The Penrosian counter-arguments make no sense unless you first grasp Gödel’s incompleteness theorems and the nature of the Halting Problem in computing, and in this context people can do no better than head for Penrose’s beguilingly well-written books, ‘The Emperor’s New Mind’ and ‘Shadows of the Mind’. Also, more subtly, Penrose implies a distinction between consciousness and intelligence: he is only claiming that consciousness is not algorithmically explainable, not intelligence. On the contrary, he expects machine intelligence to go past humans.
Personally speaking, I thought for decades that human sentience was the bulwark against algorithmically generated machine intelligence going past us, but I no longer think that. I don’t think we have any real way, in fact, to distinguish between human intelligence and machine intelligence, and by extension we have no way to claim that machine intelligence won’t eventually display every single one of the characteristics of sentience.
I hope you’re wrong but of course you might not be. Penrose’s books are now on my reading list.
However, is “displaying” all of the characteristics of sentience really the same as actually *possessing* sentience, free will, and the inherent, irreducible, inalienable characteristic of being human that is given by what some people call nature and others call god?
There is in fact no way of discerning the difference between ‘displaying’ and ‘possessing’ in this context – not even between you and those closest to you. You (and I and everyone) are taking on trust the sentience of other humans. Your experience is hermetically sealed. We believe other humans when they say ‘Cogito Ergo Sum’, but all we are going by is a nexus of behaviours and responses, we have nothing else. And if those behaviours and responses are identical from machine intelligence, on what basis would you deny the sentience of a machine that claims it is sentient?
That accords with the Turing test of machine intelligence, or the ability of machines to ‘think’. Where we might differ from machine intelligence is our apparent consciousness of ‘feeling’. If a machine expresses sadness, is it ‘sad’ in the same way that we understand sadness? It also begs the question as to whether one person’s understanding of sadness is the same as another’s? Wouldn’t the differing experiences of sad events in the life of each individual human result in a slightly different – or possibly not so slightly – understanding of sadness? And is sadness an expression of the experience of life events, or something less definable? Those individuals more prone to depression might be evidence of that.
All of which then leads to the key issue – how much does our experience of emotion impact upon our ability to think, based upon cognitive function? Before a child understands the basics of language, does it ‘think’ differently from an adult? Chomsky of course argued that humans are born with an inherent sense of the structures of language. And if that is true, what implications does it have for the ability of machines to replicate language skills, when the ability has been programmed? If self-replicating machines arise, will that innate ability be transferred? One would tend to say Yes. Therefore, how do humans differ from machines in that respect?
The biggest issue I think we have is the one Wittgenstein grappled with – the use and limitations of language. How do we know we’re all discussing the same topic? But somehow we get by; otherwise civilisation wouldn’t have arisen.
It might be true that there is no way of discerning the difference between display and possession of sentience. And it might be impossible to deny the claim of a machine which claims it is sentient. But even if this is so, the display or even (ostensible) possession of sentience by a human-created machine does not necessarily give it all of the characteristics that humans possess. That is, there could be something humans possess which machines, sentient or not, could never possess. I don’t think you could use conventional logic to prove or to disprove the existence of such a thing. For that, as for all scientific enquiry, you need faith.
“…there could be something humans possess which machines, sentient or not, could never possess…”
Yes, but we are now in the realm of belief, which is fair enough. But it’s not something you can point to.
Agreed. But all science is, at its root, in the realm of belief. You have to believe that what your eyes, or your instruments, are telling you is true, and that there is some meaningful category of objective truth. Believing that requires as much a leap of faith as believing that all functioning human beings have a sentience that is the same as, very similar to, or at least comparable to one’s own.
The way I see it, the disenchanted collapse in religious belief is part of the same phenomenon as the collapse in belief in objective truth. If reality is exactly identical to your experience of it, then you can, if you like, dispense with human beings and replace them with perceived or actual (and digital or real) sentient “others” who, unlike real humans, are willing to bend to your every will. That’s where we may be headed: a narcissist’s fake paradise. And that is, partly, why I choose to believe that there is something irreducibly sacred about the individual human being. Even if that belief is not strictly true, I believe that acting as if it were true, bearing in mind that it can never be proven or disproven, serves a higher moral purpose. That belief is partly founded on human history, which suggests that when people stop regarding each other as sacred others, they can almost always commit what almost any human being would consider the most horrendous crimes against each other.
Andrew, you have just touched on the biggest appeal of AI. Once it can emulate the display of sentience, it could be preferred to actual sentience on the grounds that one can control AI. In other words, a human could find in AI an alternative to the recurrent frustration experienced in interactions with other humans. The question of whether we like this development or not is irrelevant, because the baseline levels of frustration for modern humans are driving the yearning for such technology.
The validity of Penrose’s arguments turns on the position one takes on the foundations of mathematics. If one is a strict constructivist, they founder.
I am of the view that what Penrose really proved is that mathematical Platonism and strong AI (the view that our minds are equivalent to Turing machines) are incompatible views, rather than that strong AI is false.
Of course, the questions of the why and what of consciousness are very difficult. So difficult that many philosophers of mind (a discipline in which my daughter works) are now embracing at least a moderate version of panpsychism that holds that any information processing carries with it some sort of subjective experience, in which case questions like “What is it like to be a computer (running certain code)?” or even “What is it like to be a thermostat?” become reasonable questions, along with “What is it like to be an octopus?” and (from what we now know about plant behavior) “What is it like to be an oak tree?”
I desperately want to believe in the Penrosian stance, or something similar, but for reasons which I won’t go into at this time, I cannot because I cannot find convincing counters to a number of lines of argument which indicate that sentience is ultimately algorithmic. This is notwithstanding that the consequences of sentience being algorithmic are completely bizarre.
The following passage by Ludwig Wittgenstein, which argues against the panpsychist stance, is interesting in this context.
“…Look at a stone and imagine it having sensations. – One says to oneself: How could one so much as get the idea of ascribing a sensation to a thing? One might as well ascribe it to a number! – And now look at a wriggling fly, and at once these difficulties vanish and pain seems able to get a foothold here, where before everything was, so to speak, too smooth for it. And so, too, a corpse seems to us quite inaccessible to pain. – Our attitude to what is alive and to what is dead is not the same. All our reactions are different. – If anyone says: “That cannot simply come from the fact that a living thing moves about in such-and-such a way and a dead one not”, then I want to intimate to him that this is a case of the transition ‘from quantity to quality’…”
That point about attributing sensation to a number is precisely where sentience being algorithmic would take us – one of the bizarre consequences I mentioned. But by no means enough to allow me to discount sentience as an algorithmic process.
This omits one key fact. An AI, no matter how sophisticated, is designed by humans. Every line of code, every algorithm, every microchip, capacitor, and diode came from human hands. It was designed by engineers to be whatever it is and do whatever it does. It is known. Because we fully understand and control all the inputs, designers can, and will, deny that robots have sentience no matter how much the AIs themselves claim the opposite, and that won’t happen too many times before executives and possibly even governments begin insisting on safeguards that ensure the robots don’t assert their own sentience. Of course, that won’t stop some people from claiming the robots are sentient, for the reasons you mention. Ultimately, we’ll be left with a philosophical argument of unprovable assumptions, much like other arguments regarding consciousness, mind, and soul. There are some questions that empirical reasoning cannot answer.
It is very likely that within the next two to three decades we will prove that consciousness is just an emergent behaviour of any sufficiently deep pattern-matching machine.
Unfortunately, philosophy still roots itself in the naivety of Greek thinking: the idea that the world somehow rests on pure logic, and that we are somehow the ultimate thinking machine, so we must look for the logical basis of everything.
A pattern matcher, on the other hand, relies on likelihoods. There is no ‘dog’ or ‘cat’, just a likelihood that something is a dog or a cat. There is no fundamental essence, but for each instance we can estimate a likelihood based on what is present or absent. We then test that likelihood, and update the model. So when we take a Socratic question like ‘what is justice?’, which Socrates attempts to distil into components, the approach fails, because our notion of justice is built on likelihoods, models, weights and expectations, not definitional logic.
Pattern matchers are relatively easy to build and to define but very hard to disassemble. They occur very naturally as means to improve predictions and heuristics – testing outcome and using this to update the model. Something true is reinforced. Something false is diminished in value. Over time the heuristics resemble best path solutions and logical outcomes. The code for learning is easily defined but the model is like a cloud that is forever updating. Then on top of this we have also learnt the tricks of thinking rationally, but it is hard, and it is not our dominant mode.
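A minimal Python sketch of the loop being described: a few feature weights stand in for the “cloud” of a model, a prediction is tested against the outcome, and something true is reinforced while something false is diminished. The class, its names and the update rate are all invented here for illustration, not anyone’s actual system.

```python
# A toy "pattern matcher": there is no hard definition of a category,
# only per-feature weights that are reinforced when a prediction proves
# true and diminished when it proves false.
# (PatternMatcher and its update rule are invented for illustration.)

class PatternMatcher:
    def __init__(self, features):
        # Start agnostic: every known feature carries equal weight.
        self.weights = {f: 0.5 for f in features}

    def likelihood(self, present):
        # Estimate from what is present: average the observed weights.
        scores = [self.weights[f] for f in present if f in self.weights]
        return sum(scores) / len(scores) if scores else 0.0

    def update(self, present, was_correct, rate=0.1):
        # Test the prediction against the outcome, then update the model:
        # something true is reinforced, something false is diminished.
        for f in present:
            if f in self.weights:
                step = rate if was_correct else -rate
                self.weights[f] = min(1.0, max(0.0, self.weights[f] + step))
```

Repeated calls to `update` are the whole of the “learning”: the category is never defined, but over time the drifting weights come to resemble one, which is the point the comment makes about heuristics resembling best-path solutions.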
As pattern matching scales, a matcher that gets deep enough starts to match internal patterns, to find deeper connections, and from that it starts to find itself within the patterns, and to find itself choosing between different pattern-matching preferences. That eventually becomes consciousness – an emergent property of any sufficiently large pattern-matching system.
The experience with the GPTs indicates precisely that phenomenon of emergent sentience – only I don’t think we have two to three decades to cogitate on the matter; I think we have less than a decade before GPT-type responses are indistinguishable from human ones.
It doesn’t really matter if machine sentience is “like” human sentience or something altogether different. If it functions similarly and develops a survival instinct, we’re probably toast.
Tell me when they have developed a robot that can play football as well as any Premier League footballer. Chess is child’s play in comparison.
“Thou shalt not make a machine in the likeness of a human mind.” – Orange Catholic Bible
I’m tempted to just write FO but that would get me banned probably.
What a load of head-in-the-sand horse twaddle. It’s not about the capabilities of AI and robots, but about the intention and philosophy behind their construction: man is not good enough, we need something more perfect, and with this thing we can achieve utopia. But how does a broken thing make itself perfect? It doesn’t and it can’t; it just makes hell on earth. You seem to be convinced that AI can’t become sentient (whatever that means) – aware of itself, its identity, and perhaps its meaning and purpose.
But it is not the intent of science and nations to build separate entities, but rather to integrate the entities of human, nanotechnology and gene manipulation to make an ‘internet of things’ within which the human individual is just another thing. Except that by the process of augmentation, combination and assimilation there will be no individual, just individual things, with no identity, purpose or meaning other than that of the collective.
The problem is that the Collective is not a living thing, but it will be the source of tyranny and totalitarian technocracy, unless you would define a beehive or an ant colony as a living thing.
Perhaps I can finish with Hobbes:
This idea of the commonwealth as a collective seems to create a society very similar to that of bees and ants, where the common good differs not from the private or individual good – and, being inclined by their individual nature, they thereby procure the common benefit (power for all as all). Since, however, men are neither ants nor bees, the whole concept is a delusion.
It is this delusion that drives the nutcases of the scientific and technological community to proceed to hell.
“Robot” comes from the Slavic word for work. “Malenki robot” (“a little work”) was the euphemism for the deportations from East and Central Europe to the Soviet Union after World War II.
Really interesting article, until the final sentences, where history was abandoned and ‘left-wing oppression of the masses’ ideology reared its head.
“So ‘contactless’ travel has made human contact a premium extra – because what people really want is to talk to a human.”
That’s probably why we pay a small fortune for various drinks at the local coffee shop.
The very same facts that Ms Harrington reads so ominously can equally and easily be read in a voice of optimism.
It wasn’t until the pump mechanism came into use that Harvey was able to open up the mysterious workings of the human heart. The pump’s dynamic fits into a neat algorithm; the heart reveals deeper mysteries with every new investigation.
As we discover and clarify the limits of mechanisms, we uncover and embrace the endless wonder of living organisms. Subjugate the forced robot to the human being and we will finally begin to appreciate what it truly means to be human.
The author should brush up on the history of the Catholic Church, since she’s peddling a biased story of a conflict between the church and science. The Roman Catholic Church gave more financial and social support to the study of astronomy over six centuries than any other institution. Did you know that Copernicus was a Catholic canon? Was that even taught at school?