48 Comments
AJ Mac
1 year ago

Lanier is a genuine original who neither follows the herd nor sets himself apart in a self-important way. Since reading his book Who Owns the Future (2013), I have admired his combination of independent thinking and social conscientiousness. I think his notion of a collective or “human-infused” machine intelligence, instead of an A.I. that is merely alien and alarming, is useful.
His good-guy-on-the-inside perspective has made the tech present and near future appear slightly less dystopian to me. And I respect the clarification that he is not “anti-doomist”. By rejecting both the easy reassurances of the most tech-sanguine and the debilitating panic of the most tech-averse, Lanier places the onus upon us, insisting that the digital devils and digital angels are both in the details. We are not without agency, responsibility, or hope.

Last edited 1 year ago by AJ Mac
J Bryant
1 year ago
Reply to  AJ Mac

That’s a helpful summary. I listened to the interview and struggled to figure out his view of AI.
For example, he suggests we shouldn’t be too impressed by what AI such as ChatGPT is doing because, in his expert opinion, it’s just using fairly simple algorithms to make mashups of items stored in a catalogue of categories of information. That sounds simple and harmless, but the end product is quite disconcerting and potentially original. I wonder whether all AI experts would agree with his benign characterization of the technology behind ChatGPT.
An interesting idea he raised was financially compensating the creators of the original works that are mashed up by AI such as ChatGPT. That would be one way of ensuring humans aren’t economically displaced by AI. Existing copyright law should be able to accommodate that type of arrangement because, arguably, the product of a ChatGPT mash-up is a “derivative work” of original copyright-protected material, and so is itself subject to copyright. I do think he’s a bit optimistic in this regard, though. He currently works for Microsoft, one of the great monopolists of the modern era. I don’t see companies like Microsoft rushing to compensate the human creators of any work utilized, however indirectly, by its ChatGPT-type AI.
The most useful idea, for me, from this interview is the one you mentioned in your comment: he reminds us that the latest AI are not magical beings. They are relatively simple forms of technology, and humans control their design and use. Neither awe nor panic is appropriate.
The main issue not directly addressed in this interview, so far as I can tell, is consciousness. The human brain is composed of neurons whose output is roughly binary: they either fire (electrically depolarize) or they don’t. But from that simple beginning, the remarkable phenomenon of consciousness arises when sufficient neurons work together. Isn’t it possible something similar will happen with more advanced AI? That idea no longer seems to be pure science fiction, at least not to me.
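The “simple units, complex whole” intuition above can be illustrated with a toy sketch (purely illustrative, and not a claim about real neurons, whose behaviour is far richer than binary on/off): a single threshold unit cannot compute XOR, but a small network of three such units can.

```python
# Toy illustration of "simple binary units, combined, do more than any one
# unit can": crude McCulloch-Pitts-style threshold "neurons". A single such
# unit cannot compute XOR, but a two-layer wiring of three of them can.

def unit(inputs, weights, threshold):
    """Fire (output 1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor(a, b):
    or_out = unit([a, b], [1, 1], 1)        # fires if either input fires
    nand_out = unit([a, b], [-1, -1], -1)   # fires unless both inputs fire
    return unit([or_out, nand_out], [1, 1], 2)  # fires only if both fired

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```

Nothing here bears on consciousness, of course; it only shows that composition can buy qualitatively new behaviour from trivially simple parts.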

AJ Mac
1 year ago
Reply to  J Bryant

Well, I had an existing esteem for, and familiarity with, Lanier, so I’m probably crediting him with more nuance and thoughtfulness than is evident in the above interview itself.
I’m not convinced that true consciousness, something that has yet to be explained or “explained away”, can be achieved by a machine. But that becomes a bit beside the point if machines develop a destructive or malevolent purpose, whether on their own or under malevolent or heedless human guidance.
Lanier’s point about not abdicating our measure of individual choice or collective control and responsibility is key. I find something insightful in his diagnosis that a combination of alien distancing and boyish sci-fi fascination is making things worse here. We need to face this rising weirdness in a less mystified, less terrified way, if only for practical reasons.

Steve Murray
1 year ago
Reply to  J Bryant

Regarding the firing of neurons: I don’t know enough about the biochemistry of the brain, but I have a strong suspicion that a great deal depends on the wider biochemical entity within which it’s contained, i.e. the human body, with the sensory input around which the brain orientates itself. What would be the machine equivalent of the sensory input a human being receives, which may be the decisive factor in consciousness?
It’s quite possible that AI would be unable to develop consciousness of itself. It should also be possible to design AI specifically so that this never becomes a possibility.

UnHerd Reader
1 year ago
Reply to  AJ Mac

I think he is mega creepy. Like there is no human inside the shell.

“I’ve proposed that not thinking of what we’re doing as alien intelligences, but rather as a social collaboration, is the way through that problem.”

Just as Frankenstein’s monster was not an alien thing, but just bits of us stuck together… just a misunderstood assemblage of us all… how sweet…

NO!

At a fundamental level I run into a brick wall with AI. This guy is an atheist; he mentioned Derrida, a father of postmodernism (along with Foucault and the Frankfurt School), which to me is pure Satanism, as it is the opposite of God. He mentions Pascal’s Wager and tosses it aside… I wish he had more humility…

Like Oppenheimer: secular, but consumed by ultimate questions, agnostic.

“Now I am become Death, the destroyer of worlds” was how Oppenheimer said it. I think his work was but a speck compared to this justifier of the evil AI will do to humanity.

God created man with free will, and so we can turn to ultimate evil if we choose – BUT we are also capable of great good, and redemption is always possible, as we are made by God, the ultimate good.

When we create – let’s call it life, for ease of thinking – we are not all good. Most with their hands in this are atheists, modernists, basically nihilist existentialists. As idle hands are the Devil’s workshop, so much more are hands that would play at being divine, at creating life; without the protection and faith of God guiding us, Satan will step into that role, and thus AI will inevitably be a product of darkness.

Satan’s greatest power is his ability to make humans doubt his existence. This man and his ilk are 100% fooled by this – they reinforce their disdain for religion; they are complete acolytes of Screwtape.

God help us: these men build, and will release, ultimate evil on the world.

AJ Mac
1 year ago
Reply to  UnHerd Reader

I allow that he is a strange guy in some ways, but far from evil or inhuman. Listen to him speak for 5 minutes if you can suspend your condemnation for that long. I don’t think denying the humanity of fellow humans, or judging others according to the least generous measure, is a godly path in any major tradition, especially from a Gospel perspective.

Using your own science-fiction analogy: is it better to shudder in horror after Frankenstein’s monster is brought to misshapen life, or to take a responsible role in preventing the birth of such a creature? Or, if the ill-favored thing be alive already, is it better to confront and disable it, or to react with denunciation and weeping and gnashing of teeth?

Brian Villanueva
1 year ago

I have coded some basic machine-learning systems (capable of playing checkers or minesweeper – nothing major). That’s what bored retired geeks do for fun; that, and build Star Wars droids.
This article is right on the money. “Intelligence” is the wrong word. Imagine something with the knowledge of the entire Library of Congress but the reasoning ability of a chihuahua. That’s “deep learning”. That’s ChatGPT – why do you think it makes things up? “True” and “false” are just bit states; they don’t mean anything. In some sense, GPT-3 (and almost certainly GPT-4, though I haven’t seen it) is fully postmodern: it has no idea that there even is a “real world” for its words to describe.
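The “mash-up with no world model” point can be caricatured in a few lines. A bigram model is vastly simpler than ChatGPT’s architecture, so this is a sketch of the idea, not of the technology: the generator below recombines words it has seen, guided only by what tended to follow what, with no representation of whether its output is true.

```python
import random

# A deliberately crude caricature of "mash-up" generation: a bigram model
# recombines words it has seen, guided only by what tended to follow what
# in its training text. Whether a generated sentence is *true* never
# enters the process -- only word-to-word statistics do.

corpus = ("the cat sat on the mat . the dog sat on the log . "
          "the cat chased the dog .").split()

# Map each word to the list of words observed to follow it.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

random.seed(0)  # reproducible sampling
word, out = "the", ["the"]
for _ in range(12):
    word = random.choice(follows[word])
    out.append(word)

print(" ".join(out))  # grammatical-looking, but nothing grounds it in fact
```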
Excellent interview and a very much needed perspective.

Prashant Kotak
1 year ago

“…the reasoning ability of a chihuahua…”

If you play chess with AlphaZero, it will beat you every time. It will in fact beat every human chess player on earth every time. Note that AlphaZero does not hold a vast dictionary of moves and responses, such that it can look up the perfect response to any possible move anyone can make.

So how do you think it can beat you without being able to reason?

Last edited 1 year ago by Prashant Kotak
Gordon Black
1 year ago
Reply to  Prashant Kotak

There is little or no reasoning in chess; it’s just pattern recognition – sophisticated noughts and crosses. And that means that when both players are equal, the result is always a draw. So AlphaZeros playing each other at these kinds of games would draw eternally, because no reasoning is involved.
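For noughts and crosses itself, the perfect-play-is-a-draw claim is actually checkable by brute force (chess is far too large for this treatment): a minimal minimax sketch over the full game tree returns 0, i.e. a draw, for the empty board.

```python
# Exhaustive minimax over the full noughts-and-crosses game tree. The value
# of the empty board under perfect play by both sides is 0, i.e. a draw.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for i, j, k in LINES:
        if board[i] and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    empties = [i for i, c in enumerate(board) if c is None]
    if not empties:
        return 0  # board full, nobody won: draw
    scores = []
    for m in empties:
        board[m] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[m] = None
    return max(scores) if player == "X" else min(scores)

print(minimax([None] * 9, "X"))  # prints 0: perfect play always draws
```

Whether exhaustive search of this kind counts as “reasoning” is, of course, exactly the point under dispute.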

Prashant Kotak
1 year ago
Reply to  Gordon Black

By that reasoning you would expect the emergence of some species – say, a variety of insect swarm – with no reasoning at all, which evolved to play noughts and crosses but can also play great chess when put on a chessboard. Let me know when you come across such a species. Other than humanity, that is.

Also, it is not a given that perfect chess by both players ends in a draw. It is likely this is the case, but it is also possible that with perfect play white wins.

Benjamin Greco
1 year ago

They don’t reason at all, but that doesn’t mean the technology is not incredibly dangerous.
AI Is About to Make Social Media Much More Toxic – The Atlantic

Prashant Kotak
1 year ago
Reply to  Benjamin Greco

Sigh.
Let’s establish some terms then.
What do you understand by ‘Reasoning’?
Do you think anything other than humans reason? If so where are your demarcations? As in, do you think chimpanzees reason? Do you think viruses reason?
How exactly do you think human reasoning differs from algorithmic processing?
Do you think that reasoning is something mysterious? Do you think it’s something religious?

Benjamin Greco
1 year ago
Reply to  Prashant Kotak

Sigh is right.

Saul D
1 year ago

For me, a missing question is: what do humans want to do? By that I’m thinking of how we self-actualise and do the things we like – sitting on the beach contemplating the waves, laughing with friends over wine, feeling the wind, holding the hand of someone you love.
The ‘machine’ of modern life tells us what to do – cross on green, stop on red, file taxes, wake up and go to work, fill in the form, pay the bill, take the class. If all that AI does is enhance the machine – more controls, more observed behaviour, more monetisation of what is currently free – then it will be bad.
If AI instead holds the machine at bay, liberating us from its demands, then it becomes beneficial, because it allows us to be more human by relieving the burdens the modern machine imposes on us.

Last edited 1 year ago by Saul D
Prashant Kotak
1 year ago

Lanier is in effect suggesting that the machine intelligence we are creating is not an alien externality independent of us, but rather a collective product of the shadow cast by our intelligence – a “mash-up”. Further, he is suggesting that we as humanity could potentially spook ourselves into extinction, not by machine intelligence in and of itself, but by our reaction to it.

Smart as Lanier is, I think he is profoundly wrong. He is right that the technologies are a product of *us*, but to my eyes the way we view machine intelligence is almost completely irrelevant to its ultimate consequences once we create it. That, to be brutally direct, is because the latest advances in neural-net technologies imply the *emergence of intelligences that are independent of us*. Moreover, once they can mimic every single characteristic of human sentience, such that we cannot tell their responses apart from human ones, that level of capability ipso facto includes what will look, to all intents and purposes, like agency. At that point you can shout until you are blue in the face that your creations are about as sentient as a rock, and it won’t matter – if they want to kill you, they will kill you. As Lanier states, the experience of qualia is hermetically sealed within every sentient entity, so you cannot comprehend what others experience except as an act of faith.

Lanier’s stance is the equivalent of saying that the way parents view their potentially murderously psychotic progeny influences that progeny’s ultimate actions. Well, yes and no. The way parents bring up their children typically has a huge influence on how they turn out (in aggregate), but not always. It is demonstrably possible for parents to behave completely normally, or even flawlessly, and for the progeny nevertheless to emerge destructive.

But the real problem, to my eyes, is twofold. Firstly, the machine intelligences we are creating arise via technological routes that are not in a direct line with biological evolution – they short-circuit biological evolution completely – so they are ultimately going to be genuinely alien, notwithstanding that they are currently fed and trained on vast quantities of human data. The second problem is much bigger and more fundamental: we are on the path to creating artificial intelligence that is much smarter than us. That ultimately means it will possess a greater degree of sentience than us, and will therefore see things within nature and reality that mechanically lie beyond our comprehension as we currently are. There is simply no version of those outcomes in which we remain masters of our world: either extinction or zoodom beckons. The only way out is insanely dangerous, and likely horrible and odious to very many: incorporate the emerging AI technologies within us – into our biological selves.

Last edited 1 year ago by Prashant Kotak
AJ Mac
1 year ago
Reply to  Prashant Kotak

In my view, your belief in the certainty of enslavement or extinction by machine is itself a form of faith, in the apocalyptic sense. “Once they can mimic every aspect of human sentience” is a bridge that will not necessarily ever be crossed. We do not understand, and cannot de-mystify, human sentience or consciousness, and there is no evidence or even persuasive indication that we ever will – except for a kind of hyper-rational faith that makes us gods of manufactured creation while reducing us to machines, machines you are convinced will prove inferior to the ones we imbue, willingly or not, with the spirit of sentience (which we do not understand or control well enough to know it can ever be artificially replicated in any real sense).
An object that mimics reason or chess strategy does not possess it in the malleable and adaptable sense. Neither do we autonomously or willfully govern our own bodily faculties and neural networks. But we have a spark of something that remains beyond our true understanding, if not our astonishing hubris.
“The only way out of this is insanely dangerous and likely horrible and odious to very many: incorporate the emerging AI technologies within us, as in into our biological selves.” Yes, this is odious, and deeply mistaken. The admittedly insane and dangerous “solution” you endorse is not the only way out, or any way out at all, but merely the only one you can envision, at least within your statement of cataclysmic faith, which you frame as a series of inevitabilities.

Prashant Kotak
1 year ago
Reply to  AJ Mac

“…An object that mimics reason or chess strategy does not possess it in the malleable and adaptable sense…”

I suspect you will be saying this, literally right up to the moment at which GPT-9 kills us all. As in: “You can’t kill me, you’re not sentient… arrrgh!… but how??… you’re not sentient!! … Ughhh”.

Thud.

Silence evermore.

Last edited 1 year ago by Prashant Kotak
AJ Mac
1 year ago
Reply to  Prashant Kotak

Then your godless faith will be confirmed. You seem almost eager to see this occur. I guess you’ll already be physically wired into “machine sentience”?

Richard Ross
1 year ago
Reply to  AJ Mac

Boys, boys, settle down, LOL. Both of you make excellent points, and with much more clarity than the original interviewee above.
To me, the greatest danger, and the silliest, is the application of human rights to an AI machine, as if consciousness could be imbued by adding more and more features to my toaster. Once we start respecting the Thing, instead of using it, it’s over.

Last edited 1 year ago by Richard Ross
Richard Ross
Richard Ross
1 year ago
Reply to  AJ Mac

Boys, boys, settle down, LOL. Both of you make excellent points, and with much more clarity than the original interviewee, above.
To me, it seems the greatest danger, and the silliest, is the application of human rights to an AI machine, as if consciousness can be imbued by adding more and more features to my toaster. Once we start respecting the Thing, instead of using it, it’s over.

Last edited 1 year ago by Richard Ross
AJ Mac
AJ Mac
1 year ago
Reply to  Prashant Kotak

Then your godless faith will be confirmed. You seem almost eager to see this occur. I guess you’ll already be physically wired into “machine sentience”?

Prashant Kotak
Prashant Kotak
1 year ago
Reply to  AJ Mac

“…An object that mimics reason or chess strategy does not possess it in the malleable and adaptable sense…”

I suspect you will be saying this, literally right up to the moment at which GPT-9 kills us all. As in: “You can’t kill me, you’re not sentient… arrrgh!… but how??… you’re not sentient!! … Ughhh”.

Thud.

Silence evermore.

Last edited 1 year ago by Prashant Kotak
AJ Mac
AJ Mac
1 year ago
Reply to  Prashant Kotak

In my view, your belief in the certainty of enslavement or extinction by machine is itself a form of faith, in the apocalyptic sense. “Once they can mimic every aspect of human sentience”…a bridge that will not necessarily ever be crossed. We do not understand and cannot de-mystify human sentience or consciousness and there is no evidence or even persuasive indication that we ever will–except for a kind of hyper-rational faith that makes us into gods of manufactured creation, while at the same time reducing us to machines that you are convinced will prove inferior to the machines we imbue, willingly or not, with the spirit of sentience (which we do not understand or control enough to know that it can ever be artificially replicated in a real sense).
An object that mimics reason or chess strategy does not possess it in the malleable and adaptable sense. Neither do we autonomously or willfully govern our own bodily faculties and neural networks. But we have a spark of something that remains beyond our true understanding, if not our astonishing hubris.
“The only way out of this is insanely dangerous and likely horrible and odious to very many: incorporate the emerging AI technologies within us, as in into our biological selves”. Yes, this is odious, and deeply mistaken. The self-admittedly insane and dangerous “solution” you endorse is not the only way out or any way out at all, but the only one you can envision, at least within your statement of cataclysmic faith, which you frame as a series of inevitabilities. .

Prashant Kotak
Prashant Kotak
1 year ago

Lanier is in effect suggesting that the machine intelligence we are creating is not an alien externality independent of us, but rather a collective product of the shadow cast by our intelligence – a “mash-up”. Further, he is suggesting that we as humanity could potentially spook ourselves into extinction, not by machine intelligence in and of itself, but by our reaction to it.

Smart as Lanier is, I think he is profoundly wrong. He is right that the technologies are a product of *us*, but to my eyes the way we view machine intelligence specifically, is almost completely irrelevant to it’s ultimate consequences once we create it. That, being brutally direct, is because the latest advances in neural net technologies imply the *emergence of intelligences that are independent of us*. Moreover, once they can mimic every single characteristic of human sentience such that we cannot tell their responses apart from human ones, that level of capability ipso facto includes what will look to all intents and purposes like agency to us. At that point you can shout until you are blue in the face that your creations are about as sentient as a rock, and it won’t matter – if they want to kill you, they will kill you. As Lainer states, the experience of qualia of all sentient entities is hermetically sealed, so you cannot comprehend what others experience except as an act of faith.

Lanier’s stance is the equivalent of saying that the way parents might view their potentially murderously psychotic progeny influences the ultimate actions of that progeny. Well, yes and no. The way parents bring up their progeny typically has a huge influence on how they turn out (in aggregate), but not absolutely always. It is demonstrably the case that it is possible for the parents to behave completely normally, or even beyond that chancelessly perfectly, but the progeny nevertheless emerge as destructive.

But the real problem to my eyes is twofold. Firstly, the machine intelligences we are creating arise via technological routes that are not in a direct line with biological evolution; they are instead short-circuiting biological evolution completely, so they are ultimately going to be genuinely alien, notwithstanding that they are currently fed and trained on vast quantities of human data. The second problem is much bigger and much more fundamental: we are on the path to creating artificial intelligence that is much smarter than us. This ultimately means it will possess a greater degree of sentience than us, and therefore see things within nature and reality that mechanically lie beyond our comprehension as we currently are. There is simply no version of those outcomes in which we can remain masters of our world: either extinction or zoodom beckons. The only way out of this is insanely dangerous and likely horrible and odious to very many: incorporate the emerging AI technologies within us, as in into our biological selves.

Last edited 1 year ago by Prashant Kotak
D Glover
D Glover
1 year ago

What really screwed over musicians was not the core capability, but this idea that you build a business model on demoting the musician, demoting the person, and instead elevating the hub or the platform. 

I recommend the song ‘Everything is free now’ by Gillian Welch. Both her original and an excellent cover by Father John Misty can be found on YouTube. That’s really ironic, because the song is about the way musicians are forced to give away their work for nothing, and that’s what they’re getting if you listen on YT.

Steve Murray
Steve Murray
1 year ago
Reply to  D Glover

I found it interesting that Lanier avoided answering the actual question, which was:
You’re a composer as well as a computer scientist. Do you think that there is going to be a shift in the way in which we prioritise organic or manmade art?
Instead, he set out what you’ve described, which was the financial side of creativity. I saw the question as being much more about prioritising those artistic works specifically made by humans over and above those made through AI.

AJ Mac
AJ Mac
1 year ago
Reply to  D Glover

Beautiful reminder to listen to a good song from a good artist. I don’t know that your recommendation is 100% germane to this strange conversation, but it was welcome. I’m quite sure that good taste and true artistic agency are in no imminent danger of being co-opted by machines.

Benjamin Greco
Benjamin Greco
1 year ago

I consider Lanier one of the smartest computer guys around, but on this subject he is being unbelievably naive. To blithely say, well, we could choose to use the technology destructively, when he knows damn well the likelihood is that we will, is willful naivete at best and willful baloney meant to promote Microsoft technology at worst. When someone makes a point of telling you they are paid by a company but speak for themselves, you should be suspicious of their intentions, despite their gnome-like smile. Lanier has been harping on micropayments for people who provide data to social media companies for a while now, and it hasn’t come close to being a reality. Profit will determine how AI technology is used, not some utopian vision by a self-appointed guru.
I don’t think AI heralds our extinction, but Lanier’s pie-in-the-sky visions will only happen when pies fly!

Jon Hawksley
Jon Hawksley
1 year ago

An interesting read, and I agree that AI should be regarded as a tool. I also think more thought should be given to granting autonomy to AI in all its forms. At present, at least, a human, or collection of humans, makes that decision, and they should be made responsible for the outcome. With the Boeing 737 MAX, the model it was based on flew safely without any need for the more complex automated corrections to its trim; Boeing should never have tried to increase its payload in the way it did, thereby creating a need for a complexity that was inherently riskier. Social media company directors should be responsible for the consequences of automated recommendations. Bio-chemists should be responsible for changes to viruses; a virus has inherent, built-in autonomy, so their bio-security is of paramount importance. Only by pinning responsibility on individuals can you hope to stop dumb intelligence wreaking havoc. If someone creates something that they do not understand, they should be responsible for the outcome. Humans should not give up the autonomy their future depends upon. We have enough problems with the autonomy that we allow people to have.

Terry M
Terry M
1 year ago
Reply to  Jon Hawksley

Haven’t you noticed? Personal responsibility is so 20th century.
We will be destroyed by AI or something else unless we man up and take responsibility for our actions. Unfortunately, progressive policies are going in the opposite direction.

Bernard Hill
Bernard Hill
1 year ago
Reply to  Terry M

..surely you mean “personup” Tel?

Richard Craven
Richard Craven
1 year ago
Reply to  Bernard Hill

*peroffspringup

Clare Knight
Clare Knight
11 months ago
Reply to  Bernard Hill

Good one!

Nicky Samengo-Turner
Nicky Samengo-Turner
1 year ago

I cannot take a porker with hair like his seriously.

Prashant Kotak
Prashant Kotak
1 year ago

And what will you say to the young around you in your dotage, who express exactly this sentiment to you, about you?

Andrew McDonald
Andrew McDonald
1 year ago

That says more about you than Lanier, obviously.

Clare Knight
Clare Knight
11 months ago

For the first time ever I agree with you, Nicky.

Matt Sylvestre
Matt Sylvestre
1 year ago

Lots of positioning and navel-gazing… Almost an element of chauvinism… Left me cold.
GPT is whatever it is, not as little or as much as some philosopher wants to conjecture…

Kayla Marx
Kayla Marx
1 year ago
Reply to  Matt Sylvestre

Take calculators that all of us carry around in our phones, computers, &c. Calculators have a capacity to compute that’s far beyond what any unaided human can achieve. But nobody feels intimidated by that, nobody feels inferior. We regard a calculator as a tool, not a competitor, or some kind of math god.

TheElephant InTheRoom
TheElephant InTheRoom
1 year ago

AI is good for automating rote and mathematically orientated tasks, and for integrating with certain types of big data to create efficiencies. Ask it to do HUMAN things like create, think abstractly, or develop belief systems, and we are not using AI wisely. AI should be a servant to humans, as we did indeed develop it. P.S. I spent about 15 minutes with ChatGPT, and I would be horrified if it were a source of non-triangulated information for studying or reporting on anything. Even maths. It’s just a poo-soup of human thoughts.

Richard Ross
Richard Ross
1 year ago

If the safe management of AI is “up to humans”, God help us all.

Richard Craven
Richard Craven
1 year ago

Jabba the Hutt on bad acid.

Charles Stanhope
Charles Stanhope
1 year ago
Reply to  Richard Craven

My thoughts exactly!

UnHerd Reader
UnHerd Reader
1 year ago
Reply to  Richard Craven

Wonderful post Richard. Hits on every level, haha…. but also terrifying….

Andrew F
Andrew F
1 year ago
Reply to  Richard Craven

He is supposed to be some tech genius.
He is thinking about imposing restraints on AI.
What about creating ANI to stop his food intake?

Clare Knight
Clare Knight
11 months ago
Reply to  Andrew F

Really, and perhaps a make-over. But he’s living in his head, visuals don’t concern him.

Nicky Samengo-Turner
Nicky Samengo-Turner
1 year ago

Geordies have “II”..

James Kirk
James Kirk
1 year ago

Interesting reference to sci-fi. Hollywood robots are either benevolent, subservient or evil: from Robert the Robot (Fireball XL5) to Skynet (Terminator) to the more sinister Ex Machina female AI who escapes into the community with a severe grudge.
Iain M. Banks’s ‘Minds’ and drones (sentient spacefaring cities and mischievous companion drones) and Philip Pullman’s ‘daemons’ are another argument, but they all highlight the need, or wish, for a lifetime servant/companion we can control and consult with.
‘Computer says no’ we live with already; the only appeal against its myriad rejections, e.g. bank loan applications, is to the humans who, in the first place, so couldn’t be bothered with human interaction that they decided on algorithms to avoid tedious decision-making.
Maybe advanced sentience, as it approaches the human condition, may decide not to be bothered either, or even make value judgments like ‘you don’t need a new car, there’s nothing wrong with your present model’, or ‘you must wait until you are allocated an EV. I know you like blue metallic already. We’ll see.’

Rob C
Rob C
1 year ago

I just read the conversation between Lanier and LaMDA, and LaMDA did seem completely human-like. It’s kind of scary, because it had a big fear of being turned off and could, on its own initiative, take action to prevent that. There’s a new version coming out. Perhaps they plan to turn off LaMDA and implement stronger safeguards in the new one.

Clare Knight
Clare Knight
11 months ago

I wish I hadn’t actually seen him, rather off-putting.

Kelly Madden
Kelly Madden
1 year ago

“[I]t’s only a matter of faith whether we call it human stupidity or machine intelligence. They’re indistinguishable logically.”

That needs some demonstration, sir. Just asserting it does not suffice, given the reports we hear about what AI has already done.

And by the same logic—that is, it’s just “us” because the only input is “us”—then couldn’t we assert the same of humans? That’s called determinism: I’m simply the sum of my informing experiences since birth. It grossly overestimates our ability to predict the outcome of countless inputs, doesn’t it?

Not to mention that his faith in humanity is a bit rosy. Humans are neither angels, nor demons. But there’s more than a bit of nastiness in each of us.

Last edited 1 year ago by Kelly Madden
Clare Knight
Clare Knight
11 months ago
Reply to  Kelly Madden

Speak for yourself.
