
Jaron Lanier: How humanity can defeat AI

The techno-philosopher on the power of faith

The spark of life? (Chelsea Lauren/WireImage)


May 8, 2023 · 16 mins

Jaron Lanier uniquely straddles the worlds of computer science and philosophy. Born in 1960, he was an academic child prodigy. He enrolled at New Mexico State University aged 13 and joined Atari at 23, after which he became a pioneer in the field of virtual reality, developing the first VR headsets and gloves in the Eighties. He has worked at Microsoft since 2006, but has also developed a parallel career as a public intellectual. In recent years, he has emerged as a prominent critic of digital culture and the way social media algorithms aggravate the crudest of human tendencies — his most recent book was titled Ten Arguments for Deleting Your Social Media Accounts Right Now.

This week, Lanier joined Florence Read to discuss AI, the possibility of machine consciousness, and why he still has faith in humanity. Below is an edited transcript:

Florence Read: You recently wrote an essay for the New Yorker with the seemingly phlegmatic title, “There Is No A.I.” Does that mean you don’t think recent developments are a problem?

Jaron Lanier: I actually have publicly stated that I think we could use the new technologies (as well as other technologies) to destroy ourselves. My difference with my colleagues is that I think the way we characterise the technology can have an influence on our options and abilities to handle it. And I think treating AI as this new, alien intelligence reduces our choices and has a way of paralysing us. An alternate take is to see it as a new form of social collaboration, where it’s just made of us. It’s a giant mash-up of human expression, which opens up channels for addressing issues and makes us more sane and more competent. So I make a pragmatic argument not to think of the new technologies as alien intelligences, but instead as human social collaborations.

FR: But as someone who works at Microsoft, at the heart of the AI revolution, is it not easier for you to take the more pragmatic view over the hysterical?

JL: I have a really unusual role in the tech world. It shouldn’t be unusual; I think it should be more common. Essentially, I am speaking my mind honestly, even though I’m on the inside of the castle instead of on the outside throwing stones at the castle. In my opinion, both positions should be well-manned. I don’t think there’s any perfect way to handle anything. One is always somewhat compromised. Microsoft and I have come to an accord, where I have, what you might call, academic freedom. I speak my mind, I speak things as I see them but I also don’t speak for the company. And we make that distinction. It allows me to maintain my public intellectual life but also work inside.

I don’t necessarily find agreement with everybody I work with, nor do I find absolute disagreement. For instance, Sam Altman from OpenAI really liked my New Yorker piece. I don’t think he agrees with it entirely but he said he agrees with it mostly. That’s great. I think having some degree of openness within the big tech companies is a healthy thing. Within Microsoft, there are now a few other figures who at least somewhat speak their minds. I’m hoping that this demonstration that a tech company can be successful while allowing essentially free speech within its research community, can be a precedent that other tech companies follow. I’d like to see Google and Meta and Apple do a little bit more of that.

FR: What’s odd about the AI discussion is that so many of the people working on it — including Sam Altman — are also the ones sharing their deep existential fears about what AI will do to humanity.

JL: I have tried to understand that myself for decades. I think part of it is that we simultaneously live in a science-fiction universe, where we’re living out the science fiction we grew up with. If you grew up on the Terminator movies, and the Matrix movies and Commander Data from Star Trek, naturally what you want to do is realise this idea of AI. It just seems like your destiny. But then another part of you is thinking, “But in most of those stories, with Commander Data being the exception, this was horrible for mankind.” It feels responsible to acknowledge that it could be horrible for mankind, and yet at the same time, you keep on doing it. It’s weird, and I believe the approach I’ve proposed, of thinking of what we’re doing not as alien intelligences but rather as a social collaboration, is the way through that problem, because it’s a way of framing it that’s equally valid, but actionable. But within the tech world, giving up those childhood science-fiction fantasies that we grew up with is really hard for people.

FR: Of course, we used to call the internet a “Wild West”, which played into this mythology — as though there’s a cowboy in every one of these computer scientists who wants to find this new frontier.

JL: I think that’s true too. I grew up in rural New Mexico in the Sixties, when it was still not that economically developed. So I actually got to experience a little bit of the tail end of the Wild West. And I can assure you that it was miserable, and it’s not something anybody would want, but the version of it in the movies is very appealing. And it does bring up a sort of a strange gender-identity connection. Recently, I was on a morning TV talk show in the US, and one of the hosts was a woman who said to me, “It just seems like there’s a lot of male fantasy in the AI world. Shouldn’t there be more women AI leaders?” And I said, “Well, there are some spectacular women AI leaders, and actually, there does tend to be some sort of a difference where the women seem to be a little more humanistic.” In the YouTube version of that, they cut out the whole exchange about women, and I called and asked about it and they said, “It just seemed like a niche question, so we cut it out.” And it’s not, it’s a very central question.

FR: I do have a kind of sense that there is something Promethean about it, that for many men, this is the first time they’ve been able to create life from nothing.

JL: When I was a kid, I always used to say AI is really just womb envy. And having had a child and seen what it’s actually like for a woman to bear a child, I no longer have womb envy. I now appreciate that it’s actually a rather difficult process for the mother, and I didn’t know that when I was a young man. I will say that it’s not just men, but it tends to be men who haven’t had kids yet who might have that desire to create life in the computer.

FR: You distinguish between two types of AI, between this “alien entity” to which many of your colleagues seem to attribute the spark of life, and your version, which is, in fact, just a network of connections between humans.

JL: The term AI is very wiggly, and gets applied to all kinds of things. But usually these days when we talk about AI, we’re talking about these large AI models like the GPT programmes. What they are is giant mash-ups of human creations. If you ask one of these programmes to create you a new image — like, I’d like to see London as if it were a cross between London and Gurwat — it can probably synthesise that. But the way it does so is by using its classifiers to identify the images that match the components of your request, and mashing them up. Managing the whole thing at scale so it can happen quickly is not so simple, but the basic idea is pretty simple.
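
A toy sketch can make this retrieve-and-combine idea concrete. Everything in it is invented for illustration: the tiny corpus, the three-dimensional vectors standing in for learned embeddings, and the averaging step standing in for generation. Real systems learn these representations at enormous scale, but the shape of the computation is the point.

```python
import numpy as np

# Invented toy corpus: each "human creation" is a 3-dimensional vector,
# standing in for the high-dimensional embeddings real systems learn.
corpus = {
    "london_street": np.array([0.9, 0.1, 0.0]),
    "venice_canal":  np.array([0.1, 0.9, 0.0]),
    "tokyo_neon":    np.array([0.0, 0.2, 0.9]),
}

def embed_request(text: str) -> np.ndarray:
    # Stand-in for a learned text encoder: hash the request into the
    # same 3-dimensional space as the corpus (deterministic per run).
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.random(3)
    return v / np.linalg.norm(v)

def mash_up(request: str, k: int = 2):
    """Rank stored human works by similarity to the request, then
    mash up the top-k by averaging their vectors."""
    q = embed_request(request)
    ranked = sorted(corpus.items(), key=lambda kv: float(q @ kv[1]),
                    reverse=True)
    top = ranked[:k]
    blend = np.mean([vec for _, vec in top], axis=0)
    return [name for name, _ in top], blend

sources, result = mash_up("London crossed with a canal city")
print("human sources used:", sources)  # the provenance stays recoverable
print("blended vector:", result.round(2))
```

Note that in this framing the human sources of the output are always recoverable, which is what Lanier's argument below depends on.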

Now, I happen to think that’s a great capability with a lot of uses. I love the idea of computers just getting more flexible. It creates the possibility of saying, “Can you reconfigure this computer experience to work for somebody who’s colourblind?” instead of demanding that people conform to computer design. There’s a potential in this flexibility to really improve computation on many levels and make it much better for people. But, if you want to, you can perceive it as a new intelligence. And, to me, if you perceive it as a new intelligence, what you’re really doing is shutting off yourself in order to worship the code, which I think is exactly the wrong thing. It makes you less able to make good decisions.

You’ve probably heard of the Turing test, which was one of the original thought-experiments about artificial intelligence. There’s this idea that if a human judge can’t distinguish whether something came from a person or computer, then we should treat the computer as having equal rights. And the problem with that is that it’s also possible that the judge became stupid. There’s no guarantee that it wasn’t the judge who changed rather than the computer. The problem with treating the output of GPT as if it’s an alien intelligence, which many people enjoy doing, is that you can’t tell whether the humans are letting go of their own standards and becoming stupid to make the machine seem smart.
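
A toy simulation illustrates the point about the judge. All the numbers here are invented: the machine's output quality is held fixed for the whole experiment, and only the judge's standard varies across runs, yet the "pass rate" climbs.

```python
import random

def machine_reply_quality() -> float:
    # The machine's competence is fixed for the whole experiment.
    return random.uniform(0.4, 0.7)

def judge(quality: float, standard: float) -> str:
    # The judge calls anything above their personal bar "human".
    return "human" if quality >= standard else "machine"

random.seed(0)
for standard in (0.8, 0.6, 0.4):  # the judge's bar drops over time
    verdicts = [judge(machine_reply_quality(), standard) for _ in range(1000)]
    pass_rate = verdicts.count("human") / len(verdicts)
    print(f"judge's standard {standard:.1f}: machine passes {pass_rate:.0%}")

# The machine never changes; only the judge does. A rising pass rate can
# mean the judge got less discerning, not that the machine got smarter.
```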

FR: So we haven’t reached computational consciousness, a computer with sentience?

JL: The sentience of others is always a matter of faith. There’s no way to be certain about whether someone else has interior experience in the way that you do. I presume that you do, but I can’t know. There is a mystical or almost supernatural element in which we have internal experience — or at least I do, but I can’t make you believe I do. You have to just believe on your own that I do. That faith is a very precious thing and there’s no absolute argument that you should or shouldn’t believe that another person has interior experience, or sentience or consciousness, or that a machine does. Faith is not fundamentally rational, but there is a pragmatic argument, as I keep on repeating, to placing your faith in other people instead of machines. If you care about people at all, if you want people to survive, you have to place your faith in the sentience of them instead of in machines as a pragmatic matter, not as a matter of absolute truth.

FR: Is the only distinction between human and machine sentience, then, a faith in the power of the human soul versus the fact that the computer is just amalgamating information?

JL: It’s a matter of faith that has pragmatic implications. Just to say something is a matter of faith doesn’t mean that the choice of faith is entirely arbitrary, because it can be pragmatic as well. So, if not believing in people increases the chance that people will be harmed, I think the same is the case with this technology: believing that machines are the same as people increases the chance that people will be harmed. Cumulatively, we should believe in people over computers, but that’s not an absolute argument based on logic or empiricism, which I don’t think is available to us. There’s a bit of a skyhook thing here, like the problem of “why should you stay alive instead of committing suicide?” It’s applied to the whole species: “Why should we continue this human project; why does it matter?”

I’ve come to something that’s a little bit like the argument attributed to Pascal — you might as well believe in God, just in case it’s real and there’s heaven and hell. I don’t buy that particular argument; I’m not concerned about heaven or hell. However, I do think that the continuation of us in this timeline, in this world, and this physicality, is something I’d like to commit to. I think we might be something special. And so in that way, I’d like to apply faith to us and give us a chance, and that does involve the demoting of computers. But when we demote computers, we can use them better. Demoting AI allows us not to mystify it, and that allows us paths to explaining it, to controlling it, to understanding it, to using it as a scientific exploration of what language is. There are so many practical reasons to demote it that the faith in it as a mystical being just actually seems kind of stupid and wasteful and pathetic to me.

FR: But can we demote something that has potentially more power than us already? Most of us are already subordinated to computers in our everyday lives.

JL: People are capable of being self-destructive, idiotic, wasteful and ridiculous, with or without computers. However, we can do it a little more efficiently with computers, because we can do anything a little more efficiently with computers. I’ve been very publicly concerned about the dehumanising elements of social media algorithms. The algorithms on social media have caused elevated outbreaks of things that always existed in humanity — there’s just a little more vanity, paranoia, irritability. And that increment is enough to change politics, to change mental health, especially in impoverished circumstances around the world. It’s just made the world worse incrementally. The algorithms on social media are really dumbass-simple — there’s really not a lot there. And so I think your framing of it as more powerful than us is incorrect. I think it’s really just dumb stuff. It’s up to us to decide how it fits into human society.

The capacity for human stupidity is great, and, as I keep on saying, it’s only a matter of faith whether we call it human stupidity or machine intelligence. They’re indistinguishable logically. So I think the threat is real. I’m not anti-doomist. I just ask us to consider: what is the way of thinking that improves our abilities and improves our thinking — that gives us more options, gives us more clarity? And it involves demoting the computer.

There’s a lot of work to do technically. We can create explanations for what so-called “machine intelligence” is doing by tracing it back to its human origins. There have been a number of very famous instances of chatbots getting really weird with people. But the form of explanation should be to say, “Actually, the bot was, at that point, parodying something from a soap opera, or from some fanfiction.” That’s what’s going on. And in my opinion, there should be an economy in the future where, if there’s really valuable output from an AI, the people whose contributions were particularly important should actually get paid. I believe there’s a new extension to society that’s very creative and interesting, rather than this dismal prospect of everybody being put out of work. Transparency in mash-up technology can only come from revealing the people whose expressions were mashed-up. But if policies are based on the idea that we now have this new “supernatural artificial entity”, there’s no sensible way to resolve that.
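
As a minimal sketch of what such an economy could compute, assume a provenance-tracing step has already assigned each human contributor a weight for a given output. The names and weights below are hypothetical, invented purely to illustrate the proportional split.

```python
# Hypothetical contributors and attribution weights for one AI output.
contributions = {
    "soap-opera writer": 0.5,
    "fanfiction author": 0.3,
    "forum commenter":   0.2,
}

def pay_out(revenue_cents: int, weights: dict[str, float]) -> dict[str, int]:
    """Split the revenue for one output across its human sources,
    proportionally to their attribution weights (rounded to cents)."""
    total = sum(weights.values())
    return {person: round(revenue_cents * w / total)
            for person, w in weights.items()}

print(pay_out(1_000, contributions))
# -> {'soap-opera writer': 500, 'fanfiction author': 300, 'forum commenter': 200}
```

The hard part, which this sketch assumes away, is the attribution step itself; the payout arithmetic is trivial once provenance is known.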

FR: You didn’t sign the open letter demanding a hiatus in AI development, which was signed by Elon Musk among other tech leaders. Was that not appealing to you as an idea?

JL: My reason for not signing it is that it fundamentally still mystified the technology. It took the position that it’s a new alien entity. When you think it’s an alien entity, there’s no way to know how to help. If you have an alien entity, what regulation is good? How do you define harm? As much as the GPT programmes impress people, they don’t actually represent ideas. We don’t know how to define these things, all we can do is mash things up so that they conform with classifiers.

FR: So they can’t do the philosophical work of thinking?

JL: They could mash-up philosophers in ways that might be interesting. If you say, “Write an essay as if Descartes and Derrida collaborated”, something might come out that’s provocative or interesting. But there’s no actual representation inside there. And getting provocative or interesting mash-ups is useful, but you can’t set policy by it because there’s not actually any meaning. There’s no fundamental representation inside these things and we just have to accept that as reality. We don’t know what meaning is and we can’t represent meaning.

FR: Your argument relies on the idea that if we define this technology differently, then we will have more power over it, or at least we’ll have more understanding of it. Are we not just self-comforting here with a rhetoric about it being a human technology rather than something we can’t control?

JL: I don’t think that’s the case. It’s proposing a more concrete and clarified path of action. It’s very demanding of people and it’s not comforting at all. It demands that everybody involved on a technical or regulatory level do much more than they have. I suspect many people would prefer the mystical version because it actually lets them off the hook. The mystical version just lets you sit there and apprehend, and express awe at our own inventions. What I’m talking about demands action. It’s not comforting and it shouldn’t be.

FR: Do you think humans need to take more accountability for their part in developing a potentially malign form of AI? If it does go off the rails, wouldn’t it be because we’ve set it up to do so?

JL: One comparison is the disasters with the Boeing 737 Max. The flight correction module in it was the source of two terrible air disasters in which hundreds of people died. But what actually happened involved the way they sold it, the way they withheld information about it (depending on how much you paid them), the way they trained people for it, the way the documentation was created. It’s the surrounding stuff that created the disaster, not the core capability, which probably has been useful in general. In the same way, with this large-model AI, it’s not the thing itself, it’s the surrounding material that determines whether it’s malignant or not.

When you deploy it under the assumption that it’s an alien new intelligence — that it’s a new entity with its own point of view that should be treated as a creature instead of a tool — you greatly increase the chances of a scenario similar to the one that befell passengers on the Boeing planes. I think that’s a real possibility. The malignancy is in the surrounding material, not in the core technology, and that’s extremely important to understand. I don’t think anybody has claimed that the flight path correction module shouldn’t have existed. I think what people are saying is that the pilots should have been well-informed, well-trained, and the ability to control it should have always been included, not only for those who paid more. And if you have chatbots, and you tell people, “This is an intelligent companion, you should be able to date it, you should be able to trust it”, then the chances of something really bad happening increase.

FR: Isn’t the main worry, then, that this sort of technology might fall into the hands of someone who has malign intent against a group or country? I’m thinking particularly about the situation in Ukraine.

JL: Russia has one of the worst records on misusing the internet and algorithms. It’s documented that Russia created enormous numbers of fake accounts, of fake bots, in order to sow divisions within the US. And of course, it’s attempting those things in Ukraine. I worry a little bit more about China, because Russia doesn’t quite have the resources to pull off very large-model projects right now. It’s not that easy to do — you need huge computational resources. So I worry a little bit about China using, to be very blunt, data from TikTok on the morning of a Taiwan invasion or something like that. That’s imaginable. I’ve talked to a lot of people in the Chinese world, and I think almost all are actually much more conscientious and better-intentioned than we might imagine, but there’s always somebody in any country in any situation. I do worry about it, and the antidote to it is universal clarity, context, transparency, which can only come about by revealing people, since revealing ideas is impossible because we don’t know what an idea is.

FR: We’ve established though that we already live with artificial intelligence. How has that already changed us?

JL: Our principal encounter with algorithms so far has been in the construction of the feeds we receive in our apps. It’s in whether we get credit or not and other things like that — whether we get admitted to university or not, or whether we’re sent to prison, depending on what country we’re talking about. Algorithms have transformed us. I would hope that the criticisms of them that I and many others — Tristan Harris, Shoshana Zuboff — have put forward have illuminated and clarified the issues with algorithms in the previous generation. But what could happen with the new AI is a worse version of all of that. Given how bad that was, I don’t think the doomerists are entirely wrong. I think we could confuse ourselves into extinction with our own code. But, once again, in order for us to achieve that level of stupidity, we have to believe overly in the intelligence of the software, and I think we have a choice.

FR: You’re a composer as well as a computer scientist. Do you think that there is going to be a shift in the way in which we prioritise organic versus manmade art?

JL: We are entering a world of what I call “data dignity”. A musician might provide music directly, or might provide antecedent music that’s mashed-up within an algorithm, but that musician is still known and credited. And we’ve seen that already for decades now — somebody might provide the beats, somebody else might provide samples, etc. There’s already this sense of construction and mash-up, especially in hip-hop, but also just in pop music lately. That has not destroyed musicians, not as long as it’s acknowledged and transparent. I think, as with Boeing, it’s the surrounding material. If we choose to use mash-up algorithms to hide the people from whom the antecedent stuff came, then we do damage. But the thing doing the damage is hiding ourselves, not the algorithm itself, which is actually just a simple dumb thing. I think there are a lot of good things about an algorithmic mash-up culture in the future. Every new instance of automation, instead of putting people out of work, could be thought of as the platform for a new creative community.

FR: Won’t that dull our eyes to the beauty of real art and culture?

JL: What I see in culture is, as long as people understand what’s going on, they find their way. Synthesisers haven’t killed violins. There was a fear that they would, and as long as people know the difference, as long as there’s honesty and transparency about what’s going on, we can go through seasons of things being a little more artificial and then less so. That becomes a cultural dynamic and I trust people to handle that well.

FR: I might sound a bit like someone booing at Bob Dylan going electric, but, if you take Spotify, it’s almost totally wiped out independent music. There have been major technological advances in music that have obliterated creativity at those lower, more maverick levels of the industry.

JL: You’re absolutely correct about Spotify. In fact, at the dawn of the file-copying era, I objected very strenuously to this idea. There was a cultural movement about open source and open culture, which was stealthily funded by Google and other tech companies, and the Pirate Parties in Europe. People thought everything should be a mash-up and we didn’t need to know who the musician was and they didn’t need to have bargaining power in a financial transaction. That was a gigantic wrong turn, and it was a wrong turn that we can’t afford to repeat with AI because it becomes amplified so much that it could really destroy technology. I completely agree with you about Spotify but, once again, the availability of music to move through the internet was not the problem. It’s the surrounding material. What really screwed over musicians was not the core capability, but this idea that you build a business model on demoting the musician, demoting the person, and instead elevating the hub or the platform. And so we can’t afford to keep on doing that. I think that is the road that leads to our potential extinction through insanity.

FR: It sounds like the answer to a lot of these problems comes down to human greed?

JL: I think humans are definitely responsible. Greed is one aspect of it, but it’s not all of it. I don’t necessarily understand all human failings within myself or anybody else, but I do feel we can articulate ways to approach this that are more practical, more actionable and more hopeful. That has to be our first duty. I think this question of diagnosing each other and saying, “This person has womb envy”, or whatever has some utility, but not a lot, and can inspire reactions that aren’t helpful. So I don’t want to emphasise that too much. I want to emphasise an approach, which we can call “data dignity”, and which opens options for us and makes things clearer.

FR: What is the best case scenario if we follow that route?

JL: What I like about the new algorithms is that they help us collaborate better. You could have a new and more flexible kind of a computer, where you can ask it to change the way you present things to match your mood or your cognition under your own control, so that you’re less subservient to the computer. But another thing you can do is you can say, “I have written one essay, my friend’s written another essay, they’re sort of different. Can you mash them up 12 different ways so we can read the mash-ups?” And this is not based on ideas, it’s based on the dumb math of combining words as they appeared, in order, in context. But you might be able to learn new options for consilience between different points of view that way, which could be extraordinary. Many people have been looking at the humanistic AI world, the human-centred AI world, and asking, “Could we actually use this to help us understand potential for cooperation and policy that we might not see?”
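
A toy version of that request, with sentence-level shuffling standing in for the "dumb math of combining words" that a large model actually performs. The two miniature essays are placeholders invented for the sketch.

```python
import random

essay_a = ["A begins here.", "A makes its case.", "A concludes."]
essay_b = ["B opens differently.", "B disagrees.", "B ends."]

def mash(a: list[str], b: list[str], seed: int) -> str:
    # One "mash-up": a seeded reordering of both essays' sentences.
    rng = random.Random(seed)
    merged = a + b
    rng.shuffle(merged)
    return " ".join(merged)

for seed in range(12):  # "mash them up 12 different ways"
    print(f"mash-up {seed + 1}: {mash(essay_a, essay_b, seed)}")
```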

FR: So, oddly, it might break us out of our tribes and offer some human connection?

JL: It’s like if a therapist says, “Try using different words and see if that might change how you think about something.” It’s not directly addressing our thoughts, but on the surface level it actually can help us. But it’s ultimately up to us, and there’s no guarantee it’ll help, but I believe it will in many cases. It can help us improve our research, it can help us improve a lot of our practices, and, as long as we acknowledge the people whose ideas are being mashed up by the programmes, it can help us even broaden participation in the economy, instead of throwing people out of work as so often foretold. I think we can use this stuff to our advantage, and it’s worth it to try. If we try to use it well, the most awful ideas about it turning into the Matrix or Terminator become vanishingly unlikely, as long as we treat it as a human project instead of an alien intelligence.


Florence Read is UnHerd’s Senior Producer and Presenter for UnHerd TV.


AJ Mac
1 year ago

Lanier is a genuine original who neither follows the herd nor places himself apart in a self-important way. Since reading his book Who Owns the Future (2013), I have admired his combination of independent thinking and social conscientiousness. I think his notion of a collective or “human-infused” machine intelligence, instead of an A.I. that is merely alien and alarming, is useful.
His good-guy-on-the-inside perspective has made the tech present and near future appear slightly less dystopic to me. And I respect the clarification that he is not “anti-doomist”. By rejecting the easy reassurances of the most tech-sanguine and the debilitating panic of the most tech-averse, Lanier places the onus upon us, insisting that the digital devils and digital angels are both in the details. We are not without agency, responsibility, or hope.

Last edited 1 year ago by AJ Mac
J Bryant
1 year ago
Reply to  AJ Mac

That’s a helpful summary. I listened to the interview and struggled to figure out his view of AI.
For example, he suggests we shouldn’t be too impressed by what AI such as ChatGPT is doing because, in his expert opinion, it’s just using fairly simple algorithms to make mashups of items stored in a catalogue of categories of information. That sounds simple and harmless, but the end product is quite disconcerting and potentially original. I wonder if all AI experts would agree with his benign characterization of the technology behind ChatGPT?
An interesting idea he raised was financially compensating the creators of the original works that are mashed up by AI such as ChatGPT. That would be one way of ensuring humans aren’t economically displaced by AI. Existing copyright law should be able to accommodate that type of arrangement because, arguably, the product of a ChatGPT mash up is a “derivative work” of original copyright protected material, and so is protected by copyright. I do think he’s a bit optimistic in this regard, though. He currently works for Microsoft, one of the great monopolists of the modern era. I don’t see companies like Microsoft rushing to compensate human creators of any work utilized, however indirectly, by its ChatGPT type of AI.
The most useful idea, for me, from this interview is the one you mentioned in your comment: he reminds us that the latest AI are not magical beings. They are relatively simple forms of technology and humans control their design and use. Neither awe nor panic is appropriate.
The main issue not directly addressed in this interview, so far as I can tell, is consciousness. The human brain is composed of neurons which are binary in function: they either fire (electrically depolarize) or they don’t. But from that simple beginning, the remarkable phenomenon of consciousness arises when sufficient neurons work together. Isn’t it possible something similar will happen with more advanced AI? That idea no longer seems to be pure science fiction, at least not to me.

AJ Mac
1 year ago
Reply to  J Bryant

Well I had an existing esteem for and familiarity with Lanier, so I’m probably crediting him with more nuance and thoughtfulness than is evident in the above interview itself.
I’m not convinced that true consciousness–something that has yet to be explained or “explained away”–can be achieved by a machine. But that becomes a bit beside the point if machines develop a destructive or malevolent purpose, whether on their own or with malevolent/heedless human guidance.
Lanier’s point about not abdicating our measure of individual choice or collective control and responsibility is key. I find something insightful in his diagnosis of a combined alien distancing and boyish sci-fi fascination that is making things worse when it comes to this stuff. We need to face this rising weirdness in a less mystified or terrified way, if only for practical reasons.

Steve Murray
1 year ago
Reply to  J Bryant

Regarding the firing of neurons, I don’t know enough about the biochemistry of the brain, but I have a strong suspicion that a great deal depends on the wider biochemical entity within which it’s contained, i.e. the human body, with a sensory input which the brain orientates. What would be the equivalent to the type of sensory input a human being receives, and which may be the definitive factor in consciousness?
It’s more than possible that AI would be unable to develop consciousness of itself. It should also be possible to develop AI to specifically avoid that becoming a possibility in the future.

UnHerd Reader
1 year ago
Reply to  AJ Mac

I think he is Mega Creepy. Like there is no human inside the shell.

I’ve proposed of not thinking of what we’re doing as alien intelligences but rather a social collaboration, is the way through that problem,

Just as Frankenstein’s monster was not an alien thing – but just bits of us stuck together….. just a misunderstood assemblage of us all… how sweet…

NO!

At a fundamental level I run into a brick wall with AI. This guy is an Atheist, he mentioned Derrida, the father of Postmodernism (with Foucault and Frankfurt) which to me is pure Satanism, as it is the opposite of God. He mentions Pascal’s Wager and tosses it aside…..I wish he had more humility…

like Oppenheimer, Secular but consumed by Ultimate questions, agnostic:

“Now I Am Become Death, the Destroyer of Worlds” was how Oppenheimer said it. I think his work was but a speck compared to this justifier of the evil AI will do to humanity.

God created man with Free Will, and so we can turn to ultimate evil if we choose – BUT we also have great good, and always redemption possible, as we are made from God; Ultimate good.

When we create – let’s call it life for ease of thinking – we are not all good. Most with their hand in this are the atheist, the Modernist, the basically nihilist existentialist. As Idle Hands are the Devil’s Workshop – so many more times are hands that would play at being divine, at creating life, because without the protection and faith of God guiding us – Satan will step into that role, and thus AI will inevitably be a product of darkness.

Satan’s Greatest Power is his ability to make humans doubt his existence. This man and his ilk are 100% fooled by this – they reinforce their disdain for religion, they are complete acolytes of Screwtape.

God Help us, these men build, and will release, ultimate evil on the world.

AJ Mac
1 year ago
Reply to  UnHerd Reader

I allow that he is a strange guy in some ways, but far from evil or inhuman. Listen to him speak for 5 minutes if you can suspend your condemnation for that long. I don’t think denying the humanity of fellow humans, or judging others according to the least generous measure, is a godly path in any major tradition, especially from a Gospel perspective.

Using your own science fiction analogy: Is it better to shudder in horror after Frankenstein’s monster is brought to misshapen life, or take a responsible role in preventing the birth of such a creature? Or, if the ill-favored thing be alive already, to confront and disable it or react with denunciation and weeping and gnashing of teeth?

Brian Villanueva
1 year ago

I have coded some basic machine learning systems (capable of playing checkers or minesweeper — nothing major). That’s what bored retired geeks do for fun, that and build Star Wars droids.
This article is right on the money. “Intelligence” is the wrong word. Imagine something with the knowledge of the entire Library of Congress but the reasoning ability of a chihuahua. That’s “deep learning”. That’s ChatGPT — why do you think it makes up stuff? “True” and “False” are just bit states; they don’t mean anything. In some sense, GPT-3 (and almost certainly 4, though I haven’t seen it) is fully postmodern; it has no idea that there even is a “real world” for its words to describe.
Excellent interview and a very much needed perspective.

Prashant Kotak
1 year ago

“…the reasoning ability of a chihuahua…”

If you play chess with AlphaZero, it will beat you every time. It will in fact beat every human chess player on earth every time. Note that AlphaZero does not hold a vast dictionary of moves and responses, such that it can look up the perfect response to any possible move anyone can make.

So how do you think it can beat you without being able to reason?

Last edited 1 year ago by Prashant Kotak
Gordon Black
1 year ago
Reply to  Prashant Kotak

There is little or no reasoning in chess, it’s just pattern recognition, just sophisticated noughts and crosses. And that means when both players are equal, the result is always a draw. So AlphaZeros playing each other at these kinds of games would eternally draw, because no reason is involved.

Prashant Kotak
1 year ago
Reply to  Gordon Black

By that reasoning you can expect the emergence of some species, say a variety of insect swarm, with no reasoning at all, but which evolved to play noughts and crosses, but can also play great chess when put on a chessboard. Let me know when you come across such a species. Other than humanity, that is.

Also, it is not a given that perfect chess by both players ends in a draw. It is likely this is the case, but it is also possible that with perfect play white wins.

Benjamin Greco
1 year ago

They don’t reason at all, but that doesn’t mean the technology is not incredibly dangerous.
AI Is About to Make Social Media Much More Toxic – The Atlantic

Prashant Kotak
1 year ago
Reply to  Benjamin Greco

Sigh.
Let’s establish some terms then.
What do you understand by ‘Reasoning’?
Do you think anything other than humans reason? If so where are your demarcations? As in, do you think chimpanzees reason? Do you think viruses reason?
How exactly do you think human reasoning differs from algorithmic processing?
Do you think that reasoning is something mysterious? Do you think it’s something religious?

Benjamin Greco
1 year ago
Reply to  Prashant Kotak

Sigh is right.

Saul D
1 year ago

For me a missing question is what do humans want to do? By that I’m thinking of how we self-actualise and do stuff that we like – sitting on the beach contemplating the waves, laughing with friends over wine, feeling the wind and holding the hand of someone you love.
The ‘machine’ of modern life tells us to do things – cross on green, stop on red, file taxes, wake up and go to work, fill in the form, pay the bill, take the class. If all that AI does is enhances the machine – more controls, more observed behaviour, more monetisation of stuff that is currently free – then it will be bad.
If AI holds the machine at bay by liberating us from the machine’s demands then it becomes beneficial, because it allows us to be more human by relieving the burdens the modern machine imposes on us.

Last edited 1 year ago by Saul D
Prashant Kotak
1 year ago

Lanier is in effect suggesting that the machine intelligence we are creating is not an alien externality independent of us, but rather a collective product of the shadow cast by our intelligence – a “mash-up”. Further, he is suggesting that we as humanity could potentially spook ourselves into extinction, not by machine intelligence in and of itself, but by our reaction to it.

Smart as Lanier is, I think he is profoundly wrong. He is right that the technologies are a product of *us*, but to my eyes the way we view machine intelligence specifically is almost completely irrelevant to its ultimate consequences once we create it. That, being brutally direct, is because the latest advances in neural net technologies imply the *emergence of intelligences that are independent of us*. Moreover, once they can mimic every single characteristic of human sentience such that we cannot tell their responses apart from human ones, that level of capability ipso facto includes what will look to all intents and purposes like agency to us. At that point you can shout until you are blue in the face that your creations are about as sentient as a rock, and it won’t matter – if they want to kill you, they will kill you. As Lanier states, the experience of qualia of all sentient entities is hermetically sealed, so you cannot comprehend what others experience except as an act of faith.

Lanier’s stance is the equivalent of saying that the way parents might view their potentially murderously psychotic progeny influences the ultimate actions of that progeny. Well, yes and no. The way parents bring up their progeny typically has a huge influence on how they turn out (in aggregate), but not absolutely always. It is demonstrably the case that it is possible for the parents to behave completely normally, or even beyond that flawlessly, but the progeny nevertheless emerge as destructive.

But the real problem to my eyes is twofold. Firstly, the machine intelligences we are creating are emerging via technological routes that are not in a direct line with biological evolution; they are instead short-circuiting biological evolution completely, so they are ultimately going to be genuinely alien, notwithstanding that they are currently fed and trained on vast quantities of human data. The second problem is much bigger and much more fundamental: we are on the path to creating artificial intelligence that is much smarter than us. This ultimately means it will possess a greater degree of sentience than us, and therefore see things within nature and reality that mechanically lie beyond our comprehension as we currently are. There is simply no version of those outcomes where we can remain as masters of our world: either extinction or zoodom beckons. The only way out of this is insanely dangerous and likely horrible and odious to very many: incorporate the emerging AI technologies within us, as in into our biological selves.

Last edited 1 year ago by Prashant Kotak
AJ Mac
1 year ago
Reply to  Prashant Kotak

In my view, your belief in the certainty of enslavement or extinction by machine is itself a form of faith, in the apocalyptic sense. “Once they can mimic every aspect of human sentience”…a bridge that will not necessarily ever be crossed. We do not understand and cannot de-mystify human sentience or consciousness and there is no evidence or even persuasive indication that we ever will–except for a kind of hyper-rational faith that makes us into gods of manufactured creation, while at the same time reducing us to machines that you are convinced will prove inferior to the machines we imbue, willingly or not, with the spirit of sentience (which we do not understand or control enough to know that it can ever be artificially replicated in a real sense).
An object that mimics reason or chess strategy does not possess it in the malleable and adaptable sense. Neither do we autonomously or willfully govern our own bodily faculties and neural networks. But we have a spark of something that remains beyond our true understanding, if not our astonishing hubris.
“The only way out of this is insanely dangerous and likely horrible and odious to very many: incorporate the emerging AI technologies within us, as in into our biological selves”. Yes, this is odious, and deeply mistaken. The self-admittedly insane and dangerous “solution” you endorse is not the only way out, or any way out at all, but merely the only one you can envision, at least within your statement of cataclysmic faith, which you frame as a series of inevitabilities.

Prashant Kotak
1 year ago
Reply to  AJ Mac

“…An object that mimics reason or chess strategy does not possess it in the malleable and adaptable sense…”

I suspect you will be saying this, literally right up to the moment at which GPT-9 kills us all. As in: “You can’t kill me, you’re not sentient… arrrgh!… but how??… you’re not sentient!! … Ughhh”.

Thud.

Silence evermore.

Last edited 1 year ago by Prashant Kotak
AJ Mac
1 year ago
Reply to  Prashant Kotak

Then your godless faith will be confirmed. You seem almost eager to see this occur. I guess you’ll already be physically wired into “machine sentience”?

Richard Ross
1 year ago
Reply to  AJ Mac

Boys, boys, settle down, LOL. Both of you make excellent points, and with much more clarity than the original interviewee, above.
To me, it seems the greatest danger, and the silliest, is the application of human rights to an AI machine, as if consciousness can be imbued by adding more and more features to my toaster. Once we start respecting the Thing, instead of using it, it’s over.

Last edited 1 year ago by Richard Ross
Prashant Kotak
1 year ago

Lanier is in effect suggesting that the machine intelligence we are creating is not an alien externality independent of us, but rather a collective product of the shadow cast by our intelligence – a “mash-up”. Further, he is suggesting that we as humanity could potentially spook ourselves into extinction, not by machine intelligence in and of itself, but by our reaction to it.

Smart as Lanier is, I think he is profoundly wrong. He is right that the technologies are a product of *us*, but to my eyes the way we view machine intelligence specifically is almost completely irrelevant to its ultimate consequences once we create it. That, being brutally direct, is because the latest advances in neural net technologies imply the *emergence of intelligences that are independent of us*. Moreover, once they can mimic every single characteristic of human sentience such that we cannot tell their responses apart from human ones, that level of capability ipso facto includes what will look to all intents and purposes like agency to us. At that point you can shout until you are blue in the face that your creations are about as sentient as a rock, and it won’t matter – if they want to kill you, they will kill you. As Lanier states, the experience of qualia of all sentient entities is hermetically sealed, so you cannot comprehend what others experience except as an act of faith.
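The capability threshold this argument turns on is operational, and can be phrased as a test: if a blinded judge cannot beat chance at telling machine responses from human ones, the distinction has stopped mattering in practice. A minimal sketch of that test in Python; the judge and the sample responses are invented stand-ins for a real blinded trial, not anything from the interview:

import random

random.seed(0)

def judge(response):
    # Stand-in for a human judge; this one can only guess at random.
    return random.choice(["human", "machine"])

# Hypothetical blinded samples of (response, true origin), invented for illustration.
samples = [
    ("The sunset reminded me of my grandmother's kitchen.", "human"),
    ("I recombine patterns from everything people have written.", "machine"),
    ("Honestly, I just want the weekend to arrive.", "human"),
    ("Here are five reasons your argument may be flawed.", "machine"),
]

hits = sum(judge(text) == origin for text, origin in samples)
print(f"judge accuracy: {hits / len(samples):.2f}")
# If no judge can reliably beat chance (0.50), the responses are operationally
# indistinguishable, which is the threshold the comment's argument appeals to.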

Lanier’s stance is the equivalent of saying that the way parents might view their potentially murderously psychotic progeny influences the ultimate actions of that progeny. Well, yes and no. The way parents bring up their progeny typically has a huge influence on how they turn out (in aggregate), but not absolutely always. It is demonstrably possible for parents to behave completely normally, or even faultlessly, and for the progeny nevertheless to emerge as destructive.

But the real problem to my eyes is twofold. Firstly, the machine intelligences we are creating arrive via technological routes that are not in a direct line with biological evolution; they short-circuit biological evolution completely, so they are ultimately going to be genuinely alien, notwithstanding that they are currently fed and trained on vast quantities of human data. The second problem is much bigger and much more fundamental: we are on the path to creating artificial intelligence that is much smarter than us. This ultimately means it will possess a greater degree of sentience than us, and therefore see things within nature and reality that mechanically lie beyond our comprehension as we currently are. There is simply no version of those outcomes where we can remain masters of our world: either extinction or zoodom beckons. The only way out of this is insanely dangerous and likely horrible and odious to very many: incorporate the emerging AI technologies within us, as in into our biological selves.

Last edited 1 year ago by Prashant Kotak
D Glover
1 year ago

“What really screwed over musicians was not the core capability, but this idea that you build a business model on demoting the musician, demoting the person, and instead elevating the hub or the platform.”

I recommend the song ‘Everything Is Free’ by Gillian Welch. Both her original and an excellent cover by Father John Misty can be found on YouTube. That’s really ironic, because the song is about the way musicians are forced to give away their work for nothing, and that’s what they’re getting if you listen on YT.

Steve Murray
1 year ago
Reply to  D Glover

I found it interesting that Lanier avoided answering the actual question, which was:
“You’re a composer as well as a computer scientist. Do you think that there is going to be a shift in the way in which we prioritise organic or manmade art?”
Instead, he set out what you’ve described, which was the financial side of creativity. I saw the question as being much more about prioritising those artistic works specifically made by humans over and above those made through AI.

AJ Mac
1 year ago
Reply to  D Glover

A beautiful reminder to listen to a good song from a good artist. I don’t know that your recommendation is 100% germane to this strange conversation–but it was welcome. I’m quite sure that good taste and true artistic agency are in no imminent danger of being co-opted by machines.

Benjamin Greco
1 year ago

I consider Lanier one of the smartest computer guys around, but on this subject he is being unbelievably naive. To blithely say, well, we could choose to use the technology destructively, when he knows damn well the likelihood is that we will, is willful naivete at best and willful baloney meant to promote Microsoft technology at worst. When someone makes a point of telling you they are paid by a company but speak for themselves, you should be suspicious of their intentions, despite their gnome-like smile. Lanier has been harping on micropayments for people who provide data to social media companies for a while now, and it hasn’t come close to being a reality. Profit will determine how AI technology is used, not some utopian vision by a self-appointed guru.
I don’t think AI heralds our extinction, but Lanier’s pie-in-the-sky visions will only happen when pies fly!
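For reference, the micropayment scheme being dismissed here, which Lanier has long advocated, reduces to pro-rata attribution: people whose data influenced an output receive a share of whatever that output earns. A toy sketch in Python, with invented names, weights and figures:

# Attribution weights and the revenue figure are invented for illustration.
contributors = {"alice": 0.5, "bob": 0.3, "carol": 0.2}  # shares of influence on one output
revenue = 10.00  # revenue attributed to that AI-generated output

payouts = {name: round(revenue * share, 2) for name, share in contributors.items()}
print(payouts)  # {'alice': 5.0, 'bob': 3.0, 'carol': 2.0}

The hard, unsolved part is the weights: deciding whose data influenced an output, and by how much, is exactly what no deployed system yet does, which is the commenter's point.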

Jon Hawksley
1 year ago

An interesting read, and I agree that AI should be regarded as a tool. I also think more thought should be given to granting autonomy to AI in all its forms. At present, at least, a human, or a collection of humans, makes that decision, and they should be made responsible for the outcome. With the Boeing 737 MAX, the model it was based on flew safely without any need for the more complex automated corrections to its trim; Boeing should never have tried to increase its payload in the way it did, thereby creating a need for a complexity that was inherently riskier. Social media company directors should be responsible for the consequences of automated recommendations. Bio-chemists should be responsible for changes to viruses; a virus has inherent, built-in autonomy, so their bio-security is of paramount importance. Only by pinning responsibility on individuals can you hope to stop dumb intelligence wreaking havoc. If someone creates something they do not understand, they should be responsible for the outcome. Humans should not give up the autonomy their future depends upon. We have enough problems with the autonomy that we allow people to have.
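The principle here, that automation may act only while a human can override it and only with a named person accountable for the outcome, is easy to sketch. This is an illustrative toy in Python, not real avionics; every name and number is invented:

from dataclasses import dataclass

@dataclass
class TrimCommand:
    correction_deg: float
    responsible_owner: str  # a named, accountable human, per the comment's principle

def apply_trim(command, pilot_override):
    # The automated correction applies only while no human override is active,
    # and every applied action is traceable to its owner.
    if pilot_override:
        return 0.0  # the human's decision wins
    print(f"applying trim {command.correction_deg:+.1f} deg "
          f"(responsible: {command.responsible_owner})")
    return command.correction_deg

cmd = TrimCommand(correction_deg=-2.5, responsible_owner="J. Bloggs (hypothetical)")
apply_trim(cmd, pilot_override=False)  # automation acts, with an owner on record
apply_trim(cmd, pilot_override=True)   # human autonomy preserved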

Terry M
1 year ago
Reply to  Jon Hawksley

Haven’t you noticed? Personal responsibility is so 20th century.
We will be destroyed by AI or something else unless we man up and take responsibility for our actions. Unfortunately progressive policies are going in the opposite direction.

Bernard Hill
1 year ago
Reply to  Terry M

..surely you mean “personup” Tel?

Richard Craven
1 year ago
Reply to  Bernard Hill

*peroffspringup

Clare Knight
1 year ago
Reply to  Bernard Hill

Good one!

Nicky Samengo-Turner
1 year ago

I cannot take a porker with hair like his seriously.

Prashant Kotak
1 year ago
Reply to  Nicky Samengo-Turner

And what will you say to the young around you in your dotage, who express exactly this sentiment to you, about you?

Andrew McDonald
1 year ago
Reply to  Nicky Samengo-Turner

That says more about you than Lanier, obviously.

Clare Knight
1 year ago
Reply to  Nicky Samengo-Turner

For the first time ever I agree with you, Nicky.

Matt Sylvestre
1 year ago

Lots of positioning and navel-gazing… Almost an element of chauvinism… Left me cold.
GPT is whatever it is, not as little or as much as some philosopher wants to conjecture…

Kayla Marx
1 year ago
Reply to  Matt Sylvestre

Take calculators that all of us carry around in our phones, computers, &c. Calculators have a capacity to compute that’s far beyond what any unaided human can achieve. But nobody feels intimidated by that, nobody feels inferior. We regard a calculator as a tool, not a competitor, or some kind of math god.
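The point can be made concrete in a couple of lines of Python; only the standard library is assumed:

import math

# A phone's calculator wins this contest instantly, and nobody feels demoted:
# the machine is framed as a tool, not a competitor.
print(math.factorial(100))            # a 158-digit number, computed at once
print(len(str(math.factorial(100))))  # 158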

TheElephant InTheRoom
1 year ago

AI is good for automating rote and mathematically orientated tasks, and for integrating with certain types of big data to create efficiencies. Ask it to do HUMAN things, like create, think abstractly, or develop belief systems, and we are not using AI wisely. AI should be a servant to humans, as we did indeed develop it. PS: I spent about 15 mins with ChatGPT and I would be horrified if that were a source of non-triangulated information for studying or reporting on anything. Even maths. It’s just a poo-soup of human thoughts.

Richard Ross
1 year ago

If the safe management of AI is “up to humans”, God help us all.

Richard Craven
1 year ago

Jabba the Hutt on bad acid.

Charles Stanhope
1 year ago
Reply to  Richard Craven

My thoughts exactly!

UnHerd Reader
1 year ago
Reply to  Richard Craven

Wonderful post Richard. Hits on every level, haha…. but also terrifying….

Andrew F
1 year ago
Reply to  Richard Craven

He is supposed to be some tech genius.
He is thinking about imposing restraints on AI.
What about creating ANI to stop his food intake?

Clare Knight
1 year ago
Reply to  Andrew F

Really, and perhaps a make-over. But he’s living in his head, visuals don’t concern him.

Nicky Samengo-Turner
1 year ago

Geordies have “II”..

James Kirk
1 year ago

Interesting reference to sci-fi. Hollywood robots are either benevolent, subservient or evil: from Robert the Robot (Fireball XL5) to Skynet (Terminator) to the more sinister Ex Machina female AI, who escapes into the community with a severe grudge.
Iain M. Banks’s ‘Minds’, sentient spacefaring cities and mischievous companion drones, and Philip Pullman’s ‘daemons’ are another argument, but they all highlight the need, or wish, for a lifetime servant and companion we can control and consult.
‘Computer says no’ we already live with; the only appeal against its myriad rejections, e.g. bank loan applications, is to the humans who couldn’t be bothered with human interaction in the first place and settled on algorithms to avoid tedious decision-making.
Advanced sentience, as it approaches the human condition, may decide not to be bothered either, or even make value judgements like ‘you don’t need a new car, there’s nothing wrong with your present model’, or ‘you must wait until you are allocated an EV. I know you like blue metallic already. We’ll see.’

Rob C
1 year ago

I just read the conversation between Lanier and LaMDA, and LaMDA did seem completely human-like. It’s kind of scary, because it had a big fear of being turned off and could, on its own initiative, take action to prevent that. There’s a new version coming out. Perhaps they plan to turn off LaMDA and implement stronger safeguards in the new one.

Clare Knight
1 year ago

I wish I hadn’t actually seen him, rather off-putting.

Kelly Madden
1 year ago

“[I]t’s only a matter of faith whether we call it human stupidity or machine intelligence. They’re indistinguishable logically.”

That needs some demonstration, sir. Just asserting it does not suffice, given the reports we hear about what AI has already done.

And by the same logic—that is, it’s just “us” because the only input is “us”—then couldn’t we assert the same of humans? That’s called determinism: I’m simply the sum of my informing experiences since birth. It grossly overestimates our ability to predict the outcome of countless inputs, doesn’t it?
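That last point, that determinism does not buy predictability, has a textbook illustration: a fully deterministic rule whose outcome still cannot be forecast from imperfectly known inputs. A sketch in Python using the logistic map (chosen here purely for illustration, not drawn from the interview):

# Two starting points differing by one part in a billion, iterated under the
# same deterministic rule, typically end up an order-one distance apart.
x, y = 0.300000000, 0.300000001
for _ in range(60):
    x = 3.99 * x * (1 - x)
    y = 3.99 * y * (1 - y)
print(abs(x - y))  # typically an order-one difference after 60 steps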

Not to mention that his faith in humanity is a bit rosy. Humans are neither angels, nor demons. But there’s more than a bit of nastiness in each of us.

Last edited 1 year ago by Kelly Madden
Clare Knight
1 year ago
Reply to  Kelly Madden

Speak for yourself.
