May 8, 2023

Jaron Lanier uniquely straddles the worlds of computer science and philosophy. Born in 1960, he was an academic child prodigy. He enrolled at New Mexico State University aged 13, joined Atari at 23, after which he became a pioneer in the field of virtual reality, developing the first VR headsets and gloves in the Eighties. He has worked at Microsoft since 2006, but has also developed a parallel career as a public intellectual. In recent years, he has emerged as a prominent critic of digital culture and the way social media algorithms aggravate the crudest of human tendencies — his last book was titled Ten Arguments for Deleting Your Social Media Accounts Right Now.

This week, Lanier joined Florence Read to discuss AI, the possibility of machine consciousness, and why he still has faith in humanity. Below is an edited transcript:

Florence Read: You recently wrote an essay for the New Yorker with the seemingly phlegmatic title, “There is no AI”. Does that mean you don’t think recent developments are a problem?

Jaron Lanier: I actually have publicly stated that I think we could use the new technologies (as well as other technologies) to destroy ourselves. My difference with my colleagues is that I think the way we characterise the technology can have an influence on our options and abilities to handle it. And I think treating AI as this new, alien intelligence reduces our choices and has a way of paralysing us. An alternate take is to see it as a new form of social collaboration, where it’s just made of us. It’s a giant mash-up of human expression, which opens up channels for addressing issues and makes us more sane and more competent. So I make a pragmatic argument to not think of the new technologies as alien intelligences, but instead as human social collaborations.

FR: But as someone who works at Microsoft, at the heart of the AI revolution, is it not easier for you to take the more pragmatic view over the hysterical?

JL: I have a really unusual role in the tech world. It shouldn’t be unusual; I think it should be more common. Essentially, I am speaking my mind honestly, even though I’m on the inside of the castle instead of on the outside throwing stones at the castle. In my opinion, both positions should be well-manned. I don’t think there’s any perfect way to handle anything. One is always somewhat compromised. Microsoft and I have come to an accord, where I have, what you might call, academic freedom. I speak my mind, I speak things as I see them but I also don’t speak for the company. And we make that distinction. It allows me to maintain my public intellectual life but also work inside.

I don’t necessarily find agreement with everybody I work with, nor do I find absolute disagreement. For instance, Sam Altman from OpenAI really liked my New Yorker piece. I don’t think he agrees with it entirely but he said he agrees with it mostly. That’s great. I think having some degree of openness within the big tech companies is a healthy thing. Within Microsoft, there are now a few other figures who at least somewhat speak their minds. I’m hoping that this demonstration, that a tech company can be successful while allowing essentially free speech within its research community, can be a precedent that other tech companies follow. I’d like to see Google and Meta and Apple do a little bit more of that.

FR: What’s odd about the AI discussion is that so many of the people working on it — including Sam Altman — are also the ones sharing their deep existential fears about what AI will do to humanity.

JL: I have tried to understand that myself for decades. I think part of it is, we simultaneously live in a science-fiction universe, where we’re living out the science fiction we grew up with. If you grew up on the Terminator movies, and the Matrix movies and Commander Data from Star Trek, naturally what you want to do is realise this idea of AI. It just seems like your destiny. But then another part of you is thinking, “But in most of those stories, with Commander Data being the exception, this was horrible for mankind.” It feels responsible to acknowledge that it could be horrible for mankind, and yet at the same time, you keep on doing it. It’s weird, and I believe the approach that I’ve proposed, of not thinking of what we’re doing as alien intelligences but rather as a social collaboration, is the way through that problem, because it’s a way of framing it that’s equally valid, but actionable. But within the tech world, giving up those childhood science-fiction fantasies that we grew up with is really hard for people.

FR: Of course, we used to call the internet a “Wild West”, which played into this mythology — as though there’s a cowboy in every one of these computer scientists who wants to find this new frontier.

JL: I think that’s true too. I grew up in rural New Mexico in the Sixties, when it was still not that economically developed. So I actually got to experience a little bit of the tail end of the Wild West. And I can assure you that it was miserable, and it’s not something anybody would want, but the version of it in the movies is very appealing. And it does bring up a sort of a strange gender-identity connection. Recently, I was on a morning TV talk show in the US, and one of the hosts was a woman who said to me, “It just seems like there’s a lot of male fantasy in the AI world. Shouldn’t there be more women AI leaders?” And I said, “Well, there are some spectacular women AI leaders, and actually, there does tend to be some sort of a difference where the women seem to be a little more humanistic.” In the YouTube version of that, they cut out the whole exchange about women, and I called and asked about it and they said, “It just seemed like a niche question, so we cut it out.” And it’s not, it’s a very central question.

FR: I do have a kind of sense that there is something Promethean about it, that for many men, this is the first time they’ve been able to create life from nothing.

JL: When I was a kid, I always used to say AI is really just womb envy. And having had a child and seeing what it’s actually like for a woman to bear a child, I no longer have womb envy. I now appreciate it’s actually a rather difficult process for the mother and I didn’t know that when I was a young man. I will say that it’s not just men, but it tends to be men who haven’t had kids yet who might have that desire to create life in the computer.

FR: You distinguish between two types of AI, between this “alien entity” to which many of your colleagues seem to attribute the spark of life, and your version, which is, in fact, just a network of connections between humans.

JL: The term AI is very wiggly, and gets applied to all kinds of things. But usually these days when we talk about AI, we’re talking about these large AI models like the GPT programmes. What they are is giant mash-ups of human creations. If you ask one of these programmes to create a new image for you — like, I’d like to see London as if it were a cross between London and Gurwat — it can probably synthesise that. But the way it does so is by using classifiers to identify the images that match the components of your request, and mashing them up. Managing the whole scale so it can happen quickly is not so simple, but the basic idea is pretty simple.

Now, I happen to think that’s a great capability with a lot of uses. I love the idea of computers just getting more flexible. It creates the possibility of saying, “Can you reconfigure this computer experience to work for somebody who’s colourblind?” instead of demanding that people conform to computer design. There’s a potential in this flexibility to really improve computation on many levels and make it much better for people. But, if you want to, you can perceive it as a new intelligence. And, to me, if you perceive it as a new intelligence, what you’re really doing is shutting yourself off in order to worship the code, which I think is exactly the wrong thing. It makes you less able to make good decisions.

You’ve probably heard of the Turing test, which was one of the original thought-experiments about artificial intelligence. There’s this idea that if a human judge can’t distinguish whether something came from a person or computer, then we should treat the computer as having equal rights. And the problem with that is that it’s also possible that the judge became stupid. There’s no guarantee that it wasn’t the judge who changed rather than the computer. The problem with treating the output of GPT as if it’s an alien intelligence, which many people enjoy doing, is that you can’t tell whether the humans are letting go of their own standards and becoming stupid to make the machine seem smart.

FR: So we haven’t reached computational consciousness, a computer with sentience?

JL: The sentience of others is always a matter of faith. There’s no way to be certain about whether someone else has interior experience in the way that you do. I presume that you do, but I can’t know. There is a mystical or almost supernatural element in which we have internal experience — or at least I do, but I can’t make you believe I do. You have to just believe on your own that I do. That faith is a very precious thing and there’s no absolute argument that you should or shouldn’t believe that another person has interior experience, or sentience or consciousness, or that a machine does. Faith is not fundamentally rational, but there is a pragmatic argument, as I keep on repeating, to placing your faith in other people instead of machines. If you care about people at all, if you want people to survive, you have to place your faith in the sentience of them instead of in machines as a pragmatic matter, not as a matter of absolute truth.

FR: Is the only distinction between human and machine sentience, then, a faith in the power of the human soul versus the fact that the computer is just amalgamating information?

JL: It’s a matter of faith that has pragmatic implications. Just to say something is a matter of faith doesn’t mean that the choice of faith is entirely arbitrary, because it can be pragmatic as well. So, if not believing in people increases the chance that people will be harmed, I think the same is the case with this technology. Believing that machines are the same as people increases the chance that people will be harmed. Cumulatively, we should believe in people over computers, but that’s not an absolute argument based on logic or empiricism, which I don’t think is available to us. There’s a bit of a skyhook thing here, like the problem of “why should you stay alive instead of committing suicide?”, applied to the whole species: “Why should we continue this human project; why does it matter?”

I’ve come to something that’s a little bit like the argument attributed to Pascal — you might as well believe in God, just in case it’s real and there’s heaven and hell. I don’t buy that particular argument; I’m not concerned about heaven or hell. However, I do think that the continuation of us in this timeline, in this world, and this physicality, is something I’d like to commit to. I think we might be something special. And so in that way, I’d like to apply faith to us and give us a chance, and that does involve the demoting of computers. But when we demote computers, we can use them better. Demoting AI allows us to not mystify it, and that allows us paths to explaining it, to controlling it, to understanding it, to using it as a scientific exploration of what language is. There are so many practical reasons to demote it that the faith in it as a mystical being just actually seems kind of stupid and wasteful and pathetic to me.

FR: But can we demote something that has potentially more power than us already? Most of us are already subordinated to computers in our everyday lives.

JL: People are capable of being self-destructive, idiotic, wasteful and ridiculous, with or without computers. However, we can do it a little more efficiently with computers because we can do anything a little more efficiently with computers. I’ve been very publicly concerned about the dehumanising elements of social media algorithms. The algorithms on social media have caused elevated outbreaks of things that always existed in humanity; there’s just a little more of each: vanity, paranoia, irritability. And that increment is enough to change politics, to change mental health, especially in impoverished circumstances around the world. It’s just made the world worse incrementally. The algorithms on social media are really dumbass-simple — there’s really not a lot there. And so I think your framing of it as more powerful than us is incorrect. I think it’s really just dumb stuff. It’s up to us to decide how it fits into human society.

The capacity for human stupidity is great, and, as I keep on saying, it’s only a matter of faith whether we call it human stupidity or machine intelligence. They’re indistinguishable logically. So I think the threat is real. I’m not anti-doomist. I just ask us to consider: what is the way of thinking that improves our abilities and improves our thinking — that gives us more options, gives us more clarity? And it involves demoting the computer.

There’s a lot of work to do technically. We can create explanations for what so-called “machine intelligence” is doing by tracing it back to its human origins. There have been a number of very famous instances of chatbots getting really weird with people. But the form of explanation should be to say, “Actually, the bot was, at that point, parodying something from a soap opera, or from some fanfiction.” That’s what’s going on. And in my opinion, there should be an economy in the future where, if there’s really valuable output from an AI, the people whose contributions were particularly important should actually get paid. I believe there’s a new extension to society that’s very creative and interesting, rather than this dismal prospect of everybody being put out of work. Transparency in mash-up technology can only come from revealing the people whose expressions were mashed-up. But if policies are based on the idea that we now have this new “supernatural artificial entity”, there’s no sensible way to resolve that.

FR: You didn’t sign the open letter demanding a hiatus in accelerating AI development, which was signed by Elon Musk and other tech leaders. Was that not appealing to you as an idea?

JL: My reason for not signing it is that it fundamentally still mystified the technology. It took the position that it’s a new alien entity. When you think it’s an alien entity, there’s no way to know how to help. If you have an alien entity, what regulation is good? How do you define harm? As much as the GPT programmes impress people, they don’t actually represent ideas. We don’t know how to define these things; all we can do is mash things up so that they conform with classifiers.

FR: So they can’t do the philosophical work of thinking?

JL: They could mash up philosophers in ways that might be interesting. If you say, “Write an essay as if Descartes and Derrida collaborated”, something might come out that’s provocative or interesting. But there’s no actual representation inside there. And getting provocative or interesting mash-ups is useful, but you can’t set policy by it because there’s not actually any meaning. There’s no fundamental representation inside these things and we just have to accept that as reality. We don’t know what meaning is and we can’t represent meaning.

FR: Your argument relies on the idea that if we define this technology differently, then we will have more power over it, or at least we’ll have more understanding of it. Are we not just comforting ourselves with rhetoric about it being a human technology rather than something we can’t control?

JL: I don’t think that’s the case. It’s proposing a more concrete and clarified path of action. It’s very demanding of people and it’s not comforting at all. It demands that everybody involved on a technical or regulatory level do much more than they have. I suspect many people would prefer the mystical version because it actually lets them off the hook. The mystical version just lets you sit there and apprehend, and express awe at our own inventions. What I’m talking about demands action. It’s not comforting and it shouldn’t be.

FR: Do you think humans need to take more accountability for their part in developing a potentially malign form of AI? If it does go off the rails, wouldn’t it be because we’ve set it up to do so?

JL: One comparison is the disasters with the Boeing 737 Max. The flight correction module in it was the source of two terrible air disasters in which hundreds of people died. But what actually happened involved the way they sold it, the way they withheld information about it (depending on how much you paid them), the way they trained people for it, the way the documentation was created. It’s the surrounding stuff that created the disaster, not the core capability, which probably has been useful in general. In the same way, with this large-model AI, it’s not the thing itself, it’s the surrounding material that determines whether it’s malignant or not.

When you deploy it, under the assumption that it’s an alien new intelligence — that it’s a new entity with its own point of view that should be treated as a creature instead of a tool — you greatly increase the chances of a scenario similar to the one that befell passengers on the Boeing planes. I think that’s a real possibility. The malignancy is in the surrounding material, not in the core technology, and that’s extremely important to understand. I don’t think anybody has claimed that the flight path correction module shouldn’t have existed. I think what people are saying is that the pilots should have been well-informed, well-trained, and the ability to control it should have always been included, not only for those who paid more. And if you have chatbots, and you tell people, “This is an intelligent companion, you should be able to date it, you should be able to trust it”, then the chances of something really bad happening increase.

FR: Isn’t the main worry, then, that this sort of technology might fall into the hands of someone who has malign intent against a group or country? I’m thinking particularly about the situation in Ukraine.

JL: Russia has one of the worst records on misusing the internet and algorithms. It’s documented that Russia created enormous numbers of fake accounts, of fake bots, in order to sow divisions within the US. And of course, it’s attempting those things in Ukraine. I worry a little bit more about China, because Russia doesn’t quite have the resources to pull off very large-model projects right now. It’s not that easy to do — you need huge computational resources. So I worry a little bit about China using, to be very blunt, data from TikTok on the morning of a Taiwan invasion or something like that. That’s imaginable. I’ve talked to a lot of people in the Chinese world, and I think almost all are actually much more conscientious and better-intentioned than we might imagine, but there’s always somebody in any country in any situation. I do worry about it, and the antidote to it is universal clarity, context, transparency, which can only come about by revealing people, since revealing ideas is impossible because we don’t know what an idea is.

FR: We’ve established though that we already live with artificial intelligence. How has that already changed us?

JL: Our principal encounter with algorithms so far has been in the construction of the feeds we receive in our apps. It’s in whether we get credit or not and other things like that — whether we get admitted to university or not, or whether we’re sent to prison, depending on what country we’re talking about. Algorithms have transformed us. I would hope that the criticisms of them that I and many others — Tristan Harris, Shoshana Zuboff — have put forward have illuminated and clarified the issues with algorithms in the previous generation. But what could happen with the new AI is a worse version of all of that. Given how bad that was, I don’t think the doomerists are entirely wrong. I think we could confuse ourselves into extinction with our own code. But, once again, in order for us to achieve that level of stupidity, we have to believe overly in the intelligence of the software, and I think we have a choice.

FR: You’re a composer as well as a computer scientist. Do you think that there is going to be a shift in the way in which we prioritise organic or manmade art?

JL: We are entering a world of what I call “data dignity”. A musician might provide music directly, or might provide antecedent music that’s mashed-up within an algorithm, but that musician is still known and credited. And we’ve seen that already for decades now — somebody might provide the beats, somebody else might provide samples, etc. There’s already this sense of construction and mash-up, especially in hip-hop, but also just in pop music lately. That has not destroyed musicians, not as long as it’s acknowledged and transparent. I think, as with Boeing, it’s the surrounding material. If we choose to use mash-up algorithms to hide the people from whom the antecedent stuff came, then we do damage. But the thing doing the damage is hiding ourselves, not the algorithm itself, which is actually just a simple dumb thing. I think there are a lot of good things about an algorithmic mash-up culture in the future. Every new instance of automation, instead of putting people out of work, could be thought of as the platform for a new creative community.

FR: Won’t that dull our eyes to the beauty of real art and culture?

JL: What I see in culture is, as long as people understand what’s going on, they find their way. Synthesisers haven’t killed violins. There was a fear that they would, and as long as people know the difference, as long as there’s honesty and transparency about what’s going on, we can go through seasons of things being a little more artificial and then less so. That becomes a cultural dynamic and I trust people to handle that well.

FR: I might sound a bit like someone booing at Bob Dylan going electric, but, if you take Spotify, it’s almost totally wiped out independent music. There have been major technological advances in music that have obliterated creativity at those lower, more maverick levels of the industry.

JL: You’re absolutely correct about Spotify. In fact, at the dawn of the file-copying era, I objected very strenuously to this idea. There was a cultural movement about open source and open culture, which was stealthily funded by Google and other tech companies, and the Pirate Parties in Europe. People thought everything should be a mash-up and we didn’t need to know who the musician was and they didn’t need to have bargaining power in a financial transaction. That was a gigantic wrong turn, and it was a wrong turn that we can’t afford to repeat with AI because it becomes amplified so much that it could really destroy technology. I completely agree with you about Spotify but, once again, the availability of music to move through the internet was not the problem. It’s the surrounding material. What really screwed over musicians was not the core capability, but this idea that you build a business model on demoting the musician, demoting the person, and instead elevating the hub or the platform. And so we can’t afford to keep on doing that. I think that is the road that leads to our potential extinction through insanity.

FR: It sounds like the answer to a lot of these problems comes down to human greed?

JL: I think humans are definitely responsible. Greed is one aspect of it, but it’s not all of it. I don’t necessarily understand all human failings within myself or anybody else, but I do feel we can articulate ways to approach this that are more practical, more actionable and more hopeful. That has to be our first duty. I think this question of diagnosing each other and saying, “This person has womb envy”, or whatever, has some utility, but not a lot, and can inspire reactions that aren’t helpful. So I don’t want to emphasise that too much. I want to emphasise an approach, which we can call “data dignity”, and which opens options for us and makes things clearer.

FR: What is the best case scenario if we follow that route?

JL: What I like about the new algorithms is that they help us collaborate better. You could have a new and more flexible kind of computer, where you can ask it to change the way it presents things to match your mood or your cognition, under your own control, so that you’re less subservient to the computer. But another thing you can do is say, “I have written one essay, my friend’s written another essay, they’re sort of different. Can you mash them up 12 different ways so we can read the mash-ups?” And this is not based on ideas; it’s based on the dumb math of combining words as they appeared, in order, in context. But you might be able to learn new options for consilience between different points of view that way, which could be extraordinary. Many people have been looking at the humanistic AI world, the human-centred AI world, and asking, “Could we actually use this to help us understand potential for cooperation and policy that we might not see?”
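
Purely as an illustration of what that “dumb math of combining words as they appeared, in order, in context” can look like at its very simplest, here is a toy sketch in Python. It is an editor’s illustration, not Lanier’s code and nothing like the scale or sophistication of a real large model; the two example texts and all the function names are invented for the demonstration.

```python
# Toy "mash-up" of two texts: record which word follows which across both
# sources, then walk those links to produce blends of the two voices.
# This is only a crude illustration of recombining words as they appeared,
# in order, in context; real large language models are vastly more elaborate.
import random
from collections import defaultdict

essay_a = "computers should adapt to people and their moods and their needs"
essay_b = "people should not be forced to adapt to the rigid needs of computers"

def build_bigrams(*texts):
    """Map each word to the words that follow it across all source texts."""
    table = defaultdict(list)
    for text in texts:
        words = text.split()
        for current, following in zip(words, words[1:]):
            table[current].append(following)
    return table

def mash_up(table, start, length=10, seed=None):
    """Random walk over the word-to-word links, blending both sources."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = table.get(word)
        if not followers:  # dead end: no recorded continuation for this word
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

table = build_bigrams(essay_a, essay_b)
for i in range(3):  # a few different blends, in the spirit of "mash them up 12 different ways"
    print(mash_up(table, "people", seed=i))
```

Each run of the loop produces a different recombination of the two source sentences, which is all the “mash-up” amounts to at this miniature scale: no ideas are represented, only word-order statistics.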

FR: So, oddly, it might break us out of our tribes and offer some human connection?

JL: It’s like if a therapist says, “Try using different words and see if that might change how you think about something.” It’s not directly addressing our thoughts, but on the surface level it actually can help us. But it’s ultimately up to us, and there’s no guarantee it’ll help, but I believe it will in many cases. It can help us improve our research, it can help us improve a lot of our practices, and, as long as we acknowledge the people whose ideas are being mashed up by the programmes, it can help us even broaden participation in the economy, instead of throwing people out of work as so often foretold. I think we can use this stuff to our advantage, and it’s worth it to try. If we try to use it well, the most awful ideas about it turning into the Matrix or Terminator become vanishingly unlikely, as long as we treat it as a human project instead of an alien intelligence.


Florence Read is UnHerd’s Senior Producer and Presenter for UnHerd TV.