Silicon Valley’s ‘suicide pill’ for mankind

December 31, 2018

Last year, start-ups in America attracted more than $60 billion in venture capital funding. Of this, $12 billion alone was for artificial intelligence. Yet what the men and women who want to transform our society believe in has largely escaped our attention. It’s time we were told.

Silicon Valley is famous for its technological innovation. It’s less well known for being at the ideological cutting edge.

Two new technology-based ideologies that have been born in the Valley are transhumanism and posthumanism. It is hard to understand one without the other, and the boundaries between these two, still rather rough, sets of beliefs are very blurred.

Transhumanism has been described by Francis Fukuyama as one of the greatest threats to the idea of human equality; he says that transhumanists are “just about the last group I’d like to see live forever”. When I wrote about transhumanism for Wired back in 2014, many people thought I was a lunatic. Then, in the 2016 presidential election, Zoltan Istvan ran against Donald Trump as the Transhumanist Party’s candidate, and this year Mark O’Connell’s book To Be a Machine won the Wellcome Book Prize.

It has been shaped by science fiction, but the origins of transhumanism can be found as far back as the 1900s, or even earlier, in the quest for immortality in the Epic of Gilgamesh and the search for the Fountain of Youth. The first transhumanists formally met at the University of California, Los Angeles, in the early 1980s, and the university quickly became the centre of transhumanist thinking. It is now a global movement, with the Italian Giuseppe Vatinno the first transhumanist elected to a parliament.

Today, transhumanists count among their members influential figures such as the futurist Raymond Kurzweil, Google’s Director of Engineering; Elon Musk, founder of Tesla and SpaceX; and Peter Thiel, founder of PayPal and the venture capitalist most people love to hate. Oxford University’s own Professor Nick Bostrom is co-founder of the World Transhumanist Association and author of the New York Times bestseller Superintelligence: Paths, Dangers, Strategies, which has been recommended by the likes of Bill Gates.

If you haven’t yet heard, transhumanism is a rather optimistic set of beliefs – a movement to liberate humanity – clustered around the core idea that technology will take us beyond the physical and intellectual limitations of being human. Nanotechnology, synthetic biology, robotics, AI and digital brain emulation will transform what it means to be human. Transhumanism of a sort is implied in the soft sell for self-driving cars, virtual reality and any kind of AI.

Transhumanism, critics argue, has become one of the de facto ideologies of the Silicon Valley establishment in that it justifies the Valley’s culture of ‘move fast, break things and make as much money as possible’ – because ‘hey, what we are doing is liberating humanity’.

What is often left out of this explainer by those who want transhumanism to be taken seriously are the ‘wackier’ elements: the belief that this transformation will occur through the actual physical fusion of technology and humanity via body modification and enhancement, or the belief that the exponential growth of technology is taking us on a journey far beyond our understanding of what it is to be human today, to a point when, in a literal sense, we become posthuman.

Our posthuman self could be an immortal digital entity able to download its consciousness into a synthetic body of its choice on Earth, or a robot exploring the moons of Jupiter. It could mean that we alter our very biology to enhance our bodies or become a new species of posthuman. Other researchers have wondered whether an international treaty is needed to save the endangered human.

Transhumanists call the almost mystical moment after which this fusion becomes possible “The Singularity”.

Then there is the belief in the inevitability of an AI takeover. This is a set of beliefs often confusingly called posthumanism. Rather than thinking about what lies beyond humanity, this kind of posthumanism is more focused on the elimination of humanity. It is a darker, more extreme and pessimistic alternative to transhumanism. It shares many ideas with transhumanism, such as the exponential growth of technology and a belief in The Singularity. But it removes the human agency behind technological change and holds that the creation of a superintelligent AI that replaces us is, in a very deterministic way, inevitable.

According to this way of thinking, technology is evolving at an exponential rate driven by the constant need of capitalism to expand and it is inevitable that at some point along this curve the technological singularity will occur. This is the moment when humans create an artificial intelligence that surpasses the intellectual abilities of men and women, even a genius like Stephen Hawking. It is the last machine that humans will ever make.

With this level of intellectual firepower, the machine acquires the ability not only to reproduce itself but also to improve itself. The resulting “Intelligence Explosion” leads to a runaway cycle of self-improving AI that results in a powerful, superintelligent computer that surpasses all human intelligence. Rather than physically fusing with this technology, the human era comes to an end in the Darwinian nightmare of the human race’s replacement by a superior intelligence that we ourselves have created.

At best, we should just give a fatalistic shrug of our collective shoulders at our inevitable self-extermination. At worst, it is a suicide pill, since it is our evolutionary duty, believers argue, to create the AI that will replace us. Some posthumanists would even go so far as to argue that it would be a cosmic tragedy if we stopped this from happening.

Cosmists, as their name suggests, are in the ‘suicide pill’ camp. Computer scientist Hugo de Garis argues that humanity must build these “godlike super creatures”, which he calls artilects, even if doing so risks the destruction of the human species. The assumption behind this is that the lives of ordinary humans – whom he calls Terrans – are worth less than those of artilects.

Cosmists like de Garis have started to argue that the drive to create these new godlike creatures will lead to the first “gigadeath war” – a war that kills billions of people. They believe that the war will start when ordinary humans try to prevent the creation of these superintelligent machines, and that the only way to survive alongside these new creatures is to become cyborgs.

These crazy-sounding beliefs are not, of course, universally accepted by the technology community, and there are different versions of each. There are also transhumanists like Elon Musk and Nick Bostrom who are aware of the risks of such a process of technological transformation. Others argue that The Singularity is already happening. What matters is that there are transhumanists and posthumanists in positions to decide where investment goes in the Valley and elsewhere.

Biohackers who try to alter their DNA at home or upgrade their bodies with a neural interface may make the headlines. But Peter Thiel has invested millions in biotechnology start-ups in search of a way to cheat death. Then there is Neuralink, an American neurotechnology company founded by Elon Musk and eight others. It is reported to be developing the implantable brain–computer interfaces that we see in sci-fi movies. It may even help humanity stay in control of AI.

‘Mind uploading’, otherwise known as whole brain emulation, has attracted millions of dollars of investment from the billionaires of Silicon Valley and beyond. A leading venture capitalist told me that he is not worried about the AI research carried out in public in universities so much as the research that is going on in unregulated private laboratories.

In the end, we need to know what these researchers believe, because to a posthuman the things that matter to us today – the privacy of our data, the health of our democracy, the survival of our local bookshop – can too easily be seen as outdated, all-too-‘human’ concepts. The question is, do you?

Mark Piesing writes about technology, culture and the intersection between the two.

