May 21, 2018

The folk who run our big tech companies are an odd lot.

For one thing, they’re super-wealthy and, as F. Scott Fitzgerald warned us, the very rich “are different from you and me”. But while we might not know much about their lives, they know an awful lot about ours – thanks to their unprecedented control over the exchange of information. All-knowing and, hence, all-powerful: it’s been a long time since a ruling elite has appeared quite so… well, godlike. To add to their mystique, it appears that many of them are followers of a weird religion.

Arguably all religions are weird. My own (Christianity, Roman Catholic flavour) is magnificently weird. This is as it should be. After all, what would be the point of a metaphysical reality that can fit with ease into our puny human minds?

Those who condemn religion as irrational are missing the point, which is that some questions lie beyond the capacity of human reason to answer and require us to engage other faculties. However, there exists, in Silicon Valley and elsewhere, a ‘secular religion’ that proposes a very different solution to the limitations of human reason: that we should create artificial minds greater than our own.

In the Economist, ‘T.C.’ writes about the ‘god’ of this religion – a concept known as the ‘Singularity’:

“The term has different definitions depending on whom you ask, and it often overlaps with ideas like transhumanism. But the broad idea is that the rate of technological progress is accelerating exponentially, and will continue to do so, to the point where it escapes all efforts at control…”

It is hoped, though not necessarily assumed, that the resulting digital supreme being will provide us with the means to transcend our human limitations:

“Optimists… conjure up an age of limitless material abundance and infinite leisure, with genetically modified humans bound together by brain implants into a solar-system spanning hivemind, or perhaps uploading their minds into a silicon utopia.”

It has to be stressed that there are a lot of very senior people in the tech world who truly think this is going to happen. The Economist article mentions Masayoshi Son, not a denizen of Silicon Valley, but of Japan, where he is the founder, chairman and CEO of SoftBank:

“Robots will have IQs of 10,000 within the next 30 years, he says, and there will be as many of them on Earth as there are humans. Along with a group of science-fiction authors, futurists and computer programmers, Mr Son is an exponent of the idea of the Singularity.”

That’s quite a claim, because 30 years is not a lot of time. If you compare today’s world with the world of 1988, then, yes, there have been changes; but not, for those of us in the West, transformative changes. Our lives are basically the same. Thanks to Moore’s Law – which underpins the expectation that the computer processing power available to us increases at an exponential rate (because, so far, it has) – we have the internet and smartphones. Nevertheless, our day-to-day experience of the real world is much as it was thirty years ago.

Those who think the Singularity will happen by 2048 must have faith in the continuation of Moore’s Law. They also need to believe that a further 30 years of it will make an immensely bigger difference than the previous 30 years.
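To get a feel for the scale of that faith, here is a back-of-the-envelope sketch in Python. The two-year doubling period and the 30-year horizon are illustrative assumptions on my part (the popular reading of Moore’s Law), not figures from the Economist piece:

```python
# Back-of-the-envelope: what another 30 years of Moore's Law would deliver,
# assuming (illustratively) that processing power doubles every two years.

DOUBLING_PERIOD_YEARS = 2   # assumed doubling period, not a measured figure
HORIZON_YEARS = 30

doublings = HORIZON_YEARS / DOUBLING_PERIOD_YEARS
growth_factor = 2 ** doublings

print(f"{doublings:.0f} doublings over {HORIZON_YEARS} years")
print(f"Processing power multiplied by roughly {growth_factor:,.0f}x")
# 15 doublings comes to roughly a 32,768-fold increase -- enormous, and yet
# the previous 15 doublings produced smartphones, not superintelligence.
```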

Hence the core dogma of Singularitarianism, which is that the continuation of exponential gains in processing power (plus related advances) will produce what Cross calls a “tipping point”, i.e. the development of true artificial intelligence. Once that happens, AI will be harnessed to recursively improve upon itself in an upward spiral of ever-increasing digital brilliance.

However, I think the notion of robots with IQs of 10,000 rather gives the game away. A human being can solve an IQ puzzle that he or she has never seen before. A computer has to be told how to do it by its programmers – either in the form of a ready-made algorithm or by feeding it large amounts of labelled data. Presented with a different kind of puzzle, the computer is completely clueless again.
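A toy illustration of the point (my own sketch, not anything from the article): a short program hand-coded with a ‘ready-made algorithm’ for one narrow family of IQ-test puzzles, namely continuing an arithmetic sequence. Faced with any puzzle outside that family, it has nothing to say:

```python
# A 'ready-made algorithm': it solves one narrow puzzle type and nothing else.

def solve_sequence_puzzle(sequence):
    """Continue a sequence with a constant difference, e.g. 2, 5, 8, 11 -> 14."""
    differences = {b - a for a, b in zip(sequence, sequence[1:])}
    if len(differences) == 1:          # the one rule the programmer anticipated
        return sequence[-1] + differences.pop()
    return None                        # anything else and the program is clueless

print(solve_sequence_puzzle([2, 5, 8, 11]))     # 14 -- the puzzle it was built for
print(solve_sequence_puzzle([1, 2, 4, 8, 16]))  # None -- a geometric sequence defeats it
# A verbal analogy ('kitten is to cat as puppy is to ...?') cannot even be expressed here.
```

The machine-learning alternative merely shifts the burden: instead of a hand-written rule, the programmers supply large quantities of labelled examples of that same puzzle type.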

To be intelligent in the way that people are intelligent requires creativity and consciousness. For all the exciting advances made in machine learning and ‘weak’ artificial intelligence, the AI systems that actually exist are not in the slightest bit creative or conscious. The dream of somehow transmuting ‘weak AI’ into ‘strong AI’ owes more to alchemy than algorithms.

That’s why all talk of exponential increases in processing power is irrelevant. Measure our progress towards the Singularity on a scale from zero to one, and our current position is precisely zero – despite decades of Moore’s Law in action.

Even if we double that progress every year for the next 30, 300 or 3,000 years, it will still be zero.
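To make the arithmetic explicit (a trivial sketch, nothing more):

```python
# Doubling zero progress, however many times, still leaves zero.
progress_towards_strong_ai = 0.0
for year in range(3000):
    progress_towards_strong_ai *= 2
print(progress_towards_strong_ai)   # 0.0
```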


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.
