The new paganism of the digital age

Erik Davis forecast our era of mystic ecstasy

The Matrix – a Gnostic parable? (Credit: The Matrix/Warner Bros.)


August 3, 2023   5 mins

Is it weird where you are? It so often seems that we’re now living in the astounding science-fiction future of our dreams. Yet although it has turned out dystopian in ways we hadn’t quite predicted, there is also a sense that we’re hurtling together through an age of miracle and revelation. The routine magic of our connected, device-dotted world permits us to live in something like a state of perpetual ecstasy, the intuitive fluidity of streams, group chats and limitless information instilling the sense that human beings live now as a race of unleashed demi-gods, jacked into a dreamworld that is at once paradise and hell.

If the 20th century was atheistic, religiosity is now everywhere — I hardly know anyone who lives as a pure-blood rationalist, nor do I encounter many who dwell entirely within one faith or metaphysical tradition. Astrology and occultism flourish in mainstream daylight, while a revived interest in psychedelic experience and synthetic drugs has opened up gnostic wormholes amid the high-res sound-systems of nightclubs that seem more than ever like techno-pagan temples.

Attempting to understand this paradoxical synthesis led me to TechGnosis: Myth, Magic, and Mysticism in the Age of Information by Erik Davis, a deeply Californian writer who was born in 1967 but is very much a child of the Nineties. His outlook was shaped by working as a cultural journalist in that decade, which he recalls fondly for its “ambient sense of arcane possibility, cultural mutation, and delirious threat that, though it may have only reflected my youth, seemed to presage more epochal changes to come”. He describes TechGnosis as “a secret history of the mystical impulses that continue to spark and sustain the Western world’s obsession with technology”, and insists that “religious questions, spiritual experiences, and occult possibilities remain wedded to our now unquestionably science-fiction reality”.

Starting from the maxim that “magic is technology’s unconscious”, Davis explores the myriad ways in which an ostensibly rationalist-materialist-atheist civilisation invests its new machines with ancient animism and archetypal dreams. Think of the recent hype around AI: how ready we are to project sentience and malign — Cthulhian! — will onto a technology that, considered rationally, can never possess such qualities. He also considers the ways in which “ecstatic technologies” such as psychedelic drugs, meditation and shamanism now influence and modify questions of the soul, while asking if religious experience, defined by Carl Jung as that in which “man comes face to face with a psychically overwhelming Other”, may itself be evolving in tandem with technocultural mutation.

The first edition of TechGnosis was published in 1998 — the same year, it’s always startling to recall, in which Google was founded — a prelapsarian time of heady economic growth and triumph-of-liberal-democracy optimism. It’s dizzying to think how much has changed since then, the spree of relentless cultural, societal and technological upheaval we’ve undergone. Yet Davis’s first book has remained relevant (helped by multiple revised editions with updated material). In part this is because it was never fully taken in by the Nineties’ now painfully discredited techno-utopianism, nor did it paint an especially rosy picture of our tech-civilisational future (yes, the one we’re now in). Davis quotes Marshall McLuhan, writing in 1962 about how the Global Village might turn out to be a more uncomfortable place than we anticipate: “As our senses have gone outside us, Big Brother goes inside…  we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed coexistence.”

Is there a better description of the online mobs, omni-paranoia, mass derangement, moral outrage, scapegoating, and polyphonic extremism that have defined the years since they put devices in our elegantly evolved hands? This sceptical edge may make Davis an appealing writer to those who find other psychedelically informed theorists too far out, too loose and easy with rationalism and the Enlightenment tradition. He is attractively open to gnostic, mystical, religious and, yes, psychedelic experience — a “heady seeker of sorts”, as he describes his younger self — yet he is also cool-headed and rational, even cynical when it comes to the exploitativeness of technocapitalism. Now in his mid-fifties and a compelling guest on podcasts and YouTube talks (as well as the producer of many excellent articles on his Substack), Davis advocates a “middle way — between reason and mystery, scepticism and sympathy, cool observation and participation mystique”.

His book churns up an unrelenting, often brilliant spray of ideas, funnelling a dizzying range of information across multiple cultural-intellectual byways and minority belief systems. Davis examines the seminal, future-shock cyberpunk fictions of William Gibson (Neuromancer) and Neal Stephenson (Snow Crash), and the gnostic dystopias of Philip K. Dick. But he also ranges onto magical-esoteric-religious history: the Greco-Egyptian trickster-sage Hermes Trismegistus, John Dee’s “Enochian” magic, the discovery in 1945 of the heretical gnostic scriptures at Nag Hammadi — even the neuroscientist John C. Lilly’s experiments in telepathically communicating with dolphins by way of intravenous ketamine and a sensory-deprivation tank. He traces the Jesuit theologian Pierre Teilhard de Chardin’s messianic vision of a noospheric Omega Point — in which all minds are fused in an ecstatic oneness that brings the divine principle to its earthly apotheosis — through the delirious rhetoric of Nineties Wired-era tech propagandists and Singularity hucksters, whose effusions glossed the acceleration into our era of surveillance capitalism, epistemological fragmentation, radical inequality and political extremism.

Davis insists that the digital age teems with religious and supernatural metaphors. His distinction between the analogue “soul” and the digital “spirit” is memorably suggestive: “The analogue world sticks to its grooves of soul — warm, undulating, worn with the pops and scratches of material history. The digital world boots up the cool matrix of the spirit: luminous, abstract, more code than corporeality.” It is not lost on Davis that his book appeared shortly before the arrival in multiplexes of The Matrix, an exhilarating film that mainlined the mythos of simulation, alienation, deception and sacred uprising into the global psychic mainstream and perhaps primed teenage minds to plunge into the nightmarish hyperrealities of the decades to come.

He is also sympathetic to the neo-psychedelic culture that emerged from a post-Sixties underground to fuse with a more broadly gnostic cultural mood (you can’t open the New York Times or New Yorker these days without seeing articles on psychedelic research, though acknowledgement of the irreducible weirdness of psychedelics is less common). But Davis regards our connected devices and social media as the real acid in the civilisational punch-bowl: perhaps the ruptures we’ve lived through over the past decade are just the beginning of a cyberdelic horror-trip we’ll never wake up from.

Today, the digital future-present keeps echoing the magical-animist-pantheistic past — and vice versa. What are shamans if not “ecstatic technicians of the sacred”, LSD if not a “gnostic molecule”, and Gnosticism itself — the heretical Christian doctrine that declared our universe the botched work of a sinister demiurge, a lesser god — if not “the world’s first metaphysical conspiracy theory”? Meanwhile, the revival of virtual reality and the increasingly realistic texture of video games conjure up the ancient fantasy of simulation and nested realities (“the protagonists of Hindu yarns often found themselves wandering through infinite nests of Borgesian dream worlds”). It is only a short, psychotic leap from there to believing that consciousness is itself a game within a game, that there are levels parallel to or above this one. Even if reality turns out not to be a video game, actual games reanimate gnostic longings. “The boss characters and evil creatures who must be conquered to advance levels are the faint echoes of the threshold-dwellers and Keepers of the Gates that shamans and gnostics had to conquer in their mystic peregrinations of the other worlds.”

At least in its younger form, Davis’s is the sort of intelligence that can barely give itself room to breathe. TechGnosis covers so much ground that you wish he’d linger in one place for a while, probe more deeply into some of the intriguing areas he races over, or make more of the arresting, speculative ideas he frequently dashes out. His book’s enduring cult appeal has arguably made him a touch self-important, assessing (and hyping) its continuing relevance across a series of overlong afterwords. But these are quibbles against the high appeal of a writer so alert to “the powerful, archetypal connections among magic, tricks, and technology”, and the value of psychedelic thought in generating new metaphors. Davis writes of such strange, compelling things with real poetry, his meta-muse unleashed in “those vast cosmic webworks whose own mysterious designs we may glimpse, if at all, in moments past all sense or reckoning”.


Rob Doyle is an Irish novelist, short-story writer and essayist. His most recent book is Autobibliography.

32 Comments
AC Harper
10 months ago

Too long for a book blurb, too uncritical for a philosophical review. Apparently omits previous generational changes to public attitudes.
Shakespeare had a better grasp:

“Life’s but a walking shadow; a poor player, that struts and frets his hour upon the stage, and then is heard no more: it is a tale told by an idiot, full of sound and fury, signifying nothing.”

Amy Harris
10 months ago

Interesting essay! Yes, it has been fascinating to watch people lurch from perceived existential threat to perceived existential threat. Virus! AI! Nuclear War! Climate Change! Observing the irrational way people respond to their fears is alarming. Recently a Lib Dem MP expressed a desire to “gas” a room full of anti-ULEZ protesters because, well… hey, they are climate change deniers… so they are subhuman if you really think about it! This incident was eye-opening, not just because the MP (so far) has not been expelled from his party for expressing murderous intent (along the lines of the desire many politicians and celebrities expressed for “death to the unvaccinated”) but also because I understood human nature a bit better. When groupthink (or propagandised ideology) takes hold and human beings are persuaded that whole groups of other human beings pose an existential threat to them (by being potentially infected with a disease, or holding the rational view that anthropogenic climate change is negligible, or being Russian) they become savage. Yes, all of this is thanks to a warping of the mind by electronic devices and an absence of faith. By contrast, the belief that there is a benevolent God who came to live amongst us in human form 2000 years ago, was persecuted and crucified, rose from the dead three days later and ascended into “heaven” seems plausible and rational. But the fever will never break via some announcement on Facebook. Men go mad in herds but recover their senses one by one. If you’ve been caught up in the insanity, try drastically limiting your time on digital devices and going to church. Although please find one where the vicar is not captured by woke ideology. The Irreverends podcast has a good resource on their website for finding churches where they still preach scripture. (Also, perhaps it would be a good idea to stay away from mind-altering drugs!)

Mangle Tangle
10 months ago
Reply to  Amy Harris

‘Twas ever thus. Doomsayers at the end of the first millennium would have much sympathy with our modern panics.

Amy Harris
10 months ago
Reply to  Mangle Tangle

Yes!

Damian Thompson
10 months ago
Reply to  Mangle Tangle

There’s no evidence of an upsurge in doomsday predictions at the end of the first millennium.

Steve Murray
10 months ago
Reply to  Amy Harris

I’d strongly advise against going to church. That’s just another example of succumbing to the groupthink you rightly rail against. I never cease to be amazed that otherwise sentient people can’t see this.

Amy Harris
10 months ago
Reply to  Steve Murray

That’s a perspective that leads directly to transhumanism

Simon Neale
10 months ago

The first edition of TechGnosis was published in 1998 — the same year, it’s always startling to recall, in which Google was founded

Sorry, I didn’t even find it startling the first time I heard this, which was about four minutes ago. And then whenever I recalled it, I found it progressively more unstartling.
Writers about drugs and “mysticism” are like train-spotters who stand whooping and shouting at the end of the platform, expecting everyone else to get excited by their enthusiasms.

Prashant Kotak
10 months ago

“…Think of the recent hype around AI: how ready we are to project sentience and malign — Cthulhian! — will onto a technology that, considered rationally, can never possess such qualities…”

Plain wrong, stemming from a misconceived notion of rationality. And notwithstanding that the author will, sadly, be forced, kicking and screaming, to change his mind on this in as little as 18 months (in the worst case), I ask him: how do you know about the “never possess” bit of that statement? Are you a god?

Last edited 10 months ago by Prashant Kotak
Alistair Quarterman
10 months ago
Reply to  Prashant Kotak

Because he was framing it ‘rationally’; i.e. there is no evidence (at present) that a machine can become sentient.
‘Davis explores the myriad ways in which an ostensibly rationalist-materialist-atheist civilisation invests its new machines with ancient animism and archetypal dreams.’
I’m sure the author would be happy to change his mind given further evidence to the contrary.

Steve Murray
10 months ago

I’ve no doubt he would, given that he appears to have undergone several “changes of mind” already in his short but overblown psychedelic existence.

Prashant Kotak
10 months ago

“…there is no evidence (at present) that a machine can become sentient…”

So that of course begs the question: how will you and the author know if a machine can become sentient? Do you have a test?

Amy Harris
10 months ago
Reply to  Prashant Kotak

Yes, a sentient being has a biological/organic brain and nervous system. So, no, a “machine” built by man could never become sentient.

Prashant Kotak
10 months ago
Reply to  Amy Harris

So just by the fact of the materials of biology and organic biochemistry, you can create sentience, but using different materials, say silicon and metals, there is no possibility of achieving the same result? That’s the equivalent of saying I cannot replace bits of my biology with electromechanicals, say for example a pacemaker. Now keep extending that comparison to the materials that the brain is composed of.

Unless you believe that the processes which generate sentience are locked in the materials of biochemistry and concomitant bioelectrical effects, and that the same processes (or more effective ones) cannot possibly be created using different materials, say silicon and metals and electronics, then you in effect believe that the biochemical materials we are made of somehow create some mystic effect beyond mathematics and physics and chemistry. Notwithstanding that both of those argument stances can be refuted pretty instantly, I’m curious, which of those positions do you hold?

Amy Harris
10 months ago
Reply to  Prashant Kotak

Exactly all of this. Your pacemaker is an artificial way of helping what was organically/biologically created continue working when, without it, you might have died. You have to have the organic life there in the first place before you enhance or help it. You can never make what wasn’t created organically a “sentient being”. What you CAN do (and this is what I call artificial intelligence) is fill human brains with crazy nonsense that makes people do ridiculous things… you can fill people with “artificial intelligence” (such as “deadly pandemics” and “climate emergencies” and other ideologies that pose an artificial existential threat) and THEY become dangerous because you are capitalising on the fact that they are already, organically created, sentient beings. Instead of a pacemaker to enhance the workings of their heart, you’ve given them information to enhance the workings of despotic ideologues.

Prashant Kotak
10 months ago
Reply to  Amy Harris

Humanity didn’t know how the mechanism of complex reproduction worked (linked to biological evolution, of which human sentience is an eventual product), but by the early 1950s, the answer arrived from two different directions. Watson/Crick/Franklin cracked the DNA/RNA mechanism, and from a completely different direction, John von Neumann demonstrated (by an existence proof created on paper) complex reproduction and evolution in cellular automata. He did it by tying a ‘Universal Turing Machine’ (aka the general purpose computer) to a ‘Universal Constructor’ – a remarkable algorithmic entity he engineered, which shows equivalence with the DNA/RNA mechanism. I hesitate to say there is nothing special about these processes because the Universal Constructor is an extraordinary entity (albeit not known about widely at all), but there is nothing mystic about all this. I won’t go into the Bayesian basis of what we loosely term cognitive function, but by ‘evolution’ within cellular state space, I mean a playout where entities eventually create other entities more complex/capable than themselves. An example of such a cellular automata playout is the ‘Game of Life’ created by mathematician Conway.

I recommend checking out the ‘von Neumann Universal Constructor’, and also Conway’s ‘Game of Life’.

Last edited 10 months ago by Prashant Kotak
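For readers who haven't met the Game of Life the comment points to, here is a minimal sketch (an illustration added here, not part of the thread) of the standard rules: a cell is born with exactly three live neighbours and survives with two or three. The glider pattern below is the classic example of a pattern that propagates across the grid.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) live cells."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # B3/S23: birth on exactly 3 neighbours, survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The classic "glider" repeats its shape every 4 generations,
# shifted one cell diagonally.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
assert gen == {(x + 1, y + 1) for x, y in glider}
```

From rules this simple, patterns emerge that build other patterns, which is the sense in which Conway's automaton gestures toward von Neumann's self-reproducing constructors.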
Amy Harris
10 months ago
Reply to  Prashant Kotak

Or… God created us in his image. But thanks for speaking to the title of this essay so well.

Andrew Dalton
10 months ago
Reply to  Amy Harris

The implementation of an artificial neural network is relatively simple in terms of measurements such as lines of code – if not necessarily the design and mathematical theory that underpins them. This is, to a degree, true of the brain, where neurons are connected via synapses that adapt following stimuli. The complexity emerges as a result of the sheer number of neurons and synapses and importantly how densely they may be packed, which has been the problem with hardware neural nets as our printed silicon chips just don’t easily model this kind of structure.

Software, which can model this quite easily (from a design point of view), has serious performance problems. Several technologies have been constantly improving – most notably the hyper-parallel processing capabilities of GPUs (graphical processing units) since the nineties and their use in non-graphical endeavours since the late nineties (general-purpose processing on the GPU, known as GPGPU) – allowing software simulations to actually approach the complexity of the human brain in terms of modelled neurons and synapses (parameters).

Due to Moore’s Law, the observation that the number of transistors per unit area of silicon roughly doubles every two years (although it’s tapering now), we’re at the point of waiting for the models to supersede the human brain. If consciousness is simply a result of complexity, it is a matter of time for consciousness to emerge.

If there are other factors to consciousness that are unknown, then maybe it will not be the case.
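The comment's point that the per-neuron code is short, while the difficulty lies in scale, can be illustrated with a toy sketch (added here for illustration, not part of the thread; the weights are arbitrary, not a trained model). A neuron is a weighted sum through a squashing function; a layer is just many such neurons, and a three-neuron layer runs the same few lines as a three-million-neuron one, differing only in parameter count.

```python
import math
import random

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum passed through a logistic squash."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_matrix, biases):
    """A fully connected layer: many neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Arbitrary (untrained) weights, purely to show the mechanics.
random.seed(0)
inputs = [0.5, -1.2, 3.0]
weights = [[random.uniform(-1, 1) for _ in inputs] for _ in range(3)]
biases = [0.0, 0.1, -0.1]
out = layer(inputs, weights, biases)
assert len(out) == 3 and all(0.0 < y < 1.0 for y in out)
```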

Mangle Tangle
10 months ago
Reply to  Prashant Kotak

“ That’s the equivalent of saying I cannot replace bits of my biology with electromechanicals, say for example a pacemaker. Now keep extending that comparison to the materials that the brain is composed of.” I recommend you have a good think about Zeno’s paradox; then you might see why your argument doesn’t hold.

Prashant Kotak
10 months ago
Reply to  Mangle Tangle

If you can’t articulate the reason my argument fails but instead you have to resort to asking me to contemplate Zeno, then I submit, you have lost the argument.

Al Quarterman
3 months ago
Reply to  Prashant Kotak

We don’t even know what human consciousness is or how it’s generated. I do think this is always glossed over in any debate about conscious machines and talk of downloading our own consciousnesses onto computers, etc. Yes, maybe silicon, metals and electronics can generate epiphenomenal states but is there any way we can be sure they’d equate to human epiphenomenal states?

Andrew Dalton
10 months ago
Reply to  Prashant Kotak

I agree, consciousness is not fully understood, which makes it rather difficult to determine if an artificial neural network has attained sentience.
However, the prevailing hypothesis that it is an emergent property of a sufficiently complex parallel network would suggest it is certainly a possibility at absolute minimum.
That said, I’m not so sure that most people I share this planet with are actually conscious entities, so who knows?

Last edited 10 months ago by Andrew Dalton
Prashant Kotak
10 months ago
Reply to  Andrew Dalton

That point you raised about parallel networks – in fact there is no mathematical difference in outcome between parallel networks and a single processor, except speed of processing. And even that only in situations where the extra time taken for communication between nodes doesn’t kill off the advantage of processing in parallel. The number of spatial dimensions does matter though – processing on a 2D surface takes longer to traverse back and forth to the cells you need to get to compared to processing in a 3D volume, where you have access to many more adjacent cells. In line with this, the more spatial dimensions your processing surface has, the faster you can access the memory and code cells you need. The ultimate result of algorithmic processing is identical whether you do it on an abacus and (a very long) piece of paper (state memory) or on a massive processing cluster with billions of processing nodes in any number of dimensions you like. You just get to your result a lot slower on the abacus. And since ‘time’ in this context is relative between processing entities, the speed matters not a jot unless you are communicating with other entities who are a lot, lot faster. Which we are about to be, to our cost.

Last edited 10 months ago by Prashant Kotak
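The serial/parallel equivalence claim in that comment can be illustrated with a minimal sketch (illustrative function names, not anything from the thread): the same reduction computed on a single "processor" and across simulated parallel workers yields an identical result, with only the route to it differing.

```python
from concurrent.futures import ThreadPoolExecutor

def serial_sum(xs):
    """Single 'processor': one pass, one accumulator."""
    total = 0
    for x in xs:
        total += x
    return total

def parallel_sum(xs, workers=4):
    """Simulated parallel network: split the input into chunks,
    sum each chunk concurrently, then combine the partial results."""
    chunks = [xs[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, chunks))
    return sum(partials)

data = list(range(1_000))
# Identical outcome; parallelism changes only the speed, not the result.
assert serial_sum(data) == parallel_sum(data)
```

This only illustrates outcome-equivalence for one algorithm, of course; the general claim rests on the Church–Turing thesis rather than on any single example.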
Mangle Tangle
10 months ago
Reply to  Prashant Kotak

Yes, there is one. If the outcome of some input to a machine/AI is deterministic (back-trackable), then it’s not sentient. Period.
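The “back-trackable” property can be shown with a toy deterministic responder (a hypothetical sketch, not any real model): identical inputs always map to identical outputs, so the input–output mapping is fully reproducible.

```python
import hashlib
import random

def toy_model_reply(prompt: str, seed: int = 0) -> str:
    """A deterministic toy 'AI': its output depends only on the prompt
    and a fixed seed, so every output is traceable back to its input."""
    digest = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    rng = random.Random(int(digest, 16) ^ seed)  # stable across runs
    words = ["yes", "no", "maybe", "ask", "again"]
    return " ".join(rng.choice(words) for _ in range(3))

# Same input, same output, every time: the mapping is back-trackable.
assert toy_model_reply("Are you sentient?") == toy_model_reply("Are you sentient?")
```

Whether such reproducibility actually precludes sentience is, of course, precisely what the replies in this thread dispute.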

Prashant Kotak
10 months ago
Reply to  Mangle Tangle

Why on earth (or even on heaven) would a system or entity being deterministic preclude sentience?

Meredith Brooks
10 months ago
Reply to  Prashant Kotak

Would Gödel’s Incompleteness Theorem not do so, on the basis that a human mind can handle statements about multiple sets of mutually exclusive axioms simultaneously, whereas no deterministic mathematical system can do the same, due to the inevitable paradoxes that arise? And you can’t build some sort of deterministic set of ‘virtual machines’ that run on the respective sets of axioms, as these machines would themselves have to operate on a single coherent system composed of a set of axioms that conflict with one of the simulated systems. Perhaps there is an escape clause here, insofar as the ‘operating system’ these ‘virtual machines’ run on may not need to run at the same conceptual-symbolic level as the machines it is simulating.
Some criticisms of that argument (the Penrose–Lucas argument) are here: https://en.wikipedia.org/wiki/Penrose%E2%80%93Lucas_argument

Prashant Kotak
10 months ago

To specifically tackle the argument you are making, I would nowadays challenge the assertion that “…a human mind can handle statements about multiple sets of mutually exclusive axioms simultaneously…”, as in the LaForte/Minsky type counters. What if the issue lies with what humans perceive to be true, resulting from the computations of an inconsistent Turing machine in the human mind? Nothing says evolution *has* to throw up a mathematically watertight formal system. As in, human mathematics isn’t wrong across the board, but in some places, for example Cantor-type set logic when asserting things about infinities.

I come from a Computer Science background with early exposure to Artificial Intelligence – I learnt about neural nets, for example, as an undergrad in the early 1980s – and the question of whether what we experience as human sentience is algorithmic in nature has been a bit of an obsession for me for over four decades.

I was enamored of Penrose–Lucas type arguments for a long time, ever since I came across them in the mid-90s via Penrose’s books (the absolutely superb ‘The Emperor’s New Mind’ and ‘Shadows of the Mind’), because my instinctive belief through my 20s and 30s was that human sentience could not possibly be algorithmic (for reasons). However, over the years I have learnt that instinctive reactions are not necessarily a good guide, and I have become steadily less convinced by Penrose’s argument that Gödel’s Incompleteness Theorems refute the possibility that human-like sentience is algorithmic. At a technical level, I can follow computational maths and logic like the work of Turing, Church, Kleene, Post etc, the Church–Turing thesis, the Halting Problem, and even the Incompleteness Theorems, although following Gödel is more difficult for me because it invariably pulls me down into mathematical technicalities which take a long time to untangle, and I’m never quite sure my understanding is completely secure.

I don’t have a sufficiently strong maths/physics background to argue back at the level of axiomatic systems against the Penrosian arguments, but I essentially started moving away from Penrose’s hypothesis because Penrose extends his argument into hypothesising that human sentience is linked to quantum physics, specifically wave function collapse, but he then speculates that this collapse has a trigger (the ‘one graviton’ level) – which to my eyes pulls us back into versions of deterministic universes. Over the last decade or so I have become more and more convinced that human sentience is likely algorithmic, for a couple of reasons: firstly, because there are lines of reasoning indicating this, which would take too long to discuss here, but which I haven’t seen counters to; and secondly, because the circumstantial evidence has been piling up from multiple directions, and absolutely thick and fast since the Large Language Models emerged, and especially so since GPT-3.5.

Last edited 10 months ago by Prashant Kotak
Andrew Dalton
10 months ago
Reply to  Prashant Kotak

Intriguing post.
I also studied neural nets as an undergrad, but the mathematical theory was a little much for me, so I stuck with visualisations, which I found a bit less abstract.
The way things are moving now, I wish I had been a bit more persistent with neural nets at the time.

Dumetrius
10 months ago
Reply to  Mangle Tangle

That seems interesting but one would have to take it apart more than I feel like at present. Maybe after lunch.

Al Quarterman
3 months ago
Reply to  Prashant Kotak

How do I know you’re sentient? I don’t. I don’t have enough evidence, and some might just say that, given I will never be able to access your inner experience of reality, I can never know. Maybe I’m a solipsist; given I can’t access anyone’s inner experience, it seems like a rational explanation. I’m surprised there aren’t more solipsists. (One for the Russell fans there.)

Amy Harris
10 months ago
Reply to  Prashant Kotak

Yes, a sentient being has a biological/organic brain and nervous system. So, no, a “machine” built by man could never become sentient.

Steve Murray
10 months ago

I’ve no doubt he would, given that he appears to have undergone several “changes of mind” already in his short but overblown psychedelic existence.

Prashant Kotak
10 months ago

“…there is no evidence (at present) that a machine can become sentient…”

So that of course raises the question: how will you and the author know if a machine can become sentient? Do you have a test?

Mangle Tangle
10 months ago
Reply to  Prashant Kotak

No, he isn’t. The AI is.

Alistair Quarterman
10 months ago
Reply to  Prashant Kotak

Because he was framing it ‘rationally’; i.e. there is no evidence (at present) that a machine can become sentient.
‘Davis explores the myriad ways in which an ostensibly rationalist-materialist-atheist civilisation invests its new machines with ancient animism and archetypal dreams.’
I’m sure the author would be happy to change his mind given further evidence to the contrary.

Prashant Kotak
10 months ago

“…Think of the recent hype around AI; how ready we are to project sentience and malign — Chtulian! — will onto a technology that, considered rationally, can never possess such qualities…”

Plain wrong, stemming from a misconceived notion of rationality. And notwithstanding that the author will, sadly, be forced, kicking and screaming, to change his mind on this in as little as 18 months (in the worst case), I ask him: how do you know about the “never possess” bit of that statement? Are you a god?

Last edited 10 months ago by Prashant Kotak
Damian Thompson
10 months ago

“Astrology and occultism flourish in mainstream daylight, while a revived interest in psychedelic experience and synthetic drugs has opened up gnostic wormholes amid the high-res sound-systems of nightclubs that seem more than ever like techno-pagan temples.” All this could more accurately have been written about the 1990s, when New Age quasi-religious impulses seemed to be flourishing – it didn’t last long – and nightclub culture was much more vibrant than it is now. The thing is: you hardly ever meet anyone whose interest in occultism or techno-gnosticism is more than a talking point. People don’t confuse technology with magic; the admittedly dramatic implications of digital technology don’t extend to an alteration of everyday consciousness. If they spot a gnostic wormhole they just step round it. I read TechGnosis when it came out: some good points buried in stoner’s bullshit.