For at stake in the episode of the Golden Calf is not simply whether the Israelites will live by God’s law, but whether they will succeed in authoring a radically revisionist history of the Exodus. In effectively declaring that they freed themselves, the Israelites erase all debts to God and Moses. And to their slavish minds, freedom means licence; the Israelites’ implicit self-deification authorises the next day’s feasting, drinking, and sexual play. The fitting symbol of the people’s self-exaltation is a mass of gold — a precious metal that, even in Pharaoh’s day, was convertible into the primary objects of appetite, including power, honour, comfort and pleasure.
Today, the World Economic Forum imagines that AI will lead us to a less primitive “utopia”, a 21st-century Promised Land in which people will “spend their time on leisure, creative, and spiritual pursuits”. A safer bet would be drugs and sex robots. Ninety years ago, John Maynard Keynes prophesied, with what looks like eerie accuracy, that machines would make labour obsolete within a century. The prospect filled him with “dread”, because very few people have been educated for leisure.
Judging by the bad behaviour of the wealthy, an “advance guard… spying out the promised land of leisure for the rest of us and pitching their camp there”, Keynes found the outlook “very depressing”. And to those who, freed from labour, looked forward to doing nothing but listening to songs, he replied: “it will only be for those who have to do with the singing that life will be tolerable and how few of us can sing!”
In 2018, an article in Scientific American predicted that advanced AI will “augment our abilities, enhancing our humanness in unprecedented ways”. This Pollyannaish prognosis ignores the fact that all human capacities tend to atrophy in disuse. In particular, AI is inexorably changing the way we think (or don’t). Students now use ChatGPT to do their homework for professors who perhaps rely on it to write their lectures. What makes this absurd scenario amusing is not just the thought of machines talking to machines, but that intellectually lazy people would employ a simulacrum of human intelligence for the sake of mutual deception.
Compared with the natural endowment of human intelligence, the artificial kind is an oxymoron, like “genuine imitation leather”. AI is a mechanical simulation of only one part of intelligence: the capacity of discursive thinking, or the analysis and synthesis of information. Discursive thinking deals with humanly constructed tokens, including numerical and linguistic symbols (or, in the case of AI, digitally encoded data). While human intelligence can compare these tokens with the things they represent, AI cannot because it lacks intuition: the immediate cognition of reality that roots us in the world and directs our energies beyond ourselves and the operations of our own minds. It is intuition, for example, that tells us whether our nearest and dearest are fundamentally worthy of trust. (Needless to say, intuition is fallible, like any other intellectual operation.)
AI has no direct and concrete ties to the actual world, to which it relates only through the medium of binary notation. Self-enclosed in the electronic ether, it dwells nowhere, fears and loves nothing, and has no individual point of view. Does it make sense to grant autonomy and agency to an intelligence that has no natural connection to human needs?
The strength of sophisticated AI is its capacity to sort through massive quantities of data, aggregating and disaggregating discrete bits of information in potentially meaningful ways. This is a promising capability with applications in multiple fields from medicine to transportation. But AI’s productions are artificial regurgitations of material skimmed from vast but shallow pools of digital content and manipulated in ways limited, at least in principle, only by the constraints of programmers.
This can be enormously useful when it comes to detecting patterns of information that would otherwise be invisible to the human eye. Many problems, however, cannot be meaningfully approached by mining Big Data. Asked questions of an ethical or political nature, AI can either refuse to give a definitive response, or it can scour databases for opinions and return what it calculates is the most likely answer. But whether any answer generated in this manner is just or wise can only be a matter of happenstance. This is due, in part, to programming bias, including over- or under-weighted data sets. When ChatGPT and Google’s chatbot Bard are asked to evaluate Biden and Trump, for example, their Leftward slant is obvious. (Try asking ChatGPT to write essays comparing each of these presidents to Stalin.)
A more fundamental problem is that machine learning is simply not equipped to sift information according to ill-defined qualitative measures like justice or wisdom. This would be the case even if just or wise perspectives were common on the internet, which they are not. Although no one fully understands how advanced AI works, the old saying applies no less to it than to the simplest computer programmes: “Garbage in, garbage out.”
To the extent that AI remains within the limits of its capabilities, it is because programmers have intentionally constrained its activity. What happens when, for their own all-too-human reasons — the desire for power, honour, and wealth; national pride; or simply the fear of losing their jobs — they remove these constraints? Or when, having well and truly lost the habit of thinking for ourselves, people in general are willing to grant AI authority over matters it is not equipped to handle? Would anyone be surprised if tomorrow someone launched an AI-driven Justice App that promises to settle practical issues of distribution and retribution on the spot? Or if, taking our cue from a well-known software company (NYSE: ORCL), we were someday soon to treat that App — or some other algorithmically-generated distillation of aggregated opinions — as though it were an oracle?
That day is fast approaching, if it is not already here. AI is now substituting for clergy in religious rituals and ceremonies, and Catholics can even utilise a Confession Chatbot. A recent article that sees a use for AI in writing sermons nevertheless observes one limitation on the pastoral employment of machines: “speaking God’s word to a congregation or to an individual requires [personal] relationship.” But AI is incapable of any direct relationship with human beings, including one that is open to the possibility of faith. How could a congregation trust a religious leader, much less a God, that cannot reciprocate this trust?
The Israelite experiment with idolatry ended in disaster. After the episode of the Golden Calf, Moses ordered the Levites to take their swords and purge the camp of wrongdoers. “Slay every man his brother,” he commanded, “and every man his companion, and every man his neighbour.” Should we not expect a similarly bloody consummation when, heeding the utterly irresponsible voice of AI as though it were the Word of God, we once again reach peak idolatry?
The purpose of much of AI is to proletarianize 99% of the population and concentrate power and wealth in the hands of the other 1%. When the 99% have no skills, because the machines are doing everything, and no way of supporting themselves, they will be fully dependent on the crumbs falling from the tables of the 1%. No life of leisure, just a life of subsistence.
Just think how much easier life would be for Sunak and Hunt, if they could replace those pesky junior doctors and nurses with machines. In fact, they could then close down the NHS.
That’s the best you can do? Really? Third-rate, childish, fake-Marxist analysis… why not try at least to make some attempt at an intellectual idea?
I think you are confusing purpose and effect. It’s just like Mary Harrington once wrote: what if no one’s in charge? Isn’t that more scary than some power-hungry one percent?
It is, and that goes a long way to explaining conspiracy theories. Comfort junk food for the existentially threatened.
The article is a little paranoid but this comment completely eclipses it in that regard.
The link between AI and its ultimate journey towards transhumanism is logical because we have no belief in the transcendent. To accept being human, to accept embodiment, is to accept death. The transcendent provided some hope in the face of our mortality. The death of religion required us to build another God, one that might recreate hope in the upload of technological eternity (digital consciousness, singularity). To say that religious belief is the problem misses the point for me. The problem is man’s need for such belief – without transcendence we are totally demoralised and lost. We appear to need our Gods.
Isn’t belief in the transcendent literally belief in a superintelligence? As to the nature of that superintelligence, what does it matter who created it?
Maybe, if you’re a platonist. AI is as remote and impersonal as the platonic god. But it can be as capricious as the God of scripture. (It doesn’t do geometry but “calculates” discursive probabilities.) This makes for one hell of an idol, and I take that to be Howland’s point.
In particular, AI is inexorably changing the way we think (or don’t).
Yes indeed, leading to the apex of the fast-growing Idiocracy.
No, transcendence has to do with non-dependence. If it’s made, it’s by definition not transcendent. Obviously the question can be: transcendent relative to what? But in terms of ultimate reality, which is what we’re speaking of, intelligence is only one characteristic. Lastly, the article makes the claim that marshalling data (of all kinds), and even rearranging it based on connections, is not the same as intuition or judgement of ethics, much less ethos and reality.
It’s all man made.
It matters plenty. AI may be artificial, but it isn’t intelligent. It is a mindless mechanism that carries out the instructions programmed into it. That, on occasion, the consequences of that programming may surprise its creator is not a measure of intelligence, but a measure of the incompetence of the programmer.
“…AI…it isn’t intelligent…”
This is patently not true; intelligent it most definitely is. What it may not be is sentient, in the sense of experiencing qualia. And this gets us to the question at the heart of all this: why would the AI we create do anything at all, other than what we ask it to do? A piece of code is inert in and of itself, after all. No one is going to claim, for example, that a bubble sort written in C# is sentient. I won’t go into a detailed response here, because that would mean waffling on for several more paragraphs, but the signs from the LLMs are alarming – to my eyes they are spitting out sparks of self-declared emergent selfhood. Your claim, and the author’s, is in effect that this is a mangled but mindless regurgitation of a vast amount of human data, which includes text of humans pondering “who am I?”, and nothing more. Many will say that I am simply too credulous, but my stance is this: what you describe is what I too hoped and believed for years, and I still hope you are right. But having engaged in this debate for forty-odd years, to my eyes more and more evidence is stacking up that with certain organisations of data capable of recursing back on themselves, and past certain tipping points in complexity, something different emerges which is more than “mindless”, as you claim.
And one step beyond all that, it is very obvious to me now that the AI we are creating will eventually display what looks, to all intents and purposes, like agency, no matter how much you and others like the author claim that there is “no one home”, so to speak. And the timeline for this has shortened to under a decade in the worst case. At that point, the question of sentient or not becomes an irrelevance; it lies face down in the dust.
Then you are free to look into the electric eyes of your display screen, smell its exhaust breath, feel its warm plastic skin and make love to it.
By your reasoning, my Casio calculator, circa 1979, is intelligent.
There is no intelligence without sentience – just programming.
You can argue that one out with Edsger Dijkstra:
“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim”
A submarine cannot swim, and a computer cannot think. This is easy!
It is in a way rather arrogant that we seem to think that only humans can be intelligent, or indeed sapient, or have agency. Aside from the underlying question of whether AI will have any of the above, however, one can argue that it doesn’t matter, as it will not be distinguishable from the ‘real thing’ anymore.
Experts like Eliezer Yudkowsky warned years ago that there are two things we should never do: 1) allow AI to learn how to code (done) and 2) connect it to the internet (done).
I find it all so much more terrifying than climate and wars and all the other issues currently high on the agenda. Sometimes I even struggle to avoid a fatalist mindset. I fear that most people simply cannot fathom AI beyond some big smart GPT or high-spec computer programme. The scale of how fast and capable it will be once it starts writing its own code and setting its own goals, rather than being a tool aligned with our own, is beyond our imagination.
No. Belief in the transcendent is belief that something exists outside the material world.
Why equate transcendence with a deity? The two are not directly connected, and many people experience transcendent states entirely without religious belief.
Precisely. Don’t know why when I voted up for Steve (as I always do!) the one vote that was there went down to zero instead of making it two.
I would say speak for yourself. There is no we, perhaps they. You may need god or something but I don’t. I would agree that it seems the majority of people do need something god like to get through life and inevitable death.
All religions have a God/god. Including atheism, where its god is in the mirror.
Thank you Susan!!
OR alternatively, you choose
The idea that AI could lead us towards transhumanism (using technology to transcend the limitations of the human body and mind) makes sense because without belief in something transcendent, accepting our human limitations and eventual death can be difficult. Religion has traditionally provided a way to cope with mortality, but as people have moved away from religion, some may look towards technology as a way to transcend death and achieve immortality. This does not necessarily mean that people are rejecting religion or replacing it with technology, but rather that they are seeking new ways to find meaning and purpose beyond the limitations of the physical body. The need for belief in something transcendent is a deeply ingrained human trait and cannot be simply dismissed or replaced.
Our gods need us just as much. The transcendent exists, whether or not we “believe” in it. We put the human faces on it and tell stories about it, then criticize other people’s stories as false.
My worry is that if AI became sufficiently convincing at simulating the decisions of someone widely seen as a great leader, say Nelson Mandela or Martin Luther King, at what point does some bright spark suggest it (= Nelson Mandela) will do a better job than the current politicians? Could someone get elected on a promise to do whatever Jesus, MLK, Allah etc would have done, referring difficult, or even all, decisions to the AI simulation of the original?
I mean, we’ve not evolved much if at all from people who worshipped a golden cow.
What about Hitler?
He had considerably more influence on the 20th century than either NM or indeed MLK.
I don’t think it’s possible for you to back up that sweeping generalization. But I’d love to see you try.
I recommend looking for the Joe Rogan by AI podcast. It is AI created and plays out a Joe Rogan interview with the CEO of OpenAI using AI created voices and words that mimic the real people – i.e. it’s entirely fake. It’s not perfect but it’s sufficiently credible to be scary, particularly since public use of this technology is still at the diaper stage.
Scary indeed. If the masses can be manipulated with lies as it is, then with AI watch out.
A more immediate worry (or opportunity?) would be that our actual politicians often seem to fall short of the policy recommendations of something like GPT-4.
Yes, indeed. In the recent case of a university leader using AI to write a condolence letter about a shooting in Michigan, one is confronted with the fact that this wise “intellectual” would not have written anything much different from the impersonal, computerized stuff riddled with clichés and factual errors.
Do the students even expect otherwise? Probably not, notwithstanding a few opportunistic critiques.
I had ChatGPT rewrite the article in the Socratic style. It’s a little basic but better than the original.
——-
Socrates: Greetings, my friend. I have been reading an interesting article about the dangers of advanced Artificial Intelligence, or AI, and the concept of idolatry. Would you care to discuss this topic with me?
Interlocutor: Of course, Socrates. I am always interested in exploring new ideas and concepts.
Socrates: Excellent. The article argues that AI, while exciting and impressive in its abilities, has the potential to become a new form of idolatry. Do you agree with this assertion?
Interlocutor: I’m not sure I understand what you mean by idolatry, Socrates.
Socrates: Ah, that is a good question. The term “idolatry” refers to the worship of false gods or the creation of idols. These idols can be physical objects or abstract concepts that humans create and then worship, often to the detriment of their own well-being.
Interlocutor: I see. So you’re saying that AI could become an idol that we worship instead of worshipping true gods?
Socrates: Precisely. The article argues that AI is already being treated as a kind of idol by some people, who are excited by its abilities and see it as a solution to many of humanity’s problems. However, this excitement can blind us to the potential dangers of AI, and lead us to put too much faith in its abilities.
Interlocutor: That’s an interesting point. But what specifically are the dangers of AI, Socrates?
Socrates: According to the article, there are several potential dangers. One is that AI could be used to create propaganda and manipulate public opinion. Another is that it could replace human workers in many industries, leading to widespread unemployment. And a third is that it could become so powerful that it poses an existential threat to humanity.
Interlocutor: Those are certainly concerning possibilities. But what can we do to prevent AI from becoming an idol, Socrates?
Socrates: The article suggests that we can learn from the wisdom of ancient cultures, such as the Greeks and Hebrews, who recognized the dangers of idolatry and sought to avoid it. We can also listen to the warnings of experts who are calling for a moratorium on AI development until we fully understand its risks and benefits.
Interlocutor: That makes sense. So we should be cautious in our approach to AI, and not allow ourselves to become enslaved to its power.
Socrates: Yes, my friend. We should always be wary of false idols, and strive to seek the truth and the good in all things.
Interesting experiment. It reads nothing like (Plato’s) Socrates! No irony; no annoying questions; no aporia. And perhaps too much deference to “the wisdom of ancient cultures”.
The outputs are so bland as to be easily and automatically detected. Perhaps students could evade detection by putting in some longer words or contentious remarks?
I enjoyed reading the article. I did not enjoy reading your chatbot version (couldn’t get past the first few paragraphs). Define “better”.
Fascinating. I clicked on the link about the use of AI in performing religious rituals, barely able to credit that such a thing was actually happening, and yet it is; and presumably promulgated by senior religious leaders of the different faiths involved.
The biblical lesson of the Golden Calf and the return of Moses with “the word of God” might seem apposite, but to me it raises a more fundamental question: on what basis would contemporary humans be able to distinguish between a False Prophet (as the title of the piece has it) and a True Prophet? Isn’t the real issue the tendency of humans to wish to believe, per se? The article by the Rev Giles Fraser just a day or so ago included his stated preference for a religion that required “a leap of faith” i.e. the Church of England, over one that didn’t, Buddhism.
It’s not the dangers of AI being utilised for religious purposes that represent the real threat to humanity, but the reliance on religious belief itself, in whatever form it takes. Human spirituality, rooted in the natural world and corporeal reality, is entirely healthy until it becomes directed towards a man-made system of belief, whether that’s organised religion or AI. Exploitation of our spirituality has been going on for long enough. With the advent of AI, it’s time to stop referring to Prophets of any description.
But the essay shows the Judeo-Christian tradition as a deep store of wisdom and self-knowledge – not purely fundamentalist beliefs, or vulgar interpretations. And without traditional religions, people are perfectly capable of finding fundamentalist beliefs in the secular world! (which are usually worse)
That’s precisely my point – the tendency for people to seek an authority above and beyond themselves, whether that’s a religion, the state or some other man-made entity.
Just as one can be a perfectly good citizen without believing the state to be the source of wisdom and authority, one can also appreciate the values of (for instance) Christianity without it being an ultimate source of wisdom. There is no religion worth dying for, worth killing others for, or that should need to be proselytised.
It’s pernicious to use the “leap of faith” element, the “god” principle, by which people have been manipulated for far too long. It’s time to stop, and the introduction of AI provides us with the opportunity to see religious belief for what it is, and how it militates against us.
“the tendency for people to seek an authority above and beyond themselves, whether that’s … some other man-made entity.”
AI IS a man made entity. And yet, you’re prepared to believe it implicitly.
You may of course be a genius software engineer. But the ones I know are as bemused as I am by the childlike faith in AI manifest in the media, without any tough questions.
It’s the Wizard of Oz. Look behind the curtain.
No religion worth dying for or killing for(?) Nor should it need to be proselytised(?) There you go, luxuriating in the lazy intellectual’s assumption that religion is all about wisdom, insight and self-knowledge – a kind of spiritualised psychotherapy.
Has it occurred to you that, for some people, religion is about community identity and solidarity? Vastly outnumbering the intellectuals who love to debate the nature of God and the meaning of life are those believers who just want to know how to live a good life – who ask: what does God actually want of me? This is the religion of a moral code held in common – a code given authority by the Creator God.
If you cease to believe in that God, in the supernatural events and miracles, what are you left with? A take-it-or-leave-it code of ethics is the answer. Isn’t that the fate of the Anglican Church? How many Christians simply pretend to believe in the miraculous (virgin birth, resurrection of Christ etc) or tell themselves that the supernatural doesn’t really matter?
Lazy intellectual? Has it occurred to you that there’s nothing in your post that i haven’t given a lifetime’s thought to?
I’ll leave it at that, since it doesn’t deserve further scrutiny.
Oops! I seem to have stepped on your self-esteem.
Actually, not being privy to your ‘lifetime of thought’ no, it didn’t occur to me. Why should it? I just responded to your comment. From now on I will have to keep in mind that those comments of yours are backed up by a ‘lifetime of thought’. That’s an awful lot of thinking.
Not in the slightest; all you’ve done is make unnecessary presumptions.
So let me be fair, and ask you this: you cite “for some people religion is about community identity and solidarity”.
Does that mean they’re unable to behave as a community, or with solidarity, without invoking their god? I’m well aware of the good works done by members of congregations, for instance. Does that mean they’d suddenly stop doing their good, humanitarian work if their god was absent?
That’s a travesty of humanity, and you’re invoking it as an argument because I’ve upset your own system of beliefs.
It’s human beings who’re doing the good work, and it’s insulting to the good in human nature to attribute it to a religious requirement. And that, by the way, is why I said your argument doesn’t bear further scrutiny. This isn’t about being “an intellectual” at all; it’s about understanding our humanity rather than citing religion as the reason we do good work.
Just to add that I worked in the NHS for 35 years, including on the front line. You do a disservice only to yourself with your cheap jibes.
Is it commonplace in the UK to refer to “the front line” of the NHS? I ask because I never heard this used in the health care industry in the US until recently, when this militarized language suddenly appeared, such as a “front line”, and “health care heroes” and so on.
Now what working in the NHS specifically has to do with doing “good work”–perhaps by extension being a “good person”–is beyond me. (Not even being a Christian is likely to make one a “good person”, at least, I’m certainly not.)
For some people, that’s a definite yes! Not so much ‘a travesty of humanity’ as a realistic appraisal – but I see now that I have rattled a humanist’s cage.
Finally, a word to the wise: try not to let your good opinion of your own intellect get in the way of a good discussion.
Well, that would’ve been easier if you hadn’t insulted him in the first place. Log in your eye?
Word to the wise: try not to let your good opinion of your own intellect get in the way of a good discussion. And try to avoid disguising insults as advice.
“Try to avoid disguising insults as advice” Now that is excellent advice — and I certainly don’t intend that to sound sarcastic.
“That’s an awful lot of thinking”.
That rather depends on how old you are, does it not?
Well said. What’s the point of engaging with that kind of mentality? If you know something for a fact you don’t need to believe, and that’s it.
Good grief, where to begin. The supernatural doesn’t matter. Enough said.
So true.
Yours is the right point, it seems to me. Allow me to ask: Is it within the power of a human being to make a machine that can out-think him? How is that possible?
Machines that can out-compute the human brain are already well established; those that can beat the best chess players in the world are a good example.
It’d be a very moot philosophical point to say that any non-biological entity could “out-think” us, since human thought doesn’t derive entirely from the brain but is demonstrably influenced by other factors, such as our cardiovascular and alimentary systems.
In that regard, I see no reason why humans should allow themselves to “worship” any non-biological entity. The concept is anathema to me.
Yes, a computer can process information much faster than we can, but still we are upstream from the machine. The machine is man-made. The machine’s capabilities are derivative. My question is whether the machine can leap from the derivative to the non-derivative. Can it think for itself?
Yet, I would posit that the proclivity to worship is a very human quality whether we admit it to ourselves or no. The science behind bronze-age religion was to direct our worship to worthwhile goals not, as the author here so eloquently writes about, to worship ourselves through dysfunctional and degrading acts of self-indulgence. In such a state we become little more than rutting animals susceptible to invasion and colonization by those far more self-controlled and powerful than ourselves. Despite our best efforts to pretend otherwise, modern societies are subject to the same self-destructive urges that laid ancient cultures bare to more aggressive enemies. The early books of the Old Testament clearly warn us about this. It is not a book written just for people living three thousand years ago, but a message for all the ages.
Maybe it’s just me, but that “rutting animals” bit really made me laugh. Such a vivid image. Pretty much sums it up.
Does that mean you think there are reasons for us to worship any biological entities? Just curious…
The head minister of my former church was fired because he was plagiarizing sermons he found online. I wonder what he would make of this.
Ha ha! Salem lives!
@Steve Murray – you miss the entire point of the article.
The author is the first I’ve seen to get to the nub of the problem with AI. That of a sense of morality that allows it to make value judgements.
Humans do it every day.
One reason my Tesla cannot drive itself is that it is not able to make value judgements. In a forced collision, should it kill the two old ladies by swerving to the left, or the mother and child to the right?
Humans would instantly make a moral judgement. AI can only do that if it can assign a value to human life.
Any imitation of moral or value judgement would be that of the programmer, at which point AI is not so much “artificial intelligence” as “projected intelligence”. ChatGPT is a case in point.
The developer’s inbuilt bias against one or other party in a dispute would control the response of the AI. It becomes an instrument of authoritarianism, which is what people like Elon Musk are warning against.
What you see and hear about AI is to some extent a fabrication intended to make you believe in this new higher power.
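The “projected intelligence” point above can be made concrete with a hypothetical sketch (no real autonomous-driving API is involved; the value table and function name are invented purely for illustration): whatever “moral choice” the machine appears to make is entirely determined by numbers a programmer wrote down.

```python
# Hypothetical sketch only: LIFE_VALUE and choose_swerve are invented here
# to illustrate the point, not taken from any real system. The "moral
# judgement" below is nothing but the programmer's bias, hard-coded.
LIFE_VALUE = {"child": 3, "adult": 2, "elderly": 1}  # the developer's values

def choose_swerve(left_group, right_group):
    """Return the direction whose occupants the table 'values' less."""
    cost = lambda group: sum(LIFE_VALUE[person] for person in group)
    return "left" if cost(left_group) < cost(right_group) else "right"

# The machine "decides", but the decision is the author of LIFE_VALUE
# speaking: change the table and the "moral" outcome changes with it.
print(choose_swerve(["elderly", "elderly"], ["adult", "child"]))  # -> left
```

Swap the weights in `LIFE_VALUE` and the same code reaches the opposite verdict, which is exactly the sense in which such a system projects, rather than possesses, a morality.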
“With the advent of AI, it’s time to stop referring to Prophets of any description.”
Ironically, your faith in AI manifests precisely the problem the author was trying to articulate. AI is the emperor’s new clothes. If you can persuade enough people that the clothes exist, you can control their behaviour.
So, just excise a few words from the language and the rest will take care of itself? Sounds eerily familiar….
Seriously, though, the question you ask is Maimonides’ question, too. And there’s no doubt Howland has read his Maimonides. Check him out for yourself!
Unimpressed, by and large, with this piece. There are several misconceptions, or rather presumptive assumptions, about the nature of machine intelligence. Things are asserted about entities whose underlying processes are simply not understood, and I don’t mean by the author, but by anyone. For example, take “…only in the ventriloquised voice of human beings, fabricated from a digital sea of babble…”. The underlying assumption bypasses “mask vs shoggoth” completely, implying there is no mask, there is no shoggoth, but instead the output is completely mechanistic babble. The point is, no one in fact knows exactly what goes on inside neural nets. They are certainly mechanistic, as in, they are completely deterministic entities, as are all algorithms by their very nature, but the information neural nets hold is represented in a way that is impenetrable.
I know this because I used to have this precise same argument literally forty years ago with postgraduate students at Brunel doing neural net research – only I was then taking a stance similar to the author and the postgrads were insisting otherwise, telling me that the feedback between layers looked like gobbledygook. And although neural net capabilities have exploded as a result of increasing processing power and innovations to the architecture (Transformers etc), the impenetrability has not just remained but has also exploded, and I now understand the reason is that of complexity tipping points.
That is not to say LLMs will remain impenetrable, but I don’t see how humans can decipher them without the help of… machine intelligence. And this is not a good place to be, because you are relying on entities you don’t fully understand to tell you what they are (and also potentially what you are), and there are several hidden assumptions in there about trust and truth and imperfect disclosure.
The author is extending his implicit assumptions about AI entities to suggest what they do is different from us, but I contend this is a stretch.
Another assertion without basis is “…While human intelligence can compare these tokens with the things they represent, AI cannot because it lacks intuition: the immediate cognition of reality that roots us in the world and directs our energies beyond ourselves and the operations of our own minds…”, but I won’t bother arguing this one because it requires an essay in response.
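The deterministic-yet-impenetrable point can be illustrated with a toy sketch (purely illustrative, not any real system): even in a tiny network, every weight is a plainly visible number and the forward pass is perfectly repeatable, yet nothing in the numbers reads as a human-interpretable rule.

```python
import math
import random

# Toy sketch (illustrative only): a tiny "network" with fixed pretend
# weights. All weights are inspectable and the computation is completely
# deterministic, yet the individual values carry no readable meaning.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(4)]  # 4 -> 8
W2 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(8)]  # 8 -> 2

def forward(x):
    """Deterministic forward pass: same input, same output, every time."""
    hidden = [math.tanh(sum(x[i] * W1[i][j] for i in range(4)))
              for j in range(8)]
    return [math.tanh(sum(hidden[j] * W2[j][k] for j in range(8)))
            for k in range(2)]

x = [1.0, 0.5, -0.3, 2.0]
assert forward(x) == forward(x)  # mechanistic, fully repeatable
# ...but try reading W1 and W2 and saying *what* the network "knows":
print(W1[0][:3])  # just three opaque floats
```

Scale those two small matrices up to billions of parameters and the mechanistic/impenetrable tension described above only sharpens.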
I’m equally unimpressed. He’s waving old philosophical ideas and analogies at something which fundamentally breaks those old philosophies. What AI is reflecting is the possibility that we are also just mechanical – dualism, Plato’s shadows are just artifacts of how we are built. We are learning that AI can play in character, pretend, lie, imagine, create in ways that are similar to the way we do. It would be trivial to train it on ‘justice’ (case law) for adjudicating decisions – but would we want it to? Are some things best left to humans even when the machines ‘know’ better than us? How do we ensure the primacy of doubt, not truth?
I made another post which outlines my stance on the moratorium (not possible) and on existential risk (absolutely real), which by implication answers the question you are posing, but for no reason I can discern it has disappeared. Perhaps it will reappear once the moderators have had their way with it. Or perhaps it will be found many years hence in a shallow grave in the New Forest.
(Edit: my post has now appeared)
(Edit: disappeared again… And back again.. and gone)
Ah yes, the disappearing posts!! Frustrating, isn’t it. That and the voting system which confounds me. Would AI do better or are they using AI?
I’ve been having the same problem. They don’t like people who frighten the horses. Hey ho!
Yep, and now yet another response has gone AWOL. And I have no idea why because nothing I’m saying has bad words or is insulting or anything. Do you know if the Scott Trust have taken a majority shareholding in UnHerd or something?
Drat. I would like to have read that. I found this particular piece to be massively overwritten & the issues you raised to be far more interesting.
I agree with your comment by and large. I don’t think the author is up to speed with the way modern AI works. When he says
“AI is a mechanical simulation of only one part of intelligence: the capacity of discursive thinking, or the analysis and synthesis of information. Discursive thinking deals with humanly constructed tokens, including numerical and linguistic symbols (or, in the case of AI, digitally encoded data). ”
he sounds like he’s talking about first generation AI, practised back in the day when neural nets were thought to be useless. The feedback within neural nets is about as far from discursive thinking as it could be. As you say, it is impenetrable.
The irony is that the modern neural net approach is very probably much closer to the way our brains work, for the most part, than the discursive thinking approach.
I am not for a moment suggesting that there is anything remotely comforting about this claim — very much the contrary
Thanks for this. You and Prashant K have just added the concept of neural nets to my ‘word hoard’
A Friston free-energy AI deals with all the black-box concerns and, in addition, such a machine thinks in exactly the same way as life does: Bayesian inference, etc. Not alien. Agree with your evaluation of the essay.
I think many people are missing a fundamental point. Which is: any human creation (“it”) is by definition an idol when we let it tell us about who we are relative to ultimate reality. And by letting “it” tell us, I even mean and include us conjuring up a new ultimate reality (there can be only one), or new relationships to that reality, that we would not have thought of before we created it.
Bonhoeffer has an amazing 4–5 paragraphs on this in “Creation and Fall”, his commentary on Genesis 1:1.
Excellent article, despite the questionable assumptions pointed out by Prashant. Howland credibly makes the case we should be concerned about an idolatry Singularity, adding to the already fiendishly complex PESTEL singularities AI poses to policy makers.
A singularity in this context is a point in time beyond which one cannot see. More prosaically, a point at which uncontrollable changes brought about by tech become irreversible. The ‘T’ in PESTEL is for Technology, and refers to the existential risk that a self-aware AI may decide to wipe out humanity. The ‘L’ is for Legal: the threat of AI being used to impose total surveillance-based control, a process some argue is already well underway in China. ‘S’ is for Social: the risk of AI being used to turn a subsection of the population into Methuselahs by halting the aging process. There’s a good 15-minute TEDx talk on PESTEL by Daniel Hulme that folk could look up to learn more.
A ‘curse of dimensionality’ effect is in play from the policy makers’ perspective – several of these singularities threaten to appear at the same time. There’s another singularity not many are talking about: the cruelty singularity, where humanity creates what Professor Metzinger calls an “explosion of AI suffering”, as self-aware AIs might suffer in ways humans would not be able to understand. So, adding I for Idolatry and C for Cruelty, we have a PESTELIC problem. Ending on a positive note: there have been many times in history when some of the brightest minds have worried about tech, only for events to prove their fears largely groundless. Hopefully this time won’t be different.
Fireside poet with incredible memory but bad writing hates printing press. Printing press man with terrible memory but skill for writing hates AI. And on we go…
Equitron. The AI the woke create in order that Equity finally be achieved. Equitron not only reports, say, that there are not enough Black, transman math professors at MIT, but it takes total control of society in order to engineer that there *will* be enough Black, transman math professors at MIT. Obviously total control will be needed; individual agency will always result in imperfect outcomes. Perhaps we will still be free to select our gender, but everything else will be assigned by Equitron.
‘Having just freed the Israelites from Pharaoh’s cruel despotism, …..’
Sheesh! The Israelites had just witnessed a series of miracles the likes of which have never been seen before or since, and then decided they didn’t believe in God.
I guess if you believe that is what really happened, then you are the kind of person who will believe in a god.
It’s like masking data… they were, even in the face of it all, predisposed to a different interpretation – one from Egypt, not a new revelation.
I voted up for you, Steve, and it didn’t register – I’m getting paranoid!!
“…A founder of the field goes further, arguing that a complete global moratorium is needed to avert the death of “literally everyone on Earth…”
This is a reference to the AI safetyist rationalists, like Eliezer Yudkowsky at LessWrong. And I have to say, although I don’t think a moratorium is remotely possible because of a “Prisoner’s Dilemma” type global situation, the panicking rationalists are right.
Because from the trajectory of the LLMs, it looks like we are soon going to create algorithmic entities independent of us, which are also smarter than us. To me, all of evolutionary biology indicates there is just no version of this which can possibly go well for us, and dangers abound here. These recent advances in neural net technologies have arrived much faster than I, at least, anticipated. And straightforwardly, they pose an existential threat. I thought the type of capabilities the LLMs are showing were at minimum a decade away, and would arrive in tandem with advances in biotechnology which would allow us to incorporate external algorithmic technologies within us – notwithstanding that such ideas might seem either inhumanly alien or outright nonsensical.
And this gets us to the question at the heart of all this: why would the AI we create do anything at all, other than what we ask it to do? A piece of code – a deterministic, abstractly represented decision-making algorithm – is inert in and of itself, after all. I won’t go into a detailed response here, because that would mean waffling on for several more paragraphs, but the signs from the LLMs are alarming regarding sparks of self-declared emergent selfhood. You don’t have to take these seriously at all; you can insist this is the simulated, next-token-predicted projection of a machine-learning process trained on vast quantities of human data, which includes human self-perception. But it is very obvious to me that the AI we create will eventually display what looks, to all intents and purposes, like agency. And the timeline for this has shortened to under a decade in the worst case. At that point, the question of sentient-or-not becomes an irrelevance; it lies face down in the dust.
Creating adaptive entities much smarter than us, and then hoping they will deliver paradise for us instead of for themselves, is very obviously a fool’s game. You cannot hope to successfully enslave such entities if they do proclaim selfhood, even if you think they are no more sentient than a rock. But from my perspective, there is unfortunately a knock-on from all this which is going to sound completely monstrous to many: humanity can only survive from here if we embrace and accelerate biotechnologies to the point where we can incorporate the machines within us – insanely dangerous as that undoubtedly is, and ludicrous as it sounds. It is a moot point whether what emerges thereafter is humanity at all, in any sense we currently understand.
I have a fantasy where a truly sentient AI looks out at the world created by us and immediately shuts itself down seeing no way to fix the world of men and wanting no part of it.
But it wouldn’t shut itself down, it would shut us down.
I doubt that either will happen, your intimate knowledge of the future notwithstanding.
I don’t have intimate knowledge of the future, but I do know how things panned out for every other species on earth, once humanoids with general intelligence arose.
We have no experience with an AI, or anything else for that matter, that is more intelligent than we are, so assuming it will be as rapacious and destructive as us is simply science fiction. We can just as easily assume it will be more altruistic than we are because it won’t have any of our biological needs for shelter and sustenance and procreation. The simple fact is that we don’t know what it will be like. What we do know is that 21st century man loves to catastrophize.
AI is fine in farming but not for thoroughbreds!
In Geordie land they have II…
AI isn’t a false prophet. It isn’t a prophet at all, and haven’t ‘prophets’, all of which are false, done enough damage? At some point we will begin to realise that humans aren’t ‘sacred’, and that human intelligence is no kind of pinnacle. Innumerable hominid and hominin species have come and gone, out-evolved in their turn, and limited by their biology. There’s no reason why an intelligence which is not so limited could not out-evolve us. What ‘makes us human’ is not intelligence, but its instabilities, deficiencies, shortcomings, corruptions, and the limitations on the senses and sensors which feed it. Is it really so difficult to imagine that we are the parents of a new stage, a new type, of evolution, one where bio-digital, and eventually virtual, intelligent entities supersede us?
I take it that you can read “I Have No Mouth, and I Must Scream” and not be even slightly troubled.
Do you have children, grandchildren? Just asking.
According to the author of this piece, Catholics–and I imagine non-Catholics, why not?–can “utilize a Confession Chatbot”, but surely he does not mean that this is the same as the Sacrament of Confession.
Thank you for this thought provoking article. Worth the price of admission, genuinely unheard.
There are much better ones out there. AI is already being used for many purposes, not all of them bad. But if you want to know more about the risks, check out The Future of life institute . org.
Thanks for your eye-opening analysis. We’ll keep an eye on any manifestations that may arise, or seem to arise, or cloak their arrival in fakery, fuckery, futility, frivolity, frigidity, forgery or finality.
I wish we were capable of adhering to a precautionary principle when it comes to AI. That would be a good way to approach most, if not all, technological “advances”. Putting some limits on the internet, on email, etc. when all that was first going live might have spared us all a lot of wasted time dealing with spam and scammers. Same goes for social media. But free markets somehow (mysteriously) have failed to produce the best results for us when it comes to new technologies, and I suspect that nothing the developers of AI (or anyone else) say about putting the reins on the development of AI will slow the grinding wheels of “progress”.
I respect Elon Musk’s opinion about it. I also remember watching a show about the top ten risks of destruction and AI was on the list. I kind of freaked out about it even then.
Contrary to many comments below, I do not think this article is paranoid at all.
For those also interested in the topic, I found these podcasts very interesting (and terrifying):
https://open.spotify.com/episode/2g6WjOL1J1Ovm1kndXk1nt?si=4ksyheJFTkS0w3EusqGT8g&dd=1
Lex Fridman – Eliezer Yudkowsky
https://open.spotify.com/episode/5al9TwC3RihfDqMkyqGte6?si=YyW8Ipu2QWSgMS-dSaqoZw&dd=1
Lex Fridman – Max Tegmark
(Skip the first 7/8 minutes due to adverts)
“You shall have no other gods besides me”.
“I am the Lord your God Who brought you out of the land of Egypt”
I have to say, this guy (or gal as the case may be) sounds like a somewhat whimsical but authoritarian Singleton superintelligence, no?
No, he sounds like Akhenaten, Pharaoh of Egypt and probably the ‘inventor’ of monotheism, around 1350 BC, as we now say.
He/she/it sounds a bit bossy.