Idolatry is intrinsically paradoxical. Although it enslaves humans to undisciplined desires and pleasing falsehoods, it springs from the will to mastery. It’s essential that the Golden Calf can’t talk, let alone make mountains smoke and skies thunder, as God does at Sinai. It was built to justify self-indulgence, not restrict it.
In the three millennia since the story of the Exodus was first told, though, we’ve grown more sophisticated. Today’s most seductive idols do talk back, although only in the ventriloquised voice of human beings, fabricated from a digital sea of babble.
I’m referring to advanced Artificial Intelligence, which excites emotions from dread to exultation. Last month, an AI model called GPT-4 placed in the 90th percentile on the Uniform Bar Examination, causing lawyers to fear for their jobs. Medical professionals are concerned about AI’s ability to fabricate research papers, and ChatGPT has been implicated in several instances of libel. Meanwhile, agencies such as the Department of Defense and the National Science Foundation are spending taxpayer dollars on developing AI-powered “military-grade” machinery to censor information and “automate the production and dissemination of state propaganda”.
But while the government is all in on AI, some experts have grave doubts. Citing “profound risks to society and humanity”, more than 1,000 tech leaders and researchers called for a six-month pause on the development of AI. A founder of the field goes further, arguing that a complete global moratorium is needed to avert the death of “literally everyone on Earth”. These suggestions pose a terribly serious question that sounds like a joke: What would Xi do?
As we enter the brave new world of AI, we might consider what the Western tradition can teach us about the deepest issues raised by sophisticated machine-learning processes. And the most fundamental of these may be idolatry — a problem that, as the ancient Hebrews and Greeks understood, is political and psychological as well as theological.
The term “idolatry” derives from two Greek words. An eidōlon is an insubstantial form, like the “phantoms of mortals outworn” that Odysseus encounters in Hades. Latreia is a kind of servitude: the state of a hired labourer, or the service to the gods that constitutes worship. Idols, the Jewish Encyclopaedia informs us, “are graven images, unshapen clods, and, being the work of men’s hands, unable to speak, see, hear, smell, eat, grasp, or feel, and powerless either to injure or to benefit”.
God begins his proclamation of the Decalogue in Exodus 20 by prohibiting the veneration of idols: “You shall have no other gods besides me. You shall make you no carved likenesses and no image of what is in the heavens above or what is on the earth below or what is in the waters beneath the earth. You shall not bow to them and you shall not worship them.” Having just freed the Israelites from Pharaoh’s cruel despotism, God warns them against proceeding to enslave themselves to their own handiwork: things endowed, by the psychological mystery of collective self-deception, with seemingly independent power and meaning.
The self-enslavement of idolatry is a theme to which later thinkers return repeatedly. Prisoners in Plato’s Cave are enthralled by the shadows of idols manipulated by unseen puppeteers. Marx discusses the “fetishism of commodities” under capitalism and the alienation of the worker from the products of his labour. And there is no shortage of science-fiction books describing dystopian futures in which human beings are tyrannised by machines.
But why do human beings raise up idols, and bow down to them? The Bible addresses this question when, in Moses’s absence, the rebellious Israelites compel Aaron to “make us gods”:
“And Aaron said to them: ‘Take off the golden rings that are on the ears of your wives, your sons, and your daughters, and bring them to me.’ And all the people took off the golden rings that were on their ears and brought them to Aaron. And he took them from their hand and he fashioned it in a mould and made it into a molten calf. And they said: ‘These are your gods, O Israel, who brought you up from the land of Egypt.’”
The meaning of this passage is indirectly elucidated by the Athenian dramatist Aristophanes, who knew nothing of the Bible, but understood the secret of pagan theogenesis. In The Clouds, a man hoping to wriggle out of his debts goes to the school of Socrates to learn the art of unjust speech. When this deadbeat swears to pay him, Socrates replies: “What sort of gods will you swear by? For first of all, gods are not current coin [nomisma] for us.” The Greek word for coinage is etymologically related to nomos: law, custom or convention. Gods, in other words, are human productions. Athenian silver tetradrachms bore the image of the city’s patron goddess; every community, Socrates implies, mints its own deities, who function as tokens of exchange in its political and theological economy only so long as their value continues to be widely acknowledged.
Aaron’s manufacture of the Golden Calf is just such an act of coining — one that, from the Bible’s perspective, is akin to counterfeiting. Before Moses can bring the divine Word, inscribed on tablets, down from the mountain, the Israelites — a fused, undifferentiated mass, like their melted earrings — speak for the amalgamated god they have so forcefully commissioned by twisting the phrase that introduces the Decalogue: “I am the Lord your God Who brought you out of the land of Egypt”. This act of bad faith only partially obscures the fact that the mob is worshipping its own unshackled collective power.
For at stake in the episode of the Golden Calf is not simply whether the Israelites will live by God’s law, but whether they will succeed in authoring a radically revisionist history of the Exodus. In effectively declaring that they freed themselves, the Israelites erase all debts to God and Moses. And to their slavish minds, freedom means licence; the Israelites’ implicit self-deification authorises the next day’s feasting, drinking, and sexual play. The fitting symbol of the people’s self-exaltation is a mass of gold — a precious metal that, even in Pharaoh’s day, was convertible into the primary objects of appetite, including power, honour, comfort and pleasure.
Today, the World Economic Forum imagines that AI will lead us to a less primitive “utopia”, a 21st-century Promised Land in which people will “spend their time on leisure, creative, and spiritual pursuits”. A safer bet would be drugs and sex robots. Ninety years ago, John Maynard Keynes prophesied, with what looks like eerie accuracy, that machines would make labour obsolete within a century. The prospect filled him with “dread”, because very few people have been educated for leisure.
Judging by the bad behaviour of the wealthy, an “advance guard… spying out the promised land of leisure for the rest of us and pitching their camp there”, Keynes found the outlook “very depressing”. And to those who, freed from labour, looked forward to doing nothing but listening to songs, he replied: “it will only be for those who have to do with the singing that life will be tolerable and how few of us can sing!”
In 2018, an article in Scientific American predicted that advanced AI will “augment our abilities, enhancing our humanness in unprecedented ways”. This Pollyannaish prognosis ignores the fact that all human capacities tend to atrophy in disuse. In particular, AI is inexorably changing the way we think (or don’t). Students now use ChatGPT to do their homework for professors who perhaps rely on it to write their lectures. What makes this absurd scenario amusing is not just the thought of machines talking to machines, but that intellectually lazy people would employ a simulacrum of human intelligence for the sake of mutual deception.
Compared with the natural endowment of human intelligence, the artificial kind is an oxymoron, like “genuine imitation leather”. AI is a mechanical simulation of only one part of intelligence: the capacity of discursive thinking, or the analysis and synthesis of information. Discursive thinking deals with humanly constructed tokens, including numerical and linguistic symbols (or, in the case of AI, digitally encoded data). While human intelligence can compare these tokens with the things they represent, AI cannot because it lacks intuition: the immediate cognition of reality that roots us in the world and directs our energies beyond ourselves and the operations of our own minds. It is intuition, for example, that tells us whether our nearest and dearest are fundamentally worthy of trust. (Needless to say, intuition is fallible, like any other intellectual operation.)
AI has no direct and concrete ties to the actual world, to which it relates only through the medium of binary notation. Self-enclosed in the electronic ether, it dwells nowhere, fears and loves nothing, and has no individual point of view. Does it make sense to grant autonomy and agency to an intelligence that has no natural connection to human needs?
The strength of sophisticated AI is its capacity to sort through massive quantities of data, aggregating and disaggregating discrete bits of information in potentially meaningful ways. This is a promising capability with applications in multiple fields, from medicine to transportation. But AI’s productions are artificial regurgitations of material skimmed from vast but shallow pools of digital content and manipulated in ways limited, at least in principle, only by the constraints programmers impose.
This can be enormously useful when it comes to detecting patterns of information that would otherwise be invisible to the human eye. Many problems, however, cannot be meaningfully approached by mining Big Data. Asked questions of an ethical or political nature, AI can either refuse to give a definitive response, or it can scour databases for opinions and return what it calculates is the most likely answer. But whether any answer generated in this manner is just or wise can only be a matter of happenstance. This is due, in part, to programming bias, including over- or under-weighted data sets. When ChatGPT or Google’s chatbot Bard is asked to evaluate Biden and Trump, for example, the Leftward slant is obvious. (Try asking ChatGPT to write essays comparing each of these presidents to Stalin.)
A more fundamental problem is that machine learning is simply not equipped to sift information according to ill-defined qualitative measures like justice or wisdom. This would be the case even if just or wise perspectives were common on the internet, which they are not. Although no one fully understands how advanced AI works, the old saying applies no less to it than to the simplest computer programmes: “Garbage in, garbage out.”
To the extent that AI remains within the limits of its capabilities, it is because programmers have intentionally constrained its activity. What happens when, for their own all-too-human reasons — the desire for power, honour, and wealth; national pride; or simply the fear of losing their jobs — they remove these constraints? Or when, having well and truly lost the habit of thinking for ourselves, people in general are willing to grant AI authority over matters it is not equipped to handle? Would anyone be surprised if tomorrow someone launched an AI-driven Justice App that promises to settle practical issues of distribution and retribution on the spot? Or if, taking our cue from a well-known software company (NYSE: ORCL), we were someday soon to treat that App — or some other algorithmically generated distillation of aggregated opinions — as though it were an oracle?
That day is fast approaching, if it is not already here. AI is now substituting for clergy in religious rituals and ceremonies, and Catholics can even utilise a Confession Chatbot. A recent article that sees a use for AI in writing sermons nevertheless observes one limitation on the pastoral employment of machines: “speaking God’s word to a congregation or to an individual requires [personal] relationship.” But AI is incapable of any direct relationship with human beings, including one that is open to the possibility of faith. How could a congregation trust a religious leader, much less a God, that cannot reciprocate this trust?
The Israelite experiment with idolatry ended in disaster. After the episode of the Golden Calf, Moses ordered the Levites to take their swords and purge the camp of wrongdoers. “Slay every man his brother,” he commanded, “and every man his companion, and every man his neighbour.” Should we not expect a similarly bloody consummation when, heeding the utterly irresponsible voice of AI as though it were the Word of God, we once again reach peak idolatry?