In parts of Thailand, thanks to a confluence of modern technology and traditional beliefs, a peculiar transport service has emerged. The anthropologist Scott Stonington calls them “spirit ambulances”. Some Thai Buddhists feel morally obliged to preserve the life of an ailing mother or father by any means possible — even if there is no realistic hope of recovery, and even if it goes against the parent’s own wishes. Yet these same Buddhists believe that death should occur in the home, in the presence of family. So the dying parent is kept at the hospital until the last moment, receiving the best treatment medicine can provide, before being whisked home in a specially designed vehicle, the spirit ambulance.
In his new book Animals, Robots, Gods, Michigan anthropology professor Webb Keane uses such case studies to tease apart the intuitions of his Western readers. Medicine is only one area where the scientific worldview can blend seamlessly with local customs and perspectives. A Buddhist temple in Japan provides funeral rites for broken robotic dogs. In Taiwan, “one of the most prosperous, highly educated and technologically sophisticated societies in the world”, urban professionals maintain relationships with puppets and religious icons as well as video game avatars. Modern practices and technologies are adopted by cultures with widely differing views of justice, the self and the cosmos. This raises important questions as algorithms and autonomous robots gain a more active, pervasive role in our lives. Whose view of the world will be encoded in these systems, and whose moral principles?
Silicon Valley would have us believe the problem can be outsourced to experts: to professional ethicists, scientific advisors, lawyers and activists. But as Keane emphasises, these authorities tend to come from “a small subset of the tribe WEIRD”, that is, Western, Educated, Industrialised, Rich and Democratic. That they arrogantly claim to speak on behalf of a universal humanity does not make their assumptions any less partial and idiosyncratic. “There is no good reason,” Keane writes, “to take the WEIRD to be an accurate guide to human realities past, present or future.”
To see the force of Keane’s point, just consider Ray Kurzweil, a prominent thinker on AI and principal researcher at one of the world’s most powerful companies, Google. Kurzweil also has a book out, The Singularity Is Nearer, in which he resumes his decades-long project of encouraging us to merge with super-intelligent computing systems. Kurzweil does not think in terms of cultures, traditions or ways of life, only in terms of generic “humans”. What we all want, he asserts, is “greater control over who we can become,” whether by taking drugs to manipulate our brain chemistry or undergoing gender reassignment surgery. So integration with computers must be desirable, for “once our brains are backed up on a more advanced digital substrate, our self-modification powers can be fully realised”. Humans will then live indefinitely, and become “truly responsible for who we are”. This vision is both grandiose and astonishingly narrow-minded. Trapped in his utopian variant of secular materialism, Kurzweil seemingly cannot fathom that a good life might be a finite one, that other principles might claim sovereignty over individual desire, or that acceptance of our limitations and fallibility might be integral to happiness.
Kurzweil also endorses the Asilomar AI Principles (drafted at a conference in California, of course), which state that intelligent machines should be “compatible with ideals of human dignity, rights, freedoms, and cultural diversity”. That sounds lovely, but try applying these principles to the Thai Buddhists we started with. Whose dignity and freedom should take precedence — the suffering parent who does not want further treatment, or the sons and daughters who feel compelled to provide it regardless? Should public health systems pay for “painful and expensive trips to the hospital and back”, even if they are “in biomedical terms, pointless”? How should we weigh karma as a factor, given its consequences for the Buddhist cycle of rebirths?
In Animals, Robots, Gods, Keane explores moral life beyond the abstract categories favoured by the WEIRD. If they are to be something more than thought experiments, he points out, moral systems must respond to the conditions that exist in a particular time and place; they must provide “possible ways to live”. In other words, “you simply cannot live out the values of a Carmelite nun without a monastic system, or a Mongolian warrior without a cavalry.” This may sound like the dangerous relativism of an anthropologist. Yet it simply illustrates that, as the philosopher Alasdair MacIntyre argued, the search for moral truth is necessarily an ongoing quest: a constant effort to develop our principles in relation to new circumstances. Of course, the principles and circumstances we start with are never ours to choose.
That said, there are commonalities across a range of moral cultures. One is fetishism, whereby, with varying degrees of seriousness, cultures assign quasi-human (or super-human) capacities to non-human things. This tendency allows for a variety of moral relationships beyond the immediate human sphere, including with animals, deceased or comatose relations, inanimate artefacts and invisible deities. Together with our locally rooted moral instincts, such tendencies shape our relationship with technology as well. “When people design, use and respond to new devices,” Keane writes, “they are drawing on existing habits, intuitions and even historical memories.” Such devices always arrive in cultures with their own ways of personifying the impersonal. In Japan, Keane suggests, the legacy of Shintoism makes people more willing to treat robots as living creatures, but also less worried about their moral agency; they assume that robots, like people, will be constrained by social roles and structures.