In his 1977 triumph, writer Chris Boucher defined Robophobia as a “pathological fear of robots brought about by their lack of body language”. Forty-one years ago – four decades before we became fashionably alarmed about the impact of AI on our lives – Boucher imagined a world utterly dependent on robots: a humanoid slave race, living ‘among’ us, but not ‘of’ us.
In Boucher’s story, murders are taking place on a mining craft — a Christie-esque setting, claustrophobic and closed-off, so the murderer must be among the assembled and isolated cast, who are being bumped off, one by one. One character, Poul, forced to confront the hitherto literally unimaginable concept — that the taken-for-granted robot crew, which far outnumbers the humans on board, may contain the culprit — has a complete nervous breakdown.
He suddenly sees the robots, not as an indivisible, near-invisible mass of hidden-in-plain-sight workers, but as… people. Almost-people: Boucher’s robots are humanoid, stylised, calm: they look like ‘us’, but they are (clearly) also not-us. It’s therefore narratively convincing when Poul begins to scream about “the walking dead”, and collapses under the weight of his dread.
That this 1977 masterpiece was an episode in a Tom Baker season of ‘Doctor Who’ partly explains the programme’s longevity. Every so often it produces drama that qualifies as art: science fiction, on the surface about some silly made-up society a billion light years from Earth and impossibly far in the future, yet which compels the viewer to re-examine things we take for granted: what does it take to be human? What are the minimal requirements for entry?
Before you scoff at my childhood fondness for a science fiction murder story with ‘impossible’ robots and an invented phobia about them, consider this highly contemporary and entirely non-fictional scientific investigation, by Horstmann and co-workers, into what humans do when confronted with something that, while clearly un-human, exhibits human-style behaviour.
In the experiment, participants were randomised to a set of tasks with an obviously-toy robot under a number of different conditions (sometimes the robot was chatty, sometimes formal, that sort of thing). After a lot of deceptive faffing, the participants were asked to switch the machine off. Sometimes the robot would “object” to this (“No! Please do not switch me off! I am scared that it will not brighten up again!”); sometimes it wouldn’t utter a damn thing.