The potential risks of mismanaging Artificial Intelligence (AI) are phenomenal.
Estimates of UK jobs that could be replaced by AI and related technologies over the next two decades tend to range from 22% to 40%.
We have already witnessed how data analytics can be malignly used in political campaigns. This capacity will become more sophisticated, possibly at the expense of the democratic process itself.
Possibly even more potent is the emotion-recognition software being trialled in marketing, which gauges the efficacy of advertising by judging facial expressions. It suggests business has the potential to reach into our lives in ways Orwell imagined only a totalitarian state would.
More generally, we have seen the filter bubble effect on civic and social life. Social media feeds us information that aligns with our preconceived notions of the world, and closes us off from challenging information and argument.
Yet the challenges facing the country today appear inversely related to the capacity of politicians and policymakers to discuss, let alone resolve, them.
In the face of escalating authoritarian populism, for example, where is the political diagnosis and response? Where is the defence of liberal democracy?
Or, in our post-EU referendum world, where do we consider the issues and feelings that ushered in the result, not just the technical aspects of Brexit?
Arguably, politics has lost its ethical grip, reduced to various forms of technocratic administration. Today's populist uprisings reflect a backlash against such managerialism.
Renewing public confidence requires a different conversation: one that addresses moral and cultural questions about the lives we wish to live, and how the present disparity between that ideal and reality can find political expression.
The same tension surrounds our limited discussion of robotics and AI. These have the potential to affect all policy fields, from education to the labour market, from policing to health and social care.
Yet current political thinking is reactive – geared only towards ensuring Britain is at the forefront of technological progress. On one level, this utilitarian stance is understandable. But should we not begin by discussing the role technology should and should not play in our lives? A conversation far beyond questions of economic utility.
Restricting public debate to policy rather than ethical concerns is especially problematic when considering AI, for politicians lack technical and scientific expertise in these areas.
Being unable to evaluate the claims of developers means that both politicians and the public are prone to being swayed by either apocalyptic or techno-utopian narratives.
For example, many technologists adhere to ‘techno-solutionism’, the idea that all ‘problems’ which humanity faces can be ‘solved’ using technology. This confidence often goes hand in hand with an innate libertarianism: as the role of technology expands, so should the role of the state contract.
Then there are those who approach these issues from a transhumanist position. This asserts that technological change creates the opportunity to transcend the human condition, and that this is to be celebrated. Resistance is deemed parochial or nostalgic. In fact, transhumanism is often tagged as a 'modern eugenics'.
What happens when transhumanist thinking informs the technologists themselves? Nick Bostrom, for example, co-founded the World Transhumanist Association (now Humanity+) and directs the Future of Humanity Institute at Oxford University, which regularly produces policy recommendations for government. That is surely a conflict.
Policymakers need to avoid being captivated by the promise of technological progress without an appreciation of the philosophical assumptions that might inform this thinking.
This applies to libertarian and transhumanist thinking on both the radical Left and Right. These are deep waters and should be dominating political debate. Yet discussion is virtually non-existent.
We have been warned: the House of Lords’ Report on AI said, “the most challenging point relating to AI and democracy is the lack of choice that is offered to the population at large about the adoption of technology. It is, to say the least, undemocratic”.
But as it stands, the policy proposals to meet these challenges are shockingly weak: that developers undergo training in ethics as part of their computer science degrees, that companies ensure their workforces are diverse and that individuals made redundant, perhaps repeatedly, by AI are enabled to train for a new career.
Broader in scale is Universal Basic Income, a more radical proposal floated to ensure that those who lose their jobs are not made destitute. Yet the State would take on an enormous welfare burden alongside a shrinking tax take.
To fill the deficit people like Bill Gates propose a robot tax. But trying to define a robot would be a legal and regulatory nightmare. When policy is solely reactive, bending to suit technologists’ goals, it can fast become incoherent.
Instead we should be returning to first principles, and discussing the value we place on work, freedom, privacy, community and justice. In short, about what we want society to look like.
If we do not, policymakers run the risk of slipping into techno-solutionism, elevating technological and economic advancement above an appreciation of how we want to live.