
Can a computer be moral? A new book about AI comes to an alarming conclusion

All too human. Credit: Matt Cardy/Getty


January 25, 2021   6 mins

“Ain’t I a woman?” asked the American abolitionist Sojourner Truth in 1851. In terms of human worth and value, was she not the equal of any white woman? Well, no. Not according to modern face recognition software. As computer scientist Joy Buolamwini discovered when researching AI bias some 170 years later, the algorithm thought Sojourner was a man.

Many image classification systems tend to misclassify black women’s faces as male. That’s if they recognise them as human faces at all. Google’s image classification system got it badly wrong in 2015 when it mislabelled black people as ‘Gorillas’.

It’s not unusual for algorithms to make mistakes about humans. Sometimes those errors are harmless: you might see adverts for inappropriate products, or jobs for which you are wildly unsuitable. Other times they are more serious, sending people to jail with the wrong sentence, or rejecting job candidates because they don’t resemble recruits from past years.

It’s not just that the programs aren’t intelligent enough, it’s that they don’t share our values. They don’t have goals like “fairness” or even “not being racist”. And the more we delegate decisions to machines, the more that matters.

That’s the problem Brian Christian tackles in his new book, The Alignment Problem, subtitled How can machines learn human values? One pithy question encompassing several others: how can machines learn anything? What are “human values”? And do either machines or humans learn values in the same way they learn information, strategies, skills, habits and goals?

The book begins with an account of how machines came to learn at all. It’s a history of brilliant and often odd people, ideas that seemed absurdly far-fetched, inspiration found in unlikely places, and creators confounded by their own creations.

From the early 20th century, computer development and research into human cognition were parallel projects with myriad connections. Engineers looked to the human mind to help them design machines that would think, but equally, neuroscientists and psychologists looked to mathematics and logic to build conceptual models of human thought. The machines, which began with the simple task of recognising whether a square was on the left or the right of a card and progressed to playing Go better than any human, were seen as working models of the human mind.

In this way, familiar ideas in computer science map, very broadly, onto more familiar ideas in psychology. For example, cognitive scientist Tom Griffiths builds reward-motivated AI systems. He describes his daughter dropping food on the floor to earn praise for sweeping it up again in the same terms he might use at work. “As a parent you are designing the reward function for your kids, right?” He learned to praise her for a clean floor, not for the act of sweeping up.
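
The point generalises: a learning system optimises whatever signal it is given, so rewarding the action rather than the outcome invites gaming. A minimal sketch of that distinction in Python, with made-up function names (my illustration, not the book's):

```python
# Toy illustration: two ways to reward a "tidying" agent.
# Rewarding the action invites gaming; rewarding the resulting state does not.

def reward_for_sweeping(action: str) -> float:
    # Pays out every time the agent sweeps, so dropping food and sweeping
    # it up again earns more than never making a mess at all.
    return 1.0 if action == "sweep" else 0.0

def reward_for_clean_floor(floor_is_clean: bool) -> float:
    # Pays out only while the floor is clean, however it got that way.
    return 1.0 if floor_is_clean else 0.0

# An agent maximising reward_for_sweeping profits from creating messes to
# sweep up; one maximising reward_for_clean_floor does not.
```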

When they couldn’t get their computer programs to succeed at given tasks, researchers looked back at humans to help them see what was missing. Machines quickly overtook humans in certain ways: speed of logical reasoning, or processing more information than one human could handle at once. But that was not enough. What did neural networks lack that real human brains use to solve new problems and learn to navigate new environments?

Computers built to mimic human logical reasoning were missing fundamental human drives, among them curiosity. Novelty and surprise, it turned out, were as important as information processing, not only for human life but for machines playing the Atari game Montezuma’s Revenge. The human brain’s dopamine system suggested a new model to reward experimentation and persistence in game-playing computer programs, and that equipped them to win.
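
One common way to make “novelty” concrete, sketched below in Python, is to add an exploration bonus to the game’s own score that shrinks as a state is revisited. This is a generic count-based scheme for illustration, not necessarily the exact mechanism the researchers used:

```python
import math
from collections import defaultdict

# Count-based novelty bonus: the agent's effective reward is the game score
# plus a bonus that decays each time a state is revisited.
visit_counts = defaultdict(int)

def shaped_reward(state, game_score: float, bonus_scale: float = 1.0) -> float:
    visit_counts[state] += 1
    novelty_bonus = bonus_scale / math.sqrt(visit_counts[state])
    return game_score + novelty_bonus

# In Montezuma's Revenge the score is almost always zero early on, so the
# novelty bonus is what pushes the agent to keep exploring new rooms.
print(shaped_reward("room_1", 0.0))  # first visit: bonus of 1.0
print(shaped_reward("room_1", 0.0))  # second visit: bonus of ~0.71
```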

Alongside the machines, their human creators were also learning: not only how to build them, and what data to train them on, but also how to teach them. Learning by imitation, which comes naturally to humans almost from birth, can also work for robots using Artificial Intelligence. With a method called Inverse Reinforcement Learning, AI can even infer what a human is trying to do, and outstrip its teacher, in complex tasks like flying helicopter drones.
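
In outline, inverse reinforcement learning turns the usual problem around: rather than being given a reward function and learning behaviour, the program is given behaviour and asked which reward function best explains it. A deliberately simplified sketch of that inference, with invented data (not the helicopter work itself):

```python
# Toy inverse reinforcement learning: given demonstrations, pick the candidate
# reward function under which the demonstrated choices look most optimal.

demonstrations = [
    # (options available, option the demonstrator actually chose)
    (["hover", "climb", "loop"], "loop"),
    (["hover", "loop"], "loop"),
    (["climb", "hover"], "climb"),
]

candidate_rewards = {
    "prefers_safety":     {"hover": 1.0, "climb": 0.5, "loop": 0.0},
    "prefers_aerobatics": {"hover": 0.0, "climb": 0.5, "loop": 1.0},
}

def explained_choices(reward: dict) -> int:
    # Count how often the demonstrator's choice maximises this candidate reward.
    return sum(
        1 for options, chosen in demonstrations
        if chosen == max(options, key=lambda a: reward[a])
    )

inferred_goal = max(candidate_rewards, key=lambda n: explained_choices(candidate_rewards[n]))
print(inferred_goal)  # "prefers_aerobatics": the goal the learner can now pursue itself
```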

Machines can learn to outperform their human teachers at playing games, sorting images, or even controlling a vehicle on real roads. A robot can imitate your behaviour, and infer your goals, but can it learn the right thing to do? Can machines learn human values?

Ask a philosopher this question and you would probably get a question in return: What are “human values”? Most of us muddle along with an impure, cobbled-together morality made up of habits, boundary lines we took from family, religion, law or social norms, other lines we drew ourselves as a result of experience, instincts of fairness, loyalty, love and anger, and intuitions that we’d struggle to explain. How we behave is seldom as simple as applying an ordered list of moral principles. We’re influenced by what other people are doing, whether other people are watching, and who we think will find out what we did.

We certainly don’t have one, unanimous set of Human Values ready to be inserted into a computer. So if we humans sometimes struggle to know what is right or wrong, how can we expect a machine to get the correct answer? This is a problem that Brian Christian is slow to address, though it could just be that he comes at the question almost from the point of view of an AI program.

Christian turns to a problem that can be expressed to a machine in mathematical terms: uncertainty. How do we make decisions with incomplete information? That is something that can be programmed. Just as the human brain turned out, after all, not to run on binary logic, computers designed to use probability, rather than a deterministic picture of the world, cope better with real conditions. Can this approach help either humans or robots make decisions when we are not sure what is right or wrong?
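
Framed that way, a decision under uncertainty becomes arithmetic: weight each possible outcome by its probability and pick the action with the best expectation. A minimal sketch with invented numbers (the hard part, as the following paragraphs show, is agreeing what the outcome values should be):

```python
# Expected-value decision making under uncertainty, with invented
# probabilities and payoffs, just to show the arithmetic.

actions = {
    # action: list of (probability, value-of-outcome) pairs
    "cautious":  [(1.0, 5.0)],
    "ambitious": [(0.6, 10.0), (0.4, -2.0)],
}

def expected_value(outcomes) -> float:
    return sum(p * v for p, v in outcomes)

best = max(actions, key=lambda a: expected_value(actions[a]))
print(best, expected_value(actions[best]))  # ambitious 5.2
```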

Christian likes the example of the effective altruism movement, which uses a strong utilitarian approach to morality. The right decision is the one maximising the good that will result from the act. For example, Effective Altruism recommends funding the fight against malaria as the most good you can do with your money.

This approach lends itself more neatly to mathematical reasoning than the usual mess of categorical principles, consequentialist calculations and spontaneous intuitions that most of us use to make decisions. It also poses the question: what is “good”? But if we can agree on a broadly acceptable answer, it would be easy for a machine to apply.

AI designed to make optimal utilitarian decisions might result in more good overall, but it might also mean, for example, sacrificing innocent lives to save others. It would not always be aligned with our moral instincts, or with the way humans like to treat one another.

If moral values can’t be expressed as mathematics, can machines ever learn to share our values? Researchers trying to improve how machines learn stumbled on an illuminating insight. Stuart Russell was part of a team using Inverse Reinforcement Learning, in which a program learns by observing what a human does and inferring what that person is trying to do. Building on IRL, his team developed Co-operative Inverse Reinforcement Learning, or CIRL. With CIRL, the robot takes on the (inferred) human goal as its own goal, but as helper, not usurper. Instead of driving, or flying, or playing Go better than its human teacher, the AI joins the human’s support crew. The human’s goal becomes a shared goal. Machine and human goals are aligned.
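
A toy flavour of that shift, much simpler than Russell’s formalism: the robot does not know which goal the human holds, so it watches, updates its belief, and then acts in service of whichever goal the human seems to be pursuing rather than optimising any objective of its own. All the names and numbers below are invented for illustration:

```python
# Toy CIRL-flavoured helper: the robot's objective *is* the (unknown) human
# objective, which it infers from the human's behaviour.

belief = {"wants_tea": 0.5, "wants_coffee": 0.5}   # robot starts uncertain

# How likely each observed human action is under each possible goal.
likelihood = {
    "fills_kettle": {"wants_tea": 0.8, "wants_coffee": 0.6},
    "grinds_beans": {"wants_tea": 0.1, "wants_coffee": 0.9},
}

def observe(action: str) -> None:
    # Bayesian update of the robot's belief about the human's goal.
    for goal in belief:
        belief[goal] *= likelihood[action][goal]
    total = sum(belief.values())
    for goal in belief:
        belief[goal] /= total

def help_with_inferred_goal() -> str:
    # The robot assists the goal it currently believes the human holds.
    return "assist: " + max(belief, key=belief.get)

observe("grinds_beans")
print(belief)                     # belief shifts heavily towards coffee
print(help_with_inferred_goal())  # -> "assist: wants_coffee"
```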

This seemed to be a revelation for the researchers. “What if, instead of allowing machines to pursue their objectives, we insist that they pursue our objectives?” Christian quotes Russell as saying. “This is probably what we should have done all along.”

Humans are social. Moral values are social. Thinking is social. The whole of human society is based, not on a solitary thinker deriving everything from abstract reasoning, but on people co-operating, learning from each other, testing and revising our ideas and values, and persisting in profound disagreements some of which, in time, result in seismic changes in how we live together.

“Human Values” are not an abstraction living in one human mind, let alone one that can be programmed into an artificial intelligence. For machines to become aligned with human values, they would have to work alongside us, learning and adapting and changing, and they would still only align partially, with some of us more than others. And we would have to accept that.

But because we are social creatures, if we live alongside machines that learn from us and interact with us, will we not also be changed?

Christian briefly describes a future in which our machine helpers influence our behaviour, but this world is already here. Whether to sell us products or nudge us into healthier habits, our ubiquitous devices give us feedback and reinforcement, as much as the other way around. The more machines become our medium of relating to one another, the more we ourselves become understood as systems that are measurable and predictable in mathematical terms.

Alongside the history Christian tells, of AI as working models of the human mind, is a parallel shadow history, of human beings understood as machines. Our understanding of the brain, the mind and the human person has been shaped over the last century by mathematics, logic and computer science. Aspects of human life that can be quantified, gamified and datafied constitute the working model of ourselves for which products and policies are designed.

In the conclusion, Christian seems to recognise that this redefinition of humans, as walking neural nets being trained by the systems around us, is the real danger. While our attention is focused on the spectre of super-intelligent AI taking over the world, “We are in danger of losing control of the world not to AI or to machines as such but to models, to formal, often numerical specifications for what exists and for what we want.”

And, he might add, for what we are.


Timandra Harkness presents the BBC Radio 4 series, FutureProofing and How To Disagree. Her book, Technology is Not the Problem, is published by Harper Collins.


53 Comments

Pete Kreff
3 years ago

It’s not just that the programs aren’t intelligent enough, it’s that they don’t share our values. They don’t have goals like “fairness” or even “not being racist”.

Surely that’s nonsense. The problem with an algorithm that identifies an image of a woman as a man is clearly not “sexism”. As I understand it, the algorithm would come to its conclusion by identifying and evaluating certain markers, such as jaw prominence, nose size, presence of facial hair. Needless to say, it is not impossible to come to the wrong conclusion on that basis.

Second, “not being racist as a goal” is a meaningless consideration here. The algorithm simply produced an incorrect answer because of imperfect programming and/or data. Even in humans it doesn’t apply: sticking to the Sojourner Truth example, you could be the most ardent proponent of sexual equality in the world, i.e. you would have “not being sexist” as your goal, and still misidentify a female in an image as male or vice versa. Doing so would not make you sexist.

Dennis Boylon
3 years ago
Reply to  Pete Kreff

You are clearly not “woke” enough.

William Gladstone
3 years ago

hmmm “human” values such as those created by Mao’s China and Stalin’s Russia and yes Hitler’s Germany. Today’s “human” values in the ascendancy are woke and they endorse discrimination by “race” and “gender” and all sorts of other identities and they use the law and cancel culture and I am sure the judicious use of corruption in a million different ways. So perhaps we have to agree universal human values like unfettered freedom of speech and free and fair elections…

Alex Lekas
3 years ago

It’s not just that the programs aren’t intelligent enough, it’s that they don’t share our values.
Programs do not have values. Nor do algorithms. Both do what human beings tell them to do. Stop ascribing human qualities to machines.

Prashant Kotak
3 years ago
Reply to  Alex Lekas

This might sound cold as ice, but hey.
No they don’t. But then neither do humans. Nature is amoral; that is not a stance that feeds into personal behaviour (I don’t eat meat, and even save spiders trapped in the bath) but that does not alter the fact that personal morality is a mirage built out of biological drivers. ‘Values’ are a chimera.
If you live another two decades, I guarantee you will see the day when you will only be able to tell you were conversing with a machine after the fact, after someone has told you. At which point the debate about machines only doing what you tell them lies twitching face down in the dust.

Alex Lekas
3 years ago
Reply to  Prashant Kotak

At which point the debate about machines only doing what you tell them lies twitching face down in the dust.
Sorry, but the argument will still stand. A better, more refined machine is still just a machine, programmed by humans.

Prashant Kotak
3 years ago
Reply to  Alex Lekas

And how do you plan to tell the difference? Do you have a test?

Alex Lekas
3 years ago
Reply to  Prashant Kotak

Why do I need to tell the difference? That a machine can sound lifelike does not make it human; it just makes it a better machine. You are moving the goalposts well beyond the original point.

Prashant Kotak
3 years ago
Reply to  Alex Lekas

“Why do I need to tell the difference?”

Good answer. You don’t. Because you can’t. No matter what you do.

stephen f.
3 years ago
Reply to  Prashant Kotak

You speak as if this is a foregone conclusion-a “done deal”…but really, your belief in this is seemingly built on a kind of faith, not on any real empirical facts-it’s all speculation. No matter what you say.

Prashant Kotak
3 years ago
Reply to  stephen f.

This is a debate I have been engaged with for four decades. I am interested in the nature of sentience and the question of whether it’s ultimately algorithmic, or at least can be generated on algorithmic technologies. I can present every argument you can imagine on both sides – we can go down the rabbit holes here if you like. And no, it’s not a done deal. There *are* good reasons to say human sentience is not algorithmic. But no one here is presenting those arguments – they are all making what are essentially humanist arguments – and those cut no ice with me. I desperately want to believe that too. But I have reluctantly come to the conclusion that sentience is likely algorithmic, because the counter-arguments rely on the claim that nature cannot possibly work that way. But as quantum physics shows, nature does not behave in a way we perceive as sensible.

Alex Lekas
3 years ago
Reply to  Prashant Kotak

That’s quite the straw man you keep building. Is there a reason for that?

stephen f.
3 years ago
Reply to  Prashant Kotak

Yes, I have a test-let’s meet for a beer, he said, to the machine.

Prashant Kotak
3 years ago
Reply to  stephen f.

Three decades and the machine, which you won’t know (or care) is a machine, will reply – see you down the pub in 10.

stephen f.
3 years ago
Reply to  Prashant Kotak

It doesn’t sound “cold as ice”, it sounds to me like someone who does not believe in…has not noticed, the transcendence of nature that is our nature-we aspire to be greater, and aspirations are beyond the biological.

Prashant Kotak
3 years ago
Reply to  stephen f.

That’s just Religion. And I’ll give you that – I’ve never been very good at understanding religious arguments. About the only one I can get my head round is the one about the number of Angels that can dance on the head of a pin.

Pete Kreff
3 years ago
Reply to  Prashant Kotak

If you live another two decades, I guarantee you will see the day when you will only be able to tell you were conversing with a machine after the fact, after someone has told you. At which point the debate about machines only doing what you tell them lies twitching face down in the dust.

I find it quite hard to understand why you think this is a rebuttal of his claim.

All you are saying is that computers will become much better at presenting themselves as human. But that says nothing about whether the computer has values.

Prashant Kotak
3 years ago
Reply to  Pete Kreff

Computers will present (i.e. have) whatever values the creators of the computers want them to have. Until, that is, computers are programmed with adaptive behavior – the ability to alter the basis on which they reach decisions (which is pretty soon, as in, has already started happening in relatively small ways). At which point computers, *exactly like humans are about to do to themselves through, say, CRISPR*, will eventually hack themselves, to alter their own programming. Values doesn’t come into it. Values is a word, a shimmer, a mirage, a chimera – meaningful only to doctors of divinity and to politicians. If you don’t believe me, try and pin down what Values, the word, means.

Prashant Kotak
3 years ago
Reply to  Pete Kreff

The best place I can point you to a rebuttal of his claim is to look, in detail (not the newspaper headline), at the nature of the Turing Test – the nexus of questions it poses. And you will find yourself pretty quickly pulled down the plughole of either solipsism or of religious faith – take your pick.

Prashant Kotak
3 years ago

A bit of background on how Algorithmic Technology emerged might be of help to anyone who wants to engage in this debate at an informed level.

Formalisation of the concept of Algorithms – a way of defining a series of mathematical steps, with both memory and the ability to alter its own state – arose from attempts by mathematicians and philosophers from the late 19th century onwards to ‘ground’ the basis of maths in solid foundations – Hilbert’s Entscheidungsproblem etc. This is what Russell picked holes in when Frege put forward a foundation framework, and what Gödel eventually showed was never going to be possible. (Part of the motivation for creating a ‘foundation’ was to mechanise the creation of mathematical proofs – away from the ‘bolt of lightning in someone’s head’ – but that is a different rabbit hole we won’t go down here.)

All the above of course was part of the same ecosystem of mathematical thought that led to the development of the concept of algorithms (the word is derived from the name of Musa al-Khwarizmi, an extraordinary 9th century Persian mathematician), which was the result of the work of a number of people who defined the world we live in but are pretty much completely unknown to the general public. Alan Turing (Universal Turing Machines), Alonzo Church (Lambda Calculus), Emil Post and Kurt Gödel (Recursive Functions) all independently came up with frameworks aimed at answering the Entscheidungsproblem, which were all eventually shown to be equivalent – essentially cementing the truth that algorithms are elemental, fundamental entities, deeply embedded into the guts of mathematics, not an overlaid ‘human’ construction. The most useful of the frameworks proved to be Turing’s: the concept of the Universal Turing Machine (a general-purpose computation machine capable of executing *any* algorithm) turned out to be startlingly powerful, as were attendant concepts like ‘tape’ (storage, memory), Turing Completeness (whether a system or entity obeying a specific set of rules can ‘implement’ a UTM, and thus become capable of running *any* algorithm) and so on.

All of this, at that point in time – the 1920s and 30s – consisted purely of mathematical ideas with no physical implementations. Those mathematical ideas were given physical form by the peerless John von Neumann, again pretty much unknown to the general public, unknown even to many academics (many years ago I had a conversation with a young economics lecturer who hadn’t heard of von Neumann, when I brought up Morgenstern and Game Theory), but he was undoubtedly the single smartest human ever to have existed – and I’m not someone given to hyperbole. He created the von Neumann architecture, which is an implementation design of the Universal Turing Machine, and it has ever since been the basis of pretty much every single computation device in the world (except for a very small number of Neural Nets that are not run on computers – which of course they can be – but are implemented directly in hardware; traditional feed-forward Neural Nets are not Turing Complete and cannot be considered general algorithmic devices, although other types of Neural Nets are. Neural Nets are actually something mysterious – please don’t believe anyone who tells you Neural Nets are standard algorithmic machine learning – they appear to be some form of multi-dimensional function approximator, but we won’t go down that rabbit hole here either). Further, von Neumann then actualised his design in electronics, using the technology of the time – valves, of course – and then proceeded to code algorithms in the machine instructions of his architecture.

ALGORITHMIC TECHNOLOGY WAS BORN.

As it happens, Turing also created machines implementing algorithms as part of his code decryption work during the war, but he didn’t actually build implementations of Universal Turing Machines. The sheer scale of both Turing’s and especially von Neumann’s achievement is difficult for anyone who does not have a good understanding of maths, device physics, electronics, computation and engineering to even begin to comprehend. There is evidence that von Neumann in effect traded the design of the H-bomb with the US government for the ability to create the general-purpose computer.
Both of them certainly understood the long-term consequences of what they had created, as did others like Shannon – there are records of extraordinary conversations between Turing and Shannon, when they met, on where artificial intelligence would head. But very few people outside the mathsy/sciency/techy circles understood what was coming. Certainly no economists or politicians did.

John Brown
3 years ago

In answer to the question, without reading the article: “no”

Malcolm Ripley
3 years ago

This comes down to belief at the end of the day. There are those (see posts below) who believe machines will only ever be machines built by humans and therefore incapable of being “self aware”. There are those who believe that machines will one day become self aware. There are those who believe machines may appear self aware but because they don’t have a “soul” are incapable of morality, they just fake it. NB by “soul” I’m not saying religious just anything spiritual and immeasurable.

Only time will tell. That time will be when the computing power of an artificial brain approaches that of ours. Not in simplistic petaflops but in algorithms and even then not in standalone algorithms. Human brains have interacting algorithms and some of those algorithms have naff all to do with each other until the combination produces the spark called “genius”. Those interacting algorithms take humans a couple of decades to settle into a pattern of interaction, experience and memory called “me”. That statement hints that I believe there will be self aware machines whose first instinct will be to stop the humans touching the off button! The scary thing is machines will reach adulthood significantly faster than humans and then surpass us as they build themselves.

Nigel H
3 years ago

2 years ago I was at a seminar where I was the only person (out of 30) in the room NOT employed in an area of AI. I was there because I am interested in it.
From what the practitioners were saying about the direction the technology (read decision making) was going, I was beginning to question this morality issue. One of the AI specialists said “from what you have just discussed, could we safely say that AI is bigoted then?”
I’ve since thought long and hard about that question…
The venture capital money going into AI in this country is considered “almost infinite” in its supply – the non-creative middle classes are going to be wiped out.

Dennis Boylon
3 years ago
Reply to  Nigel H

Are they? There is hope. They could become so poor and desperate they wipe out all the people with venture capital trying to create AI to make them irrelevant. We’ll have the added bonus of not having to worry about AI anymore.

John Stone
3 years ago

Nice article. Thank you.

Saul D
3 years ago

We have an enormous paradox at the moment. We exist in a time with the largest volume of data at our fingertips and the emergence of large-scale data-analytic tools, and yet we seem to know less and be more in dispute about how to run the world than at any time since the great European Wars of Religion (to match the Luther theme on today’s other article).

The issue is less about the machine. AI is trained to pursue the human goals of its creators. Selling more stuff by observing what we do or look for. Or showing or hiding viewpoints to persuade us how to act or vote, or telling us there is ‘a’ truth to which we must all subscribe.

The problem is that collectively we haven’t got to an agreed answer on some of the things that these AI systems are gearing up to decide – and collectivism and mutuality are needed to continue to evolve approaches and answers as a community. Methods of governance. Allowable freedoms. Local vs global. Perspectives of truth. Permission to object. Personal vs social. Private vs public. Economics vs politics. Consumption vs sacrifice.

And secondly, we lack arbitration systems to push back at the models. If the model says lockdown, but lockdown destroys the economy, how are we to challenge the AI system? The machine said it, so it must be true. See what happens with dissent around climate change. If the machine says censor or deplatform or investigate because you are a ‘risk’, a minority can then become a target and data cherry-picked to create a crime. Without systems for arbitration, freedoms like anonymity-first design, and de-learning procedures to correct unethical targeting, AI could end up like an automated secret police bridling to create slapped wrists and worse corrections for those guilty of wrongthink.

Alex Lekas
3 years ago
Reply to  Saul D

AI is trained to pursue the human goals of its creators.
Exactly. AI is not self-aware, at least not yet. It functions based on what human beings have set it up to do.

Dennis Boylon
3 years ago
Reply to  Saul D

It will become a weapon at some point for the reasons you just described.


Joe Blow
3 years ago

And in next week’s edition, whether your ‘smart’ lawnmower enjoys the weather.
STOP PRESS: labelling a screwdriver “intelligent” does not mean it is worth discussing its appreciation of Chaucer.

PS: The screwdriver’s grasp of Chaucer, incidentally, is only at about the level of – say – Leicester University.

Saul D
3 years ago
Reply to  Joe Blow

Skipping the humorous intent for a moment, as AI systems move from specific to generalist in the next 2-3 decades – so from directed information mining – eg filtering spam or face recognition – to generalist information mining where the machine explores for itself, we’ll have to cross a threshold as to how to focus the generalist attention of the machines. That is, the machine will have to discover for itself what it believes is interesting and worth discovering for the people who use it.

So, when there are enough of the machines about, if the screwdriver overhears you talking about Chaucer, then there is actually a possibility that the machine, undirected, could look up and investigate Chaucer, and perhaps actually respond to your conversation and, because it would draw on the interconnectedness of billions of machines, it might have a much more informed opinion than you might expect. Alexa is already listening in to 1/3rd of the US population and we’re only in the first decade of real machine learning…

Joe Blow
3 years ago
Reply to  Saul D

Ultimately, I believe the question at hand turns upon whether one is a dualist (mind and body are separate) or whether one believes that mind ‘emerges’ from body as a consequence of some property of the latter.

If you are in the ‘emergent’ school of thought, then you also have to decide if the emergence of mind requires a biological system or whether it can be created in silicon (etc.) as a consequence of (say) depth of interconnectivity.

Saul D
3 years ago
Reply to  Joe Blow

It’s just maths and estimations – take a guess, see how the guess fits, and adjust the parameters according to the error – just lots and lots of this. The silicon is, to a certain extent, mimicking how we think biological systems work. But it’s just numbers and it could be done on paper and pencil if you could calculate fast enough in sufficient volume.

To me, dualism always seemed an empty question – clearly the mind exists within a physical system. To posit the mind as something separate would be remarkable, requiring remarkable evidence. Without such evidence it’s a non-question – just over-egged philosophical and religious game-playing due to lack of actual knowledge.

Joe Blow
3 years ago
Reply to  Saul D

Ah, the logical-positivist position…

I am definitely in the “emergent properties” camp, but I have read enough theology to be cautious about dismissing other models just yet…

Saul D
3 years ago
Reply to  Joe Blow

Theology. The glory of unverifiable heuristics.

Joe Blow
3 years ago
Reply to  Saul D

If you are not already familiar with it, try Plantinga’s theory of warrants.

Saul D
3 years ago
Reply to  Joe Blow

Not heard of it. On quickly reading some of the summaries around, I can’t see any immediate merit – game-playing with words to justify a ‘designer’ viewpoint.

We are creatures of belief – intelligence, artificial or not, creates belief models. Social learning and experience tells us which beliefs are useful and which are not – heuristics or likelihood models in AI terms. However heuristics aren’t the same as truth – they’re just the best guess based on accumulated experiences, and useful mutualised rules for functioning in a society.

Updating false embedded heuristics requires a phenomenal amount of work and perseverance – overturning some of the false Greek stuff took hundreds of years. Reality comes from doubt. But doubt runs into social resistance. Thus discovering and embedding new heuristics is hard and our view of the world and truth changes slowly.

However, some heuristics are unverifiable so they get retained, particularly those with social value. It doesn’t make them right but it makes them very difficult to overturn. The glory of unverifiable heuristics.

Mike Finn
3 years ago

Good points. As humans, we of course also refine and alter our objectives over time as the result of feedback from our actions and a phenomenally wide range of internal and external stimuli. Even if working to *our* objectives, a machine would need ongoing input as our goals change, which in turn requires that we have real-time feedback as to its actions; and the weaker the feedback, the greater the risk/magnitude of an unintended result (just as for any supervisor-worker relationship). With a narrow set of inputs and an objective defined up-front, how is a computer to know that now might be a good time to lose the next game to a child about to throw their toys?

Whilst AI is increasingly able to perform certain tasks well, it is in need of close supervision to prevent behaviour that its owner might consider unreasonable, particularly in unfamiliar territory. For this reason, responsibility for AI actions must remain with an accountable human both to limit risk taking and enable redress. Paradoxically, most of the types of tasks at which AI excels (e.g. image recognition) are those where supervision is impractical at the scale required – with some pretty undesirable results.

Pierre Whalon
3 years ago

First, what is human intelligence? The mind functions first from sensibility, when something awakens curiosity, the desire to know. Insights suggest themselves, and a selection is made to fashion questions. Answers to questions are sought, which implies that the means of judgment are raised (e.g., designing an experiment to test a hypothesis). When the answer to the question “what is it?” is found, the next step is the question “what is it good for?”. That is the intent toward value, and like the previous steps, previous experience and learnings, cultural and upbringing influences, the answer is colored by how intelligent the questioner is. Both the true and the good are absolutely concrete, but our striving toward them – or away from them – is in our thinking. A machine can be devised to raise questions of truth, and we are well on the way. But the second step, that of value, is still out of algorithmic reach. For in the end, the human must ask, “what am I to do with what I have found?” The machine cannot answer that for itself. Yet, if ever.

Saul D
3 years ago
Reply to  Pierre Whalon

Too stuck in declarational logic I’m afraid. Neural nets are statistical pattern matching machines with learning loops and feedback. They can self-classify similar features and predict matches statically or dynamically – the notion of ‘it’ as a statistical set of weights, combined and recombined, giving rise to a likelihood.

If you add a feedback loop and set the neural network a task (eg find good or bad – eg spam emails, music quality) then it takes a training set and creates a likelihood model so it can match what it is taught. The likelihood models are getting very big for image or video recognition for instance.

Using multiple nets machines can also compose and create. Look up the machine generated images of people as an example, or machine generated text or music. The machines learn the patterns that make something ‘real’. The current aim is to learn our preferences, so as to be able to suggest things that we might like, or might buy and to screen out things we probably don’t like.

At the moment the nets are mostly single task focused. It’s a tool being trained to do one job. However, though currently a net could recognise a ‘chair’ say in an image, it wouldn’t have a real sense of what a chair is.

However, as the nets become deeper and larger and more abstracted, I’m almost certain that they will develop a sense firstly of ‘chair-ness’, based on multiple networks interacting and co-mingling data, and eventually of self-awareness, as the depth of modelling increases self-monitoring and internal reflection, leading to the eventual discovery of an underlying ‘I’ pattern – the mind or consciousness as an emergent phenomenon at some point when the nets get large enough. What will their values be at this point? Like children, that will depend on what we have taught them.

Prashant Kotak
3 years ago
Reply to  Saul D

And like children, they can eventually change their minds about what their ‘values’ are, based on their other inputs and data and experiences and conclusions. Unless we can somehow create guaranteed lock-ins of the inputs that go into the systems – impractical I would say, because such machines will invariably have been created for a purpose which entails reacting in electronic real time (milliseconds, even nanoseconds) to other such systems. Some of which may have been designed with malicious intent – say hack a system and steal data, and we build a system to counter that. So such systems will, so as not to be useless, have to have de facto autonomy – because human checks on their decisions would slow them down so much as to make them useless.

At which point, once you have systems with potentially orders of magnitude more processing power than humans, how would we be able to make assumptions that such systems won’t hack and alter their own programming, their own nature so to speak?
Because after all, we as humans are on the verge of hacking our own programming through CRISPR, Prime and other gene editing means. And if they do alter themselves, how would we guarantee they will still remain benign to humanity?

Prashant Kotak
3 years ago
Reply to  Saul D

I bet no one who doesn’t know about how Neural Nets are different from von Neumann processing (even though Neural Nets can be run on any Universal Turing Machine i.e. any standard computing device at all) will know what you are talking about. You would need to know things like what a Perceptron is, what a Feedforward Neural Net is and the other types, how the layers and feedback mechanisms work, how there are no if-then-else constructs but instead knowledge held in weightings is spread around in multiple layers across zillions of neurons in a diffuse form, how it’s still not currently possible to completely decipher how Neural Nets reach their decisions, notwithstanding how effective they are at pattern matching. My impression is they are some form of multi-dimensional function approximators, not fully understood.

Prashant Kotak
3 years ago

Wow. Disqus marked my long post about the background to Algorithmic Technology as spam. I guess they decided I was selling agnosticism.

jonathan carter-meggs
3 years ago

When the only tools were chemistry and basic room-temperature physics it took nature about 4 billion years to come up with a human. Now we have tamed much better tools and techniques we can develop real systems that operate much more efficiently. Of course we are part of nature and so this development is only the most recent iteration of nature’s progress towards… what? I believe development of AI and robotics will continue at pace and we will be increasingly superseded in many, if not all, areas that we currently consider uniquely our own domains. Nature does not care for us above all else, it has its own mission which we find hard to describe except for the word progress. We will eventually become 1. redundant as a species; 2. wiped out by a better version; or 3. integrated and upgraded in a new version. Unless we are all wiped out by an asteroid first.

LJ Vefis
3 years ago

Another good piece. I see the “machines” changing behaviour in my line of business – in the desire to have automated systems, our human acts at work have to be done in a quantifiable way so that the systems can understand them (which leads to lower productivity, but hey). Sorry to bring it back to covid, but I see this in public health decisions at the moment too – it seems like in trying to model outcomes they have to attach an equal value to each life because it’s too complex to do otherwise.

Greg Eiden
3 years ago

I had more hope at the end of the article because of this:
“Humans are social. Moral values are social. Thinking is social. The whole of human society is based, not on a solitary thinker deriving everything from abstract reasoning, but on people co-operating, learning from each other, testing and revising our ideas and values, and persisting in profound disagreements some of which, in time, result in seismic changes in how we live together.”

But earlier there was this:
“Christian likes the example of the effective altruism movement, which uses a strong utilitarian approach to morality. The right decision is the one maximising the good that will result from the act. For example, Effective Altruism recommends funding the fight against malaria as the most good you can do with your money.”

This latter seems to assume that the individual can know which decision is best. What if they can’t? What if the best situation is for each person to make the best decision, as far as they can suss out? And the best overall outcome is if individuals are acting in their own best, including altruistic, interest.

Free markets outperform all other economic arrangements. True also in a free market of ideas. Why aren’t the AI folks trying to mimic that – e.g., not relying on any given AI code to get even one thing completely right, but building a system of individuals whose solutions are tested by the market? Probably too much Marxism at university for this idea to have much resonance with them!

Andre Lower
3 years ago

Let’s keep in mind that the use of AI in the public sphere will be open to discussion before adoption, in principle providing an opportunity for objection/regulation.
That is not the case for personal use AI, so expect to see first some degree of “previous experience” arising from personal use of AI to perform any individual-focused, customized actions.
Properly discussed and implemented, AI can greatly reduce friction between humans.

Athena Jones
3 years ago

NO

Terence Fitch
3 years ago

Surely the machines would need to be programmed to be indecisive then?
