Last year, headlines were made when, for the first time, a computer – AlphaGo, developed by the Google-owned artificial intelligence pioneers DeepMind – beat the world’s finest player of Go, the ancient Chinese game with more possible board positions than there are atoms in the observable universe.
Much less reported this year, but far more significant, was what happened next inside DeepMind’s laboratories. Their new prototype, AlphaGo Zero, was not taught the game by humans: it was given only the basic rules and left to learn by playing games against itself. It was then pitted against the original AlphaGo, and won by 100 games to zero.
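The idea of learning purely from self-play can be illustrated in miniature. AlphaGo Zero itself combines deep neural networks with Monte Carlo tree search, which is far beyond a few lines of code; the sketch below is only a toy analogue, assuming a simple tabular learner that teaches itself tic-tac-toe values given nothing but the rules of the game.

```python
import random

# Toy illustration of learning by self-play: a tabular learner estimates the
# value of tic-tac-toe positions purely from the outcomes of games it plays
# against itself. (AlphaGo Zero uses deep networks and tree search instead.)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def self_play(games=2000, epsilon=0.2, alpha=0.5, seed=0):
    """Play games against itself; learn value[state] for the player who
    just moved into that state (+1 win, -1 loss, 0 draw)."""
    rng = random.Random(seed)
    value = {}
    for _ in range(games):
        board, player, history = "." * 9, "X", []
        while True:
            moves = legal_moves(board)
            if rng.random() < epsilon:          # explore a random move
                move = rng.choice(moves)
            else:                               # exploit learned values
                move = max(moves, key=lambda m: value.get(
                    board[:m] + player + board[m + 1:], 0.0))
            board = board[:move] + player + board[move + 1:]
            history.append((board, player))
            w = winner(board)
            if w or not legal_moves(board):
                # Game over: nudge every visited state toward the outcome.
                for state, mover in history:
                    target = 0.0 if w is None else (1.0 if mover == w else -1.0)
                    old = value.get(state, 0.0)
                    value[state] = old + alpha * (target - old)
                break
            player = "O" if player == "X" else "X"
    return value
```

No human games are ever shown to the learner; everything it knows about good and bad positions comes from the results of its own play, which is the essence of the self-play approach described above.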
This month, a successor program, AlphaZero, taught itself chess from scratch and, after just four hours of self-play, defeated Stockfish, the world’s strongest chess engine, using tactics never seen before in the game’s history. The implications are both exciting and frightening…
…Exciting because of the potential problem-solving capacity of technology that is able to learn for itself and, also, to test theories that humans have never considered…
…But frightening because the more ingrained in our world AI technology becomes, the more power we invest in computers capable of rapid, independent learning and decision-making that is beyond our comprehension – and, to some degree, our control.
And just imagine advanced AI systems being programmed by a hostile power to test for and exploit weaknesses in our own technical infrastructure, or indeed to hack into inferior AI systems attached to that infrastructure, and re-program them to wreak havoc.
It does not bear thinking about, which may be why it’s been so little reported. Perhaps we need a computer to think it through for us.