There’s nothing new about faking images. Photographic hoaxers have been up to their tricks for a very long time. For instance, it is now over a hundred years since the Cottingley Fairies made their debut – a blatant fake to our eyes, but one which fooled many people at the time, including Sir Arthur Conan Doyle.
In theory, digital image technology should have made it easier for hoaxers to gull the public – instead it’s made us more sceptical. When jaw-dropping images are circulated on social media, there’s always a debate as to whether they’ve been ‘Photoshopped’.
Furthermore, if need be, expert analysis can reveal whether an image has been digitally manipulated. Or at least, it could. In a piece for Brookings, Alina Polyakova and Chris Meserole warn that the application of artificial intelligence to image manipulation is taking us into a new era of undetectable ‘deep fakes’:
“Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone. However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones…
“Deep fakes and the democratization of disinformation will prove challenging for governments and civil society to counter effectively.”
‘Deep learning’, by the way, is a type of machine learning (itself a type of artificial intelligence). A ‘generative adversarial network’ in this context pits two AI systems against each other – one that generates fake images, the other that tries to spot them. The first system uses feedback from the second to refine its fakery until the second system can’t spot it anymore.
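For readers curious what that adversarial loop actually looks like, here is a minimal sketch assuming the PyTorch library is available. It is an illustration of the general principle rather than anyone’s production system: instead of images, a toy ‘generator’ learns to mimic a simple one-dimensional distribution while a ‘discriminator’ tries to tell its output from the real thing.

```python
# A minimal sketch of a generative adversarial network (GAN).
# Assumes PyTorch; the toy data here is a 1-D Gaussian rather than images.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: turns random noise into candidate "fake" samples.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: tries to tell real samples from generated ones.
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # "Real" data: samples drawn from a Gaussian with mean 4.
    real = 4.0 + 1.25 * torch.randn(64, 1)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Train the discriminator to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator (it wants its
    # fakes to be labelled 1), using the discriminator's feedback.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# After training, the generator's output should cluster around 4.
print("generated mean:", generator(torch.randn(1000, 8)).mean().item())
```

The same back-and-forth, scaled up to photographs or video, is what makes the resulting fakes so hard to distinguish from the real thing: the generator keeps improving precisely on whatever the detector can still catch.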
Polyakova and Meserole argue that if we can no longer rely on computer analysis to spot the fakes, then we will have to “invest in new models of social verification.”
Let’s say a photograph appeared of ‘Donald Trump’ accepting a suitcase full of money from ‘Vladimir Putin’. Assuming that no jiggery-pokery could be detected by technological means, one would have to ask whether the scenario was at all plausible – and, if so, whether the circumstantial details corresponded to known facts about the individuals depicted in the images (for instance their movements over the relevant time period).