
What will ‘deep fake’ imagery do to politics?

Progressive surveillance?


June 4, 2018

There’s nothing new about faking images. Photographic hoaxers have been up to their tricks for a very long time. For instance, it is now over a hundred years since the Cottingley Fairies made their debut – a blatant fake to our eyes, but one which fooled many people at the time, including Sir Arthur Conan Doyle.

In theory, digital image technology should have made it easier for hoaxers to gull the public – instead it’s made us more sceptical. When jaw-dropping images are circulated on social media there’s always a debate as to whether they’ve been ‘Photoshopped’.

Furthermore, if need be, expert analysis can reveal if an image has been digitally manipulated. Or at least, it could. In a piece for Brookings, Alina Polyakova and Chris Meserole warn that the application of artificial intelligence to image manipulation is taking us into a new era of undetectable ‘deep fakes’:

“Although computers have long allowed for the manipulation of digital content, in the past that manipulation has almost always been detectable: A fake image would fail to account for subtle shifts in lighting, or a doctored speech would fail to adequately capture cadence and tone. However, deep learning and generative adversarial networks have made it possible to doctor images and video so well that it’s difficult to distinguish manipulated files from authentic ones…

“Deep fakes and the democratization of disinformation will prove challenging for governments and civil society to counter effectively.”

‘Deep learning’, by the way, is a type of machine learning (itself a type of artificial intelligence). A ‘generative adversarial network’ in this context pits two AI systems against each other – one generates fake images, the other tries to spot them. The first system uses the second’s feedback to refine its fakery until the second can no longer tell the difference.
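To make the idea concrete, here is a minimal sketch of that forger-versus-detector loop. It is purely illustrative: the framework (PyTorch), the layer sizes and the image dimensions are my own assumptions, not anything described in the article or the Brookings piece.

```python
# Illustrative sketch of a generative adversarial network (GAN) training step.
# Assumes PyTorch; sizes are arbitrary (e.g. flattened 28x28 greyscale images).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784

# The "forger": turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# The "detector": scores how likely an image is to be real.
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the detector to tell real images from the forger's fakes.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fakes), fake_labels))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the forger to fool the detector: its feedback is how
    #    confidently the detector labels its output as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Repeated over many rounds, the forger’s output becomes progressively harder for the detector – and, by extension, for any similar automated analysis – to distinguish from the real thing, which is exactly the dynamic Polyakova and Meserole describe.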

Polyakova and Meserole argue that if we can no longer rely on computer analysis to spot the fakes, then we will have to “invest in new models of social verification.”

Let’s say a photograph appeared of ‘Donald Trump’ accepting a suitcase full of money from ‘Vladimir Putin’. Assuming that no jiggery-pokery could be detected by technological means, one would have to ask whether the scenario was at all plausible – and, if so, whether the circumstantial details corresponded to known facts about the individuals depicted in the images (for instance their movements over the relevant time period).

I think that public figures would have little to fear from ‘deep fake’ technology – indeed, they may have less to fear than they do now, because even real images of misbehaviour would be assumed to be false in the absence of strong corroborating evidence.

The real problem would be images involving unidentifiable individuals and locations that, therefore, don’t allow for corroboration. Images could be faked to show, say, police brutality that didn’t happen or violence on the part of (in reality) peaceful protesters, or any other scenario that suits some political narrative or other.

What would happen to politically-potent viral images that couldn’t be detected as fake or confirmed as genuine? Could we rely on the big social media companies to make the right call and take them down?

Even if we could, for how much longer will they have the power to do so?

“Blockchain technologies and other distributed ledgers are best known for powering cryptocurrencies such as bitcoin and ethereum. Yet their biggest impact may lie in transforming how the internet works. As more and more decentralized applications come online, the web will increasingly be powered by services and protocols that are designed from the ground up to resist the kind of centralized control that Facebook and others enjoy.”

Looking further into the future, a possible solution presents itself: if the distribution of images can become radically decentralised, then why not the capture of those images?

Imagine a world in which web-connected cameras are tiny, cheap and ubiquitous – enabling a constant visual record to be made of all public space by multiple networks, with no one in overall control. (Think of how many independent CCTV networks already operate in our cities, and then apply a visual surveillance equivalent of Moore’s Law to their proliferation.)

You couldn’t get away with concocting or doctoring images from one source; you’d have to do it for all the sources – which would be impossible unless they were all hacked simultaneously.

Such a world would, of course, mean an end to anonymity (and therefore privacy) in public spaces, but at least you could always get a witness.


Peter Franklin is Associate Editor of UnHerd. He was previously a policy advisor and speechwriter on environmental and social issues.

