It’s easy to sensationalise the threat of “deepfake” images. The fear is that computer-generated audio and video will soon be so convincing and so ubiquitous that the distinction between online truth and fiction will collapse altogether.
Of course, the technology is moving forward. We may be OK at the moment, but in ten years’ time will the news be overwhelmed with fake photographs and footage? Should we be legislating now to outlaw the impending flood of bogus pixels?
Legislative moves have already been made in the US Congress, but the CDEI report is sceptical.
In fact, the whole issue of provenance is likely to be our first and most effective line of defence. If no one credible is willing to put their name to an image then it won’t be seen as authentic.
It’s worth remembering that most news consists not of images, but words – and rendered as text these are supremely easy to fake and disseminate online. See, I only have to type the following words – “in a statement issued earlier today, the Prime Minister, Boris Johnson, said: ‘Wibble, wibble, wibble. I’m a little teapot. Wibble'” – and I’ve faked a quote, using a style cribbed from legitimate reports. I could tweet it out too, but it wouldn’t be believed because it’s obviously unbelievable. Similarly, a deepfake image of, say, the Prime Minister sticking up two fingers behind Angela Merkel’s back is not going to be believed (I hope).
But what about something much closer to the bounds of possibility – for example, a clip of a politician swearing at a junior member of staff?
Again we need to remember that stories of this kind can be easily concocted using words – and yet the news isn’t full of superficially plausible but completely made-up reports. That’s because credible journalists rely on checks like double sourcing to ensure they’re not constantly being played by fraudsters and fantasists.
Admittedly, it’s not an absolutely foolproof system – horrible mistakes are sometimes made. But, leaving aside the more subtle distortions like bias, spin and poor analysis, our public discourse mostly proceeds on the basis of things that have actually happened. As for deepfake images, these may make a bigger immediate splash than mere words, but they’ll generally provide more in the way of contextual details for factual verification (and, perhaps, digitally detectable traces of image manipulation).
Of course, if we lose the authentication service that quality journalism provides, then we’re in trouble – but that would be the case with or without deepfake imagery.