It’s easy to sensationalise the threat of “deepfake” images. The fear is that computer-generated audio and video will soon be so convincing and so ubiquitous that the distinction between online truth and fiction will collapse altogether.
Thank goodness, then, for a cool-headed new report from the Centre for Data Ethics and Innovation (CDEI). While taking the issue seriously, the authors inject a note of, um, reality:
Of course, the technology is moving forward. We may be OK at the moment, but in ten years’ time will the news be overwhelmed with fake photographs and footage? Should we be legislating now to outlaw the impending flood of bogus pixels?
Legislative moves have already been made in the US Congress, but the CDEI report is sceptical:
In fact, the whole issue of provenance is likely to be our first and most effective line of defence. If no one credible is willing to put their name to an image, then it won’t be seen as authentic.
It’s worth remembering that most news consists not of images, but words – and rendered as text these are supremely easy to fake and disseminate online. See, I only have to type the following words – “in a statement issued earlier today, the Prime Minister, Boris Johnson, said: ‘Wibble, wibble, wibble. I’m a little teapot. Wibble’” – and I’ve faked a quote, using a style cribbed from legitimate reports. I could tweet it out too, but it wouldn’t be believed because it’s obviously unbelievable. Similarly, a deepfake image of, say, the Prime Minister sticking up two fingers behind Angela Merkel’s back is not going to be believed (I hope).
But what about something much closer to the bounds of possibility – for example, a clip of a politician swearing at a junior member of staff?