Philosophers of knowledge sometimes invoke a thought experiment involving “Fake Barn Country”, an imaginary land which for some reason is scattered with lots of convincing barn facades but very few real barns. Somewhat unusually, a man in Fake Barn Country stands in front of a real barn. Does he know there is a barn in front of him? Academic opinions divide at this point, but it at least seems clear that the man himself is likely to be sceptical, assuming he also knows what country he is in.
Today, fake barns are replaced with fake videos and images. “Pics or it didn’t happen” is a social media cliché but may soon become an outdated one. The use of “deepfakes” is growing — and the opportunities they bring for epistemic chaos are legion. Entrepreneurial types have already used them to put celebrities in porn, impersonate a CEO’s voice to make fraudulent money transfers, and hack bank facial recognition security software. We are all living in Fake Barn Country now.
As well as the risk of being fooled by misleading clips — watch deepfake Tom Cruise coupling up with Paris Hilton, for instance — there are also obvious worries about exponentially increasing the amount of misinformation out there. In an age already saturated with it, many fear where this is all going, including in Beijing. This week, the New York Times reported that the Chinese authorities have recently introduced strict rules requiring that any deepfakes used “have the subject’s consent and bear digital signatures or watermarks”. It turns out there are some advantages to having one of the world’s heaviest internet censorship systems. As a commentator told CNBC without apparent irony: “China is able to institute these rules because it already has systems in place to control the transmission of content in online spaces, and regulatory bodies in place that enforce these rules”.
Libertarians in the US, meanwhile, suggest that any attempt to control deepfake technology must be an infringement on free speech. Their main point seems to be that the law has no right to punish speech simply on grounds that it is false. But this is to treat a deepfake as if it were just any old kind of false statement, when in fact it’s potentially a falsehood squared — not just in terms of what’s being said, but also in terms of who’s saying it. It’s hard enough these days to get people to stop believing that reptilian aliens are secretly in control of things, without also showing them convincing videos of their favourite politician saying it too.
Equally, unlike with verbal or written falsehoods, most people won’t have any alternative way of checking whether Nigel Farage really does endorse the existence of lizard people, or whatever. Deepfakes affect the viewer on a visceral level, hacking every hardwired visual and aural recognition system you have. And there’s another problem, too. Part of the worry about undisclosed deepfakes is to do with audiences’ unfamiliarity with the technology involved, leaving them especially vulnerable to deception. At the same time, however, once general public literacy about deepfakes improves, then without clear and reliable signposting, there’s a real chance people won’t trust anything they ever see again — even when it’s completely kosher.
Perhaps wisely fearing the onset of paralysing public distrust, the general media position now appears to be one of disapproval towards covertly introduced deepfakes in factual contexts. When it was discovered in 2021 that three lines of deepfaked audio had appeared in a documentary about chef Anthony Bourdain, mimicking his voice undetectably, there was a lot of subsequent criticism; to this day, reviewers don’t seem to know exactly which lines of his were faked. In comparison, the response to the use of an AI-generated simulacrum of Andy Warhol’s voice reading out his own diaries in Netflix’s The Andy Warhol Diaries has been relatively positive — worries presumably disarmed by the fact that the presence of deepfaked audio was announced early in the first episode (or perhaps by the fact that apparently Warhol sounded like a robot anyway).
In this second case, though, director Andrew Rossi was able to offer a blanket disclaimer relatively easily because he was faking audio for all of the diary entries in the series, not just some of them. Similarly, in a recent BBC documentary about Alcoholics Anonymous, the faces of all the AA member participants were deepfaked to preserve anonymity, while those of other participants were not — again allowing the filmmakers a relatively easy way to differentiate for viewers at the beginning of the film.
A harder case for documentary-makers is where they might wish to deepfake some elements more haphazardly — perhaps because, as with the Bourdain film, particular bits of video and audio are missing from the archive, yet narratively important — and don’t have any easy way to indicate to the viewer which bits in particular these are. The problem is that telling the audience in advance that some bits of a documentary are deepfaked, without specifying exactly which, will leave them back in Fake Barn Country, unsure whether to trust anything of what they are seeing, real or not. (Indeed, this was my accidental experience with the Warhol documentary. I’d heard that some bits of archival footage were deepfaked but missed the disclaimer about the voice in particular, so spent the whole series fruitlessly trying to work out which was real Andy and which was fake Andy.)
Faced with the haphazard deepfake, then, it might seem that the Chinese are right, and the use of a watermark is a good way forward — or at least, some visual aspect, simultaneous with the viewer’s experience of the deepfaked image or sound, indicating its lack of authenticity. In the 2020 HBO documentary Welcome To Chechnya, for instance, the visual effects supervisor Ryan Laney deliberately used “underblur” and a “halo” effect to indicate to the viewer that deepfaking was involved in representing the faces of persecuted Chechen gays and lesbians. He said: “There were times when we actually backed off on some areas, the machine did better or sharper, and we were concerned that we weren’t telling the audience that something was going on.”
I wonder if this requirement presents a bit of a conundrum for filmmakers, though. Effectively, deepfaking somebody in a documentary, when it isn’t a deliberate attempt to mislead people, is best understood as a form of dramatic re-enactment. It’s done in the spirit of immersing viewers more completely in a narrative — making them feel imaginatively like they are right there, watching history unfold. It’s a creative decision on a continuum with using actors in disguise to re-enact key historical moments, as indeed The Andy Warhol Diaries also does copiously. Through the use of actors, we “see” him lying with lovers or staring depressively into space, just as through the use of deepfaked audio, we “hear” him reading out his diary. In neither instance is the intent to deceive viewers, but rather to immerse them imaginatively in the moment. Even with the deepfaked alcoholics or Chechens, the documentary-makers’ goal was to get viewers to relate to them as naturalistically as possible, given anonymity constraints.
Yet while introducing something like a watermark throughout a deepfaked scene may well be optimal from the perspective of responsible epistemology, it is pretty unsatisfactory from the perspective of aesthetics. For any such constant visual reminder is bound to inhibit the viewer’s imaginative engagement in a scene. If, during a dramatic re-enactment using actors, a director added a contemporaneous sign on-screen saying “these are just actors”, the whole point of the exercise would be undermined. I don’t see how it’s much different with a reminder to viewers that what they are now seeing is faked.
If I’m right, then the perhaps unsurprising moral of this story is that, just like forged paintings, or cosmetic surgery, or Andy Warhol’s wig, deepfakes only really “work” where their status as fake is at least somewhat hidden — whether because it was mentioned only once to viewers and then half-forgotten about, or because it was never mentioned at all in the first place. What’s perhaps more surprising is that this seems true even where the intent is mainly to get viewers to imagine something. If the viewer is fully conscious that an image is faked, she will be less likely to believe it; but she will also be unlikely even just to suspend her disbelief in the way that imaginative immersion in a dramatic re-enactment requires. When it comes to deepfakes in documentaries, then, unless you can find a way to use them cleverly, it seems to me you should possibly save your money altogether. For some creative purposes, it’s pointless to keep reminding people they are in Fake Barn Country.