Welcome to Fake Barn Country (Jun Sato/WireImage)

Philosophers of knowledge sometimes invoke a thought experiment involving "Fake Barn Country", an imaginary land which for some reason is scattered with lots of convincing barn facades but very few real barns. Somewhat unusually, a man in Fake Barn Country stands in front of a real barn. Does he know there is a barn in front of him? Academic opinions divide at this point, but it at least seems clear that the man himself is likely to be sceptical, assuming he also knows what country he is in.
Today, fake barns are replaced with fake videos and images. "Pics or it didn't happen" is a social media cliché but may soon become an outdated one. The use of "deepfakes" is growing – and the opportunities they bring for epistemic chaos are legion. Entrepreneurial types have already used them to put celebrities in porn, impersonate a CEO's voice to make fraudulent money transfers, and hack bank facial recognition security software. We are all living in Fake Barn Country now.
As well as the risk of being fooled by misleading clips – watch deepfake Tom Cruise coupling up with Paris Hilton, for instance – there are also obvious worries about exponentially increasing the amount of misinformation out there. In an age already saturated with it, many fear where this is all going, including in Beijing. This week, the New York Times reported that the Chinese authorities have recently introduced strict rules requiring that any deepfakes used "have the subject's consent and bear digital signatures or watermarks". It turns out there are some advantages to having one of the world's heaviest internet censorship systems. As a commentator told CNBC without apparent irony: "China is able to institute these rules because it already has systems in place to control the transmission of content in online spaces, and regulatory bodies in place that enforce these rules".
Libertarians in the US, meanwhile, suggest that any attempt to control deepfake technology must be an infringement on free speech. Their main point seems to be that the law has no right to punish speech simply on grounds that it is false. But this is to treat a deepfake as if it were just any old kind of false statement, when in fact it's potentially a falsehood squared – not just in terms of what's being said, but also in terms of who's saying it. It's hard enough these days to get people to stop believing that reptilian aliens are secretly in control of things, without also showing them convincing videos of their favourite politician saying it too.
Equally, unlike with verbal or written falsehoods, most people won't have any alternative way of checking whether Nigel Farage really does endorse the existence of lizard people, or whatever. Deepfakes affect the viewer on a visceral level, hacking every hardwired visual and aural recognition system you have. And there's another problem, too. Part of the worry about undisclosed deepfakes is to do with audiences' unfamiliarity with the technology involved, leaving them especially vulnerable to deception. At the same time, however, once general public literacy about deepfakes improves, then without clear and reliable signposting, there's a real chance people won't trust anything they ever see again – even when it's completely kosher.
Perhaps wisely fearing the onset of paralysing public distrust, the general media position now appears to be one of disapproval towards covertly introduced deepfakes in factual contexts. When it was discovered in 2021 that three lines of deepfaked audio had appeared in a documentary about chef Anthony Bourdain, mimicking his voice undetectably, there was a lot of subsequent criticism; to this day, reviewers don't seem to know exactly which lines of his were faked. In comparison, the response to the use of an AI-generated simulacrum of Andy Warhol's voice reading out his own diaries in Netflix's The Andy Warhol Diaries has been relatively positive – worries presumably disarmed by the fact that the presence of deepfaked audio was announced early in the first episode (or perhaps by the fact that apparently Warhol sounded like a robot anyway).
In this second case, though, director Andrew Rossi was able to offer a blanket disclaimer relatively easily because he was faking audio for all of the diary entries in the series, not just some of them. Similarly, in a recent BBC documentary about Alcoholics Anonymous members, the faces of all the participating AA members were deepfaked to preserve anonymity, but not those of other participants, again allowing the filmmakers a relatively easy way to flag the distinction for viewers at the beginning of the film.
A harder case for documentary-makers is where they might wish to deepfake some elements more haphazardly – perhaps because, as with the Bourdain film, particular bits of video and audio are missing from the archive, yet narratively important – and don't have any easy way to indicate to the viewer which bits in particular these are. The problem is that telling the audience in advance that some bits of a documentary are deepfaked, without specifying exactly which, will leave them back in Fake Barn Country, unsure whether to trust any of what they are seeing, real or not. (Indeed, this was my accidental experience with the Warhol documentary. I'd heard that some bits of archival footage were deepfaked but missed the disclaimer about the voice in particular, so spent the whole series fruitlessly trying to work out what was real Andy and what was fake Andy.)
Faced with the haphazard deepfake, then, it might seem that the Chinese are right, and the use of a watermark is a good way forward – or at least, some visual aspect, simultaneous with the viewer's experience of the deepfaked image or sound, indicating its lack of authenticity. In the 2020 HBO documentary Welcome To Chechnya, for instance, the visual effects supervisor Ryan Laney deliberately used "underblur" and a "halo" effect to indicate to the viewer that deepfaking was involved in representing the faces of persecuted Chechen gays and lesbians. He said: "There were times when we actually backed off on some areas, the machine did better or sharper, and we were concerned that we weren't telling the audience that something was going on."
I wonder if this requirement presents a bit of a conundrum for filmmakers, though. Effectively, deepfaking somebody in a documentary, when it isn't a deliberate attempt to mislead people, is best understood as a form of dramatic re-enactment. It's done in the spirit of immersing viewers more completely in a narrative – making them feel imaginatively like they are right there, watching history unfold. It's a creative decision on a continuum with using actors in disguise to re-enact key historical moments, as indeed The Andy Warhol Diaries also does copiously. Through the use of actors, we "see" Warhol lying with lovers or staring depressively into space, just as through the use of deepfaked audio, we "hear" him reading out his diary. In neither instance is there any intent to deceive viewers, but rather to immerse them imaginatively in the moment. Even with the deepfaked alcoholics or Chechens, the documentary makers' goal was to get viewers to relate to them as naturalistically as possible, given anonymity constraints.
Yet while introducing something like a watermark throughout a deepfaked scene may well be optimal from the perspective of responsible epistemology, it is pretty unsatisfactory from the perspective of aesthetics. For any such constant visual reminder is bound to inhibit the viewer's imaginative engagement in a scene. If, during a dramatic re-enactment using actors, a director added a contemporaneous sign on-screen saying "these are just actors", the whole point of the exercise would be undermined. I don't see how it's much different with a reminder to viewers that what they are now seeing is faked.
If I'm right, then the perhaps unsurprising moral of this story is that, just like forged paintings, or cosmetic surgery, or Andy Warhol's wig, deepfakes only really "work" where their status as fake is at least somewhat hidden – whether because it was mentioned only once to viewers and then half-forgotten about, or because it was never mentioned at all in the first place. What's perhaps more surprising is that this seems true even where the intent is mainly to get viewers to imagine something. If the viewer is fully conscious that an image is faked, she will be less likely to believe it; but she will also be unlikely even just to suspend her disbelief in the way that imaginative immersion in a dramatic re-enactment requires. When it comes to deepfakes in documentaries, then, unless you can find a way to use them cleverly, it seems to me you should possibly save your money altogether. For some creative purposes, it's pointless to keep reminding people they are in Fake Barn Country.