Philosophers of knowledge sometimes invoke a thought experiment involving “Fake Barn Country”, an imaginary land which for some reason is scattered with lots of convincing barn facades but very few real barns. Somewhat unusually, a man in Fake Barn Country stands in front of a real barn. Does he know there is a barn in front of him? Academic opinions divide at this point, but it at least seems clear that the man himself is likely to be sceptical, assuming he also knows what country he is in.
Today, fake barns are replaced with fake videos and images. “Pics or it didn’t happen” is a social media cliché but may soon become an outdated one. The use of “deepfakes” is growing — and the opportunities they bring for epistemic chaos are legion. Entrepreneurial types have already used them to put celebrities in porn, impersonate a CEO’s voice to make fraudulent money transfers, and hack bank facial recognition security software. We are all living in Fake Barn Country now.
As well as the risk of being fooled by misleading clips — watch deepfake Tom Cruise coupling up with Paris Hilton, for instance — there are also obvious worries about exponentially increasing the amount of misinformation out there. In an age already saturated with it, many fear where this is all going, including in Beijing. This week, the New York Times reported that the Chinese authorities have recently introduced strict rules requiring that any deepfakes used “have the subject’s consent and bear digital signatures or watermarks”. It turns out there are some advantages to having one of the world’s heaviest internet censorship systems. As a commentator told CNBC without apparent irony: “China is able to institute these rules because it already has systems in place to control the transmission of content in online spaces, and regulatory bodies in place that enforce these rules”.
Libertarians in the US, meanwhile, suggest that any attempt to control deepfake technology must be an infringement on free speech. Their main point seems to be that the law has no right to punish speech simply on grounds that it is false. But this is to treat a deepfake as if it were just any old kind of false statement, when in fact it’s potentially a falsehood squared — not just in terms of what’s being said, but also in terms of who’s saying it. It’s hard enough these days to get people to stop believing that reptilian aliens are secretly in control of things, without also showing them convincing videos of their favourite politician saying it too.
Equally, unlike with verbal or written falsehoods, most people won’t have any alternative way of checking whether Nigel Farage really does endorse the existence of lizard people, or whatever. Deepfakes affect the viewer on a visceral level, hacking every hardwired visual and aural recognition system you have. And there’s another problem, too. Part of the worry about undisclosed deepfakes is to do with audiences’ unfamiliarity with the technology involved, leaving them especially vulnerable to deception. At the same time, however, once general public literacy about deepfakes improves, then without clear and reliable signposting, there’s a real chance people won’t trust anything they ever see again — even when it’s completely kosher.
Perhaps wisely fearing the onset of paralysing public distrust, the general media position now appears to be one of disapproval towards covertly introduced deepfakes in factual contexts. When it was discovered in 2021 that three lines of deepfaked audio had appeared in a documentary about chef Anthony Bourdain, mimicking his voice undetectably, there was a lot of subsequent criticism; to this day, reviewers don’t seem to know exactly which lines of his were faked. In comparison, the response to the use of an AI-generated simulacrum of Andy Warhol’s voice reading out his own diaries in Netflix’s The Andy Warhol Diaries has been relatively positive — worries presumably disarmed by the fact that the presence of deepfaked audio was announced early in the first episode (or perhaps by the fact that apparently Warhol sounded like a robot anyway).
In this second case, though, director Andrew Rossi was able to offer a blanket disclaimer relatively easily because he was faking audio for all of the diary entries in the series, not just some of them. Similarly, in a recent BBC documentary about Alcoholics Anonymous members, the faces of all the AA member participants were deepfaked to preserve anonymity, while those of other participants were not — again allowing the filmmakers a relatively easy way to flag the difference for viewers at the beginning of the film.
Join the discussion
This is surely not that hard. If the deepfakers are far-right activists or incels, then the severest penalties are appropriate. If the deepfakers are Extinction Rebellion or Just Stop Oil activists, then no action is required.
If a government agency commits a deepfake… well, that is something that only “philosophers of knowledge” can handle.
I am one of those who see lizard people everywhere, although they are deepfaked so well you cannot tell except by ‘just knowing’ – and they hold powerful positions in different organizations, like the WEF, BlackRock, and the Biden White House.
But what I am really worried about is ChatGPT. I was listening to some guy on Rumble explaining how 20% of all jobs will be taken by it within five years. He talked of buying insurance, and how the person in the insurance office is done – ChatGPT can do it. That is a pretty well-paid job. Doctors, programmers, all those WFH (work from home) types: maybe half of them are ‘unemployed men working’. The paychecks have not stopped yet, because some ChatGPT bot has not yet formatted their work into a ChatGPT system. But soon it will get around to it.
So what happens then? Naturally we think of the Luddites, and actually that is as far as I can get with the problem.
The WEF’s absolutely terrifying Lizard guy (and, I think, a follower of Satan), Yuval Noah Harari, says the biggest problem the world faces is what to do with all the useless people about to become economically pointless. A combination of drugs, VR, and computer games is what he foresees. He believes students today have no idea what to study, as what will be useful in a decade is unknowable, and that people will have to reinvent themselves and their skills at breakneck speed for the rest of their lives, or be pushed out onto some universal welfare.
He also points out, correctly, that people have been crying this particular wolf for thousands of years, and that those left jobless have always found something to do after industry replaces them – but, as he notes, in the boy-who-cried-wolf story, the wolf does arrive at the end.
“Harari calls it ‘the rise of the useless class’ and ranks it as one of the most dire threats of the 21st century. In a nutshell, as artificial intelligence gets smarter, more humans are pushed out of the job market. No one knows what to study at college, because no one knows what skills learned at 20 will be relevant at 40. Before you know it, billions of people are useless, not through chance but by definition.”
Crazy days coming, fast!
Just on the subject of ‘what to study’, it’s probably no coincidence that – leaving vocational education aside for one moment – the point of university education used to be to acquire skills in ‘how’ to think, which is rapidly being superseded by learning ‘what’ to think. Not being able to usefully question the data and information you’re presented with in everyday life is no doubt useful for those wishing to exert control over populations. (How they themselves learn to think is another matter!)
Back to vocational education. It seems to me that the future for medical practitioners (formerly known as doctors) lies in skills for engaging with those presenting with medical problems: in interpreting, in explaining, in discussing options as an empathetic human, rather than in diagnosis. A different profession then, to a major extent. How this might translate to other professions, such as law, may well be similar. As for teaching, though… hmm, that takes us back to the original issue around ‘what’ rather than ‘how’.
If the increased use of AI creates wealth, then this is really not such a hard problem to solve, assuming, of course, there is a will to solve it. Provide a basic income to all adult humans, perhaps, and tax the AI deployers hard. It is depressing that Harari uses such hateful (yes!) language, oozing with contempt for ordinary people, you might say for human beings, but that is the way these days with these oh-so-superior types.
And how do you propose to detect who is and isn’t using AI? For example, how would a teacher know if an essay was generated by AI or created by the candidate?
I don’t think the problem is that we won’t have enough money/wealth/resources to provide for these ‘useless’ people, who will be almost all of us (even me!), but that we will all lack any purpose or reason to get out of bed. Remember the saying: “All work and no play makes Jack a dull boy; all play and no work makes Jack a mere toy.” We are all going to be toys. And at some point the toy will be worn out and binned.
Yuval Noah Harari is my hero, tbh; there is no mention of any devil worship in his brilliant books. But drugs, VR and games sound awesome, if I can fit them in between golf and fishing.
“It’s hard enough these days to get people to stop believing that reptilian aliens are secretly in control of things”
That is precisely what the Reptilians would say.
Wait, are you saying the lizard who keeps trying to sell me car insurance is a fake?!
“…If the viewer is fully conscious that an image is faked, she will be less likely to believe it; but she will also be unlikely even just to suspend her disbelief in the way that imaginative immersion in a dramatic re-enactment requires…”
There is a simple way round this, though. Don’t tell people about the fakery at the start; tell ’em at the end. It would be like the small-print disclaimer at the front of books that no one ever reads, the one stating that any resemblance to real persons is purely coincidental, before the book proper commences, except with the fakery disclaimed at the end instead.
And I’m willing to bet this would fly, because it works for both sides. From the creators’ side, this approach would get round accusations of peddling fraud; more importantly, it would likely trigger a double engagement with the creative work, because, once told after the fact that the consumed experience contained fakery, consumers might be tempted to consume the same content again, this time with the aim of detecting the fake elements.
And from the consumers’ stance, it is crystal clear that we crave immersion, and are completely willing to embrace fakery in the context of creative content. We would hardly watch, say, a horror movie if there were a highlighted bar at the bottom, mandated by statute (in effect the Chinese approach), continually flashing the message ‘these are all fake people, and this is all fake’ in fluorescent yellow. Ditto for the Punch and Judy of PMQs. It is much more acceptable to be told at the end instead. So, because we are willing enough victims, we as consumers are likely to accept not being told until the end too, whenever the end is. For example, I fully expect the Tory party to disclaim after the next election that the whole of the last half-decade was just a big spoof, a jape. And we, the consumers, will happily forgive them for it.
It would be interesting to see the reaction of these libertarians when, being such excellent ironic targets, they inevitably become the subject of such deepfake videos and ‘documentaries’, killing their reputations and careers. I suspect they won’t like it.
We all run to it every day and it seems to have all the answers. I’ve determined that the Internet is the AntiChrist. (Turns off replies.)
Soon you will need the thumbprint of your right hand to buy and sell in the marketplace, and possibly something on your forehead (face recognition) to enter the internet.
I could say something about the Mark Of The Beast but I won’t.
Just checking, LOL.
But very possibly you’re correct.
We might be reptiles, but please don’t call us aliens. Some of us were hatched right here on Earth.
“If you stick (sic) us do we not bleed? If you tickle us do we not laugh?”
(p***k was censored!)
The problem with the internet is that people believe what they see on it, so the more deepfakes the better, because in time no one will trust anything: we will assume it is all fake, and no one can be offended by anything. Once everyone is suitably sceptical of everything, anyone wanting to present something as true will have to gain their viewers’ trust by declaring the standards they adopt in their research and presentation, and by allowing those standards to be questioned. Only such sites need be willing to have their honesty audited; there is no need to audit anyone who does not seek to be trusted. Trying to set standards for the entire internet is impossible and pointless. Free speech can be accommodated by allowing different audit bodies, each catering for a different perspective on life.
Presumably libel law applies to the creation of deepfakes that make it appear the person falsely depicted is doing something unsavory, just as laws against fraud apply to the use of deepfakes to spoof security data. We may need a few tweaks to the law so that anyone defamed by a deepfake, or alleging fraudulent activity using one, has the right to obtain information from the social media company in order to track down the right person to sue (I think governments throughout the developed world can already get the information needed for a fraud investigation), but once that’s done, a few large settlements should tamp down any real problems from the technology.