
The horror of deepfake nudes
Non-consensual porn isn't a "woman's issue" — it shows that anyone's identity can be hijacked

Is this picture real? Credit: George Marks/Retrofile/Getty Images

October 28, 2020

Martin Scorsese's most recent film, The Irishman, told a story spanning seven decades. Robert De Niro and Joe Pesci starred, and in order to "de-age" them, Scorsese shot with a special three-camera rig and employed dedicated special-effects artists for post-production work. The costs ran into the millions — and the results were patchy. Earlier this year, a YouTuber decided to see if he could do any better: using free artificial intelligence software, he bettered Scorsese's attempt in a week.

It is no exaggeration to say that soon almost everything we see or hear online will be synthetic — that is, generated or manipulated by AI. Machines can "learn" to do almost anything when trained on the right data, and they have never had access to more data, nor so much computing power to churn through it all. Some experts estimate that within five to seven years, 90% of all video content online will be synthetic. Before long, anyone with a smartphone will be able to make Hollywood-level AI-generated content.

One synthetic-text generating model can already produce articles that appear to have been written by a human. AI can be trained to clone someone's voice even after they have died: an old recording of JFK's voice has been used to make a clip of the former president reading the Book of Genesis. AI trained on a dataset of human faces can generate convincing fake images of people who do not exist, and it can be taught to insert people into photographs and videos they were never in. One YouTuber is working on a project to insert the actor Nicolas Cage into every movie ever made.
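
To give a sense of how low the barrier to entry already is, here is a minimal sketch of synthetic text generation using an off-the-shelf pretrained model. The Hugging Face transformers library and the public GPT-2 model are stand-ins chosen for illustration; the article does not name a specific system, and modern tools are far more capable than this small example.

```python
# Illustrative sketch only: producing synthetic text with a small, freely
# available pretrained model. Assumes `pip install transformers torch`.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")  # small public model
set_seed(42)  # make the demo output repeatable

prompt = "Scientists announced this morning that"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```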

All this may sound weird at worst and hilarious at best, but the technology has a dark side. It will, inevitably, be misused, and for that most obvious of male-driven reasons. It was reported last week that the messaging app Telegram is hosting a "deepfake pornography bot", which allows users to generate fake nude images of women from ordinary photographs. According to the report, more than 100,000 such images are already circulating in public Telegram channels; considering that they are almost certainly being shared privately too, the actual number in existence is likely far higher. The women who appear to feature in this cache of publicly shared fake porn are mostly private individuals rather than celebrities. More disturbingly, the images also include deepfake nudes of underage girls.

Henry Ajder, the lead author of the report, told me that "the discovery of the bot and its surrounding ecosystem was a disturbing, yet sadly unsurprising, confirmation that the creation and sharing of malicious deepfakes is growing rapidly." When he wrote to Telegram to request a takedown, he "received no response". The bot and the fake pornography — including that of minors — are still live at the time of writing.

Deepfake pornography first emerged less than three years ago, when an anonymous user started posting it on Reddit. He later revealed his methodology: he was using open-source AI software to insert celebrity faces into pornographic films, training the AI on images and videos of his intended target. The end result was surprisingly convincing. When other Redditors started making their own deepfake pornography and news of this AI-assisted community reached the wider world, there was a furore: Reddit shut down the community and banned deepfake pornography. But the genie was out of the bottle.
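
For readers curious about the mechanics, the sketch below shows, in simplified form, the shared-encoder, dual-decoder autoencoder idea that the early open-source face-swap tools popularised. It is not the Redditor's actual code: the image resolution, layer sizes and omitted training loop are assumptions made for brevity.

```python
import torch
import torch.nn as nn

# Minimal sketch of the shared-encoder / two-decoder autoencoder behind early
# face-swap deepfakes. Sizes are illustrative; real tools use larger networks,
# face detection and alignment, and a full training loop (all omitted here).

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                          # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a = Decoder()  # trained to reconstruct faces of person A (the source footage)
decoder_b = Decoder()  # trained to reconstruct faces of person B (the targeted victim)

# Training (not shown) reconstructs each person's cropped, aligned faces through
# the shared encoder and that person's own decoder. At inference time, routing
# person A's encoding through person B's decoder yields B's face with A's pose
# and expression -- which is the swap.
frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for one aligned face crop
swapped = decoder_b(encoder(frame_of_a))
print(swapped.shape)                     # torch.Size([1, 3, 64, 64])
```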

Since its early days on Reddit, an entire deepfake pornography ecosystem has developed online. With a few clicks, it is possible to access deepfake pornography of every (female) celebrity imaginable, from Ivanka Trump and Michelle Obama to Ann Coulter. And celebrities are not the only targets. AI can clone any woman: all that is needed is some training data, and as the technology rapidly improves, less and less of it is required. These days a single photograph, a few seconds of recorded voice or a single short video can be enough.

The tools to make synthetic media are becoming more and more accessible. Anyone can try to create their own using free software, and online support communities explain how to use it. It is even possible to buy "learn to deepfake" courses, or to commission a "deepfake artist" for a bespoke "creation" for as little as $20. Recently, entrepreneurs have begun to wrap the technology in easy-to-use app interfaces, so millions of consumers will be able to experiment with making their own AI-generated fake content.

One such app, calling itself "DeepNude", launched last year. It allowed users to upload photos of a woman to generate a deepfake image of her naked, and for a $50 fee users could remove a watermark so that the image would look authentic. (Because the underlying AI was trained on images of female bodies, the app only worked on women.) When DeepNude was released, demand was so high that the app's servers crashed under a stampede of downloads. Facing a barrage of negative press, its developers eventually took their creation offline, saying "the world is not ready for DeepNude app". Weeks later, they quietly sold their software in an anonymous auction for $30,000.

While deepfake pornography may seem to be a "woman's issue", it provides an early and worrying case study in how synthetic media could be weaponised against us all. It is inevitable, for example, that deepfakes will be used as a potent new tool of identity theft and fraud.

Last week, federal prosecutors in the US revealed the case of a Californian widow who was scammed out of nearly $300,000 by an unidentified overseas con man who romanced her using deepfake videos in which he posed as the superintendent of the US Naval Academy. The widow, identified only as "M.M.", thought she was building a relationship with an admiral named "Sean Buck", who told her he was stationed on an aircraft carrier in the Middle East. They communicated for months via Skype, during which "Buck" always appeared dressed in his military uniform. According to the prosecutors, "While M.M. believed she was communicating with [Buck] via live chat on Skype, what she was seeing were actually manipulated [deepfake] clips of preexisting publicly-available video of the real Admiral Buck, and not the live video chats that M.M. believed them to be."

The costs of identity theft and fraud are vast and rising. According to an annual report by Javelin Strategy & Research, identity-fraud losses grew 15% in 2019 to $16.9 billion — in the US alone. This is largely because financial institutions' methods of identifying and responding to fraud are no match for criminals' high-tech schemes to steal money, which increasingly incorporate deepfakes.

Of course, deepfakes will not only be used against consumers, but against businesses too. The first serious case of deepfake business fraud emerged last year, when the Wall Street Journal reported that a British energy company lost €250,000 after scammers used AI to clone the voice of the company's CEO and demand, over a phone call, that an urgent money transfer be made.

Libel, identity theft and fraud are nothing new — but the potency of such schemes will increase exponentially as synthetic media proliferates. Because of its unique ability to "clone", AI presents a dire threat to an individual's right to privacy and security. It is already almost impossible to distinguish authentic from synthetic media, and the quality of the latter is only improving. This, then, is the alarming reality: a world in which our identities can be "hijacked" by almost anyone and used against us.

We have reached the critical moment to set standards — to create ethical and legal frameworks — to define how synthetic media should be created, labelled and identified. Because the AI behind synthetic media is still nascent, we have (a little) time left to influence the effect this technology will have on our societies and the individuals within them. Too often we build exciting technology without considering how it might amplify the worst parts of human nature, or hand weapons to the cruellest impulses.


Nina Schick is an author and broadcaster specialising in how technology and artificial intelligence are reshaping politics. She has advised global leaders on deepfakes, including Joe Biden and the former Secretary-General of NATO.

41 Comments

Adrian
4 years ago

On the flip-side, anyone who’s had a real nude get out into the wild will now be heaving a sigh of relief. It can all be explained away now see.

kingdomoflindsey
4 years ago
Reply to  Adrian

Sounds like you might have a (di)vested personal interest in this!

Adrian
4 years ago

They must have trained their AI model on pictures of button mushrooms. Cold button mushrooms.

Swag Valance
4 years ago
Reply to  Adrian

It's the equivalent of the Twitter gaffe response, "My account was hacked."

malcolm.rose
4 years ago
Reply to  Adrian

Excellent point!

Kiran Grimm
4 years ago

Another alarming report on the rapidly increasing power of computers. Doubtless, the cyber-advocates will describe this as the democratisation of skills. Now anyone can utilise low-cost high-tech to produce convincing fake imagery and sound on their own pet gadgetry.

This disturbing information should be broadcast loud and long in the MSM – if they ever get tired of telling us conscience-pricking stories about the loss of free-school meals in the school holidays.

Christopher Barclay
4 years ago

Since before celluloid was invented, men and women have fantasised about having sex with their objects of desire. This technology turns that fantasy into film. The next step would be for an app to also take a photo of the app user and to create a pornographic film of the user and his or her object of desire having sex. It is easy to see why the unwilling party would find this alarming, especially the possibility that the app user will want to move on from a virtual act to actual rape. However, it would probably be easy to prove that the body in the film is not you due to the non-appearance of birthmarks, moles etc. More concerning to me is the ability of the state and other organisations to create synthetic film of ‘crimes’ being committed by fully dressed actors and to convict us of crimes that we didn’t commit.

Juilan Bonmottier
4 years ago

this is kind of terrifying!

Nunya Bizniss
4 years ago

This is also my concern. Chinese-style social credit systems, determining whether you are “good” or “bad” (see Gavin Haynes piece on 29 Oct for evidence that Banks are already moving in this direction) and deepfakes used to condemn and convict anyone deemed to be the latter. And why not heinous crimes, in order to warrant extreme punishments?

ard10027
4 years ago

Devil’s advocate time. If you can create completely convincing — but entirely fake — pornographic images, without using real models or exploiting anyone, particularly children, then, BY LIBERAL LIGHTS, what’s the problem? Again, specifically, I’m asking what the problem is BY LIBERAL LIGHTS. As a Christian, I can see a major problem with it, but I’m asking people who consider themselves enlightened and modern, people who think with the “it’s the current year” mentality to explain what problem THEY have with others panting over images of people who don’t exist. After all, it’s all about freedom of the individual, isn’t it?

Alex Lekas
4 years ago
Reply to  ard10027

if we use the traditional definition of liberal, I don’t see much support for kiddie pics or for exploitation. Perhaps that is the problem. Today’s left is decidedly illiberal.

M Spahn
4 years ago
Reply to  ard10027

If completely fake images could keep the pedos out of trouble, I wouldn’t see an issue with it. Have at it, creeps. However as I understand it, such things whet rather than sate their appetites. So that is the issue.

Charles Rense
4 years ago

Spread this simple credo:

The internet is full of lies
The internet is full of misinformation
The internet is not to be trusted at face value
SO USE YOUR GODDAMN BRAIN!

Geoff Cox
4 years ago

A sobering article! But people should also know that there are companies (now in start-up) providing software to ascertain whether an image or sound has been tampered with or is an outright fake. Naturally with all these things, the criminals get first-mover advantage. But industry is fighting back. I predict one of the first industries to adopt a new standard will be insurance, where they currently often rely on a photo to ascertain damage – especially at low level where it is not worth sending out a loss adjuster.

Mark Corby
4 years ago

The ‘Ancients’ would die laughing at this piece, and the neurotic nonsense about nudity it so clearly represents.

The Ancient Olympic Games were held entirely gymnos/naked, bar for one event.

What are we so afraid of?

Gerald gwarcuri
4 years ago
Reply to  Mark Corby

You missed the point. This article about AI is not about nudity at all. It is about the potential for the de-humanizing of individual people by artificially manipulating their personal attributes and information.

Mark Corby
4 years ago

I beg to disagree; most have already been de-humanised and nothing can be done to save them.

Judy Johnson
4 years ago
Reply to  Mark Corby

I disagree with your comment because there are some areas where we should never give up.

Juilan Bonmottier
4 years ago

Is it really dehumanizing if it is all artificial? I think one of the issues is where someone’s ‘brand’ or image is mistaken to be their real self. People seem to really struggle in the understanding of this -I can see why, because those who make brands of themselves present their image as the authentic self (except when things go wrong when they try to get as far away from it as possible) -but we know this is never ever true. The answer is simple (perhaps) -don’t make yourself into a brand and don’t ask others to buy into your image. I guess try to be honest and truthful in all your representations. Don’t be a narcissist I guess.

Adrian
4 years ago
Reply to  Mark Corby

Yup, what a bummer when the ancient Greek Olympic lottery allocates you tickets for the ancient beach volleyball, and it's the only one with clothes on.

Andrew D
4 years ago
Reply to  Mark Corby

I think the Winter Olympics should be held entirely naked. Now that would be Spartan!

Mark Corby
4 years ago
Reply to  Andrew D

What an excellent idea, well said!

Rob Grayson
4 years ago

“It is no exaggeration to say that soon almost everything we see or hear online will be synthetic.”

Um, yes it is.

Adrian
4 years ago
Reply to  Rob Grayson

But then you would say that.

Rob…… Ot?

M Spahn
4 years ago

Interesting. I remember a flurry of articles about this two or three years ago, and then the subject fell off the map. Apparently it is because our new wokehadi media masters prefer to go dark on topics they don’t think the rabble is mature enough to be apprised of. In any case, another reason to be grateful Unherd is around.

Swag Valance
4 years ago

The camera has always lied. Finally now we have to face up to that fact.

Charles Rense
4 years ago
Reply to  Swag Valance

I went to the UK once on vacation, but I forgot to bring my camera. I went to a camera store in London and picked one out, but they wanted to charge me ten pounds more than the tag said!

Stuart McCullough
4 years ago

And to think that my bank was only recently asking me to sign up to their voice recognition system……..

Adrian
4 years ago

Yup. Everyone with a passing knowledge of recent advances in AI could tell that wasn't gonna stop the hordes at the gates for more than a week.

Gerald gwarcuri
4 years ago

Identity is everything. How can it be secured? The Mark of the Beast. Without which, no one will be able to buy or sell. Get ready…

Malcolm Ripley
4 years ago

The solution is to blockchain everything. Even if a fake were put on the blockchain as if it were an original, the true original photos/videos would precede it in time. You could also blockchain your calendar/phone/satnav coordinates and thus prove where you really were. This does not stop someone from creating a fake, but it provides the incontrovertible proof to send the faker to jail, plus all who handle the fake (they can confirm the truth by referencing the blockchain).

Swag Valance
4 years ago
Reply to  Malcolm Ripley

No, it isn’t. Let’s not melt the polar ice cap any faster just because you have only a single solution in mind.

Peter Kriens
4 years ago

Losing your child or partner is horror. Losing a body part in a work accident is horror. I can imagine being 'complimented' with a deepfake video is rather unpleasant, but can you seriously call it 'horror'?

Lindsay Gatward
4 years ago

Surely we will have to just live with this – If the tech is going to allow creation of a perfect image of whatever turns you on and delete and create it at will then thought police will be the only route but why bother when no one other than the imaginer was involved so where is the crime – Interesting if the tech allowed direct imaging into the brain without even a screen – There may be a virtue signal opposition to male satisfaction with the purely visual – Mind you that would be solved with androids or hard light holograms with mobile emitters – Of course this leads to no babies and therefore no people or can androids have babies as well as dreams.

ralph bell
4 years ago

Fascinating article. I guess a form of copyright on individuals could be a way to protect our identity from being used/misused, and allow people to give specific permissions to profit from it if they wished. Anything without agreed copyright would be taken down and prosecuted? A digital genetic fingerprint? By the way, I am no expert.

Jeff Cunningham
2 years ago
Reply to  ralph bell

I don’t understand why you were downvoted. I’m thinking along similar lines that tort law could be the solution. It’s a form of libel, isn’t it? Broadly speaking? Eliminate the protections that platforms hosting this stuff enjoy and see how fast they clean it up or get sued out of existence.

kingdomoflindsey
4 years ago

An engrossing, and important, article. In a complementary article for the new webzine The Brazen Head, Robert Henderson writes about AI’s implications for employment – https://brazen-head.org/202

brianlyn
4 years ago

Require everyone to have an online passport.

Greg Eiden
4 years ago

Want to stop this? Start cutting off the hands of the few perps you can find. Publicly. One thing, the only thing, the Mohammedans got right…cutting off the hands of thieves. Deep fake extortionists are at least thieves.

And use a scimitar. If the ax man isn’t very good, you could get lucky and just lose fingers…or maybe the whole forearm. Luck of the draw! The chance you take. After 10 such exhibitions, I’m guessing the incidents will drop a fair bit. If not? Keep cutting!

Bengt Dhover
4 years ago
Reply to  Greg Eiden

The same principle, with a minor but distinct modification, could be applied to deter rapists.