September 16, 2021 - 7:00am

Artificial Intelligence has been heralded as ‘the future’ for as long as I can remember. The excitement is understandable: scientists may one day be able to create an AI so capable it can take over the work of humans, replacing doctors, engineers and even journalists. But professional occupations are not the only area where AI could be a powerful force.

Just a few weeks ago I speculated that if apps like Replika, which is supposed to simulate both friendships and romantic relationships, became too effective at their job, they might replace certain aspects of human interpersonal relationships. AI could provide the same sort of parasocial digital relationship one might gain by purchasing an OnlyFans membership, while cutting out the human on the other side of the screen altogether.

What else happens when you can credibly replicate somebody’s actions, voice, body, and image? The arrival of a new AI-driven ‘deepfake’ app gives us reason to be apprehensive. In fact, tech publications have gone so far as to vow not to name the application, for fear of causing an unintended spike in its popularity.

The reason for the concern comes down to the app’s central feature: it can create deepfake pornography at the click of a button. Deepfaking is the process of superimposing the image of one person onto a video of another, making it appear as though the target is performing an action they never did. The tech has been around for a while (you might be familiar with the ReFace app, which lets you face-swap yourself into gifs), and attempts at deepfake pornography have been made before. But this latest app is by far the easiest to use.

The existence of deepfake pornography is disturbing, and while for the moment it is still mostly possible to tell what is fake from what is real, researchers fear we will one day reach a point where the AI is so sophisticated that it becomes functionally impossible to tell whether digital content is genuine.

The venture capitalist Anirudh Pai offers his vision of the internet after the deepfake takeover:

All online content will eventually be fake because of deepfakes. The technology cabal isn’t strong enough to monitor the content on their own platforms, much less that of the entire internet. Perhaps the only medium to fix this is a truth-first internet, underpinned by the blockchain, where people will discern what is real through proof of work. Secondly, the public internet will die and be split up into discrete parts of exclusive information silos where people can police the quality of the information they receive.

But these private servers are walled gardens; not everyone gets invited. So what happens? As the internet contorts under its own weight, we become desensitised and uninterested. Maybe we’ll just decide to log off, and watch as a new era of Luddism begins.


Katherine Dee is a writer. To read more of her work, visit defaultfriend.substack.com.
