Deepfake technology uses AI to produce or alter video so that it appears to show something that never actually happened. The internet now hosts any number of videos of people doing things they never did: real people, real faces, close-to-photorealistic footage; entirely unreal events.
These videos are called deepfakes, and they’re made using a particular kind of AI. Inevitably enough, they began in porn – there is a thriving online market for celebrity faces superimposed on porn actors’ bodies – but the reason we’re talking about them now is that people are worried about their impact on our already fervid political debate.
The video that kicked off the sudden concern last month was, in fact, not a deepfake at all. It was a good old-fashioned doctored video of Nancy Pelosi, the speaker of the US House of Representatives. There were no fancy AIs involved; the video had simply been slowed down to about 75% of its usual speed, and the pitch of her voice raised to keep it sounding natural. It could have been done 50 years ago. But it made her look convincingly drunk or incapable, and was shared millions of times across every platform.
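The slowdown trick behind the Pelosi clip is simple signal processing: playing audio at 75% speed stretches it in time and drops its pitch by the same factor, which is why the fakers then raised the pitch to keep the voice sounding natural. A minimal sketch of that effect on a synthetic tone, using naive linear interpolation (all parameters here are illustrative, not taken from the actual video):

```python
import numpy as np

SR = 8000                      # sample rate in Hz (illustrative)
f0 = 440.0                     # frequency of a synthetic "voice" tone
t = np.arange(SR) / SR         # one second of samples
x = np.sin(2 * np.pi * f0 * t)

# Slow to 75% speed: stretch the waveform to 1/0.75 of its length.
# Played back at the same sample rate, it lasts longer and its pitch
# falls to 0.75 * f0 -- the drop that then has to be corrected by
# raising the pitch back up.
speed = 0.75
n_out = int(len(x) / speed)
y = np.interp(np.linspace(0, len(x) - 1, n_out), np.arange(len(x)), x)

# Dominant frequency of the slowed audio, via an FFT peak.
peak = np.fft.rfftfreq(n_out, 1 / SR)[np.argmax(np.abs(np.fft.rfft(y)))]
print(round(peak))  # close to 0.75 * 440 = 330 Hz
```

Real tools do the pitch correction with a phase vocoder or similar resampling trick; the point here is only that no AI is involved at any step.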
A lot of our fears about technology are overstated. Despite worries about screen time and social media, for instance, high-quality research finds little evidence that they have a major impact on our mental health. Every generation has its techno-panic: video nasties, violent computer games, pulp novels.
The technology already exists to create convincing but inauthentic audio and video content, and it is improving rapidly. Photo-editing software such as Photoshop has long been used to falsify images, and the public has learned to apply common sense and critical thinking to a picture whose content seems unlikely. Until recently, however, video has been much harder to alter in any substantial way, so it has often been treated as proof that something actually happened.
Because deepfakes are created by AI, they don't require the considerable skill it would otherwise take to make a realistic fake video. That means just about anyone could create a deepfake to promote a chosen agenda. One danger is that people will take such videos at face value; another is that people will stop trusting any video content at all.
How deepfakes work
- A creep gathers lots of photos of you online.
They could use Facebook, Snapchat, or Google Images. The easy way to see how exposed you are is to search your own name in Google Images and note how many photos are readily available; searching Facebook or Snapchat turns up still more. You get the picture. You are curating your online presence, right?
- The creep picks a video to superimpose you on.
The most popular choice these days seems to be pornographic, but it could be any kind of video, even one they shot themselves. Imagine footage from your office of another person with a similar body shape to yours, doing something you'd never do. Video-surveillance hosting companies like SmartVue and others already hold more footage than YouTube.
- A deepfake program does the work of mapping your face onto the video.
Deepfake programs are constantly improving. Next year's versions won't have the telltale artifacts that give away today's Trump and Cage videos.
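The face-mapping step above can be sketched in miniature. Early deepfake tools trained an autoencoder with one shared encoder and a separate decoder per identity; swapping decoders at inference time renders person B's face in person A's pose and lighting. The toy below uses random, untrained weights purely to show the wiring (all sizes, names, and functions are illustrative, not any real tool's API):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64 * 64, 256  # hypothetical flattened face size and code size

# One shared encoder learns a common representation of "faceness"...
W_enc = rng.normal(0, 0.01, (DIM, LATENT))
# ...while each identity gets its own decoder.
W_dec_a = rng.normal(0, 0.01, (LATENT, DIM))
W_dec_b = rng.normal(0, 0.01, (LATENT, DIM))

def encode(face):
    """Compress a flattened face image into a shared latent code."""
    return np.tanh(face @ W_enc)

def decode(code, W_dec):
    """Reconstruct a face from a latent code with one identity's decoder."""
    return code @ W_dec

# Training (omitted here) reconstructs A-faces through decoder A and
# B-faces through decoder B. The swap: encode a frame of person A,
# then decode with person B's decoder, yielding B's face in A's pose.
frame_of_a = rng.normal(size=DIM)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)  # a full face image, flattened: (4096,)
```

A production tool wraps this same idea in convolutional networks, face detection, and per-frame blending back into the source video; the decoder-swap is the core trick.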
We’re on the brink of a world where what’s real and what’s not will be undetectable by our biological systems: our eyesight, our hearing, our evolution-trained responses. You’re used to your sometimes-fake news being filtered by algorithms. You’ve accepted cookies to the point where your advertising is contextualized. Deepfakes are just the next iteration. We’re coming to a place where artificial intelligence is needed to counter artificial intelligence. Deepfake videos are a fuzzy window into how completely our digital reality could be rewired without our permission. How are we going to handle it?