The subtle (but simple) ways to spot a deepfake image

Deepfake technology is getting better and better. But it's not perfect.

Generative artificial intelligences (AIs) are often large, multimodal ‘deep’ neural networks that have been trained on vast numbers of images, videos and associated text.

Give a trained model a description of a picture you want, and it can invent new images to match. A generative AI such as DALL-E 2 or Midjourney can create remarkable new images of almost anything, in any style you like, from photorealistic to cartoon.
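In practice, this can be a single API call. Here is a minimal sketch using OpenAI's Python client to request an image from a text prompt (the model name, prompt and client setup are assumptions for illustration, and such APIs change often):

```python
# A minimal sketch of text-to-image generation, assuming the OpenAI
# Python client (`pip install openai`) and an API key in the
# OPENAI_API_KEY environment variable. The model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Describe the picture you want; the model invents an image to match.
response = client.images.generate(
    model="dall-e-2",
    prompt="A photorealistic portrait of an astronaut in a sunflower field",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # link to the generated image
```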

How are deepfakes made?

Combine the power of generative AIs with other AIs that can automatically detect people in images and video, and you have the power of a special-effects artist: you can swap someone's face or body almost invisibly.

Other generative AIs can reproduce a person's voice, so you can fake photos and videos entirely, making it look and sound as though someone was there when they weren’t. You can even remove them from a scene altogether and fill in the background, as in the BBC drama The Capture. This is a deepfake.
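One building block of this pipeline, locating faces automatically, is now an off-the-shelf capability. Here is a minimal sketch using the pre-trained Haar cascade detector that ships with the opencv-python package (the image path is a placeholder):

```python
# A minimal sketch of automatic face detection, one step in a deepfake
# pipeline, using the pre-trained Haar cascade bundled with
# opencv-python (`pip install opencv-python`). "photo.jpg" is a placeholder.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Each detection is an (x, y, width, height) box; a face-swapping model
# would replace the pixels inside these boxes with generated ones.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces.jpg", image)
```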

The tell-tale signs of a deepfake

Generative AI systems are amazing, but the first iterations of this technology were plagued by subtle, yet telling errors:

  1. Look at the details: Are there irregular objects – such as a hand or a branch – in the 'wrong' place? Look at where an object passes across a face, and you may see inconsistencies where the original face peeks through. A common example is eyelashes showing through a person's hair.
  2. Look for realism: Do the colours, shadows and backgrounds match? Look for things that would be impossible or bizarre, like weird hands, a foot merged with a tree, or even too many arms. (A simple automated check that complements this visual inspection is sketched after this list.)
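Beyond eyeballing an image, simple forensic tools can support these checks. One such technique (not mentioned above, but widely used) is error level analysis: re-save a JPEG at a fixed quality and amplify the difference, because pasted-in or regenerated regions often recompress differently from the rest of the picture. A minimal sketch with the Pillow library, where the file names are placeholders:

```python
# A minimal sketch of error level analysis (ELA) using Pillow
# (`pip install Pillow`). Regions that recompress differently from the
# rest of the image show up brightly and may indicate edits.
# "suspect.jpg" is a placeholder path.
from PIL import Image, ImageChops

original = Image.open("suspect.jpg").convert("RGB")

# Re-save at a known JPEG quality, then diff against the original.
original.save("resaved.jpg", quality=90)
resaved = Image.open("resaved.jpg")
diff = ImageChops.difference(original, resaved)

# Stretch the (usually faint) differences so they become visible.
max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
ela = diff.point(lambda value: min(255, int(value * 255 / max_diff)))
ela.save("ela.png")
```

ELA is only a heuristic: a clean result does not prove an image is genuine, and heavy recompression can wash out the signal.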

Many of these tell-tale signs are disappearing as the technology improves. This is the unfortunate side effect of making better photo and video editing software: the better the software becomes, the easier it is for anyone to make undetectable misinformation and deepfakes.

Adobe, which makes this kind of software, is also trying to introduce content authentication (it co-founded the C2PA standard behind 'Content Credentials') so that, in future, we can tell what is real and what isn’t.
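The core idea behind content authentication is cryptographic: sign a fingerprint of the image when it is captured or edited, so any later tampering breaks the signature. The toy sketch below illustrates the principle only; real schemes such as C2PA use public-key certificates and signed metadata manifests rather than the shared secret shown here:

```python
# A toy illustration of content authentication: bind image bytes to a
# signature so tampering is detectable. The shared HMAC key is purely
# illustrative; real systems (e.g. C2PA) use public-key signatures.
import hashlib
import hmac

SIGNING_KEY = b"camera-or-editor-secret"  # placeholder key

def sign(image_bytes: bytes) -> str:
    """Sign the SHA-256 hash of the image bytes."""
    digest = hashlib.sha256(image_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, signature: str) -> bool:
    """Return True only if the image is unchanged since signing."""
    return hmac.compare_digest(sign(image_bytes), signature)

photo = b"...raw image bytes..."
tag = sign(photo)
print(verify(photo, tag))            # True: untouched
print(verify(photo + b"edit", tag))  # False: tampered with
```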

Let’s hope they succeed. 

Asked by: Victoria Shields, via email

To submit your questions email us at questions@sciencefocus.com (don't forget to include your name and location)