AI: Why the next call from your family could be a deepfake scammer

Scammers are now using artificial intelligence to replicate voices and faces, making scams even more realistic – here's what to look out for.



Published: August 26, 2023 at 8:00 am

Gone are the good old days of princes offering up their wealth via email and dodgy online prizes that only require all of your passwords and details. Scams are getting both more complicated and a whole lot more convincing.

Thanks to the ongoing boom in artificial intelligence (AI), scammers can now replicate the voice of someone you know and, in some cases, even their face. Nor is this technology reserved for the most tech-obsessed: it is available to anyone with a half-decent computer and an internet connection.

Whether it's a family member in need of cash, a friend stuck in a bad place, or a colleague asking you to make a payment, AI phone scams replicate the people you trust, playing on the psychology of trust and fear to get victims to hand over money in the belief that they know the person on the other end of the line.

So how does this technology work, and is there any way to better prepare yourself to deal with the scams of the future? We spoke to Oli Buckley, a professor of cyber security at the University of East Anglia, to find out more about these new scams.

What is a deepfake?

While scams continue to come in a variety of forms, these latest ventures tend to rely on a technology known as deepfakes.

“Using an artificial intelligence algorithm, they create content that looks or sounds realistic. That could be video or even just audio,” explains Buckley.

“They need very little training data and can create something quite convincing with a standard laptop anyone can buy.”

In essence, a deepfake algorithm takes examples of footage or audio of someone and learns how to accurately recreate their movements or voice. This can then be used to plant their face on someone else’s body, have their voice read out a script, or carry out a host of other malicious activities.

While the technology sounds complicated, it is surprisingly easy for anyone to make a deepfake on their own. All a scammer needs is a publicly available video or recording of your voice, and some reasonably cheap software.

“The software can be easily downloaded and anyone can make a convincing deepfake easily. It takes seconds rather than minutes or hours, and anyone with a bit of time and access to YouTube could figure out how to do it,” explains Buckley.

“It’s one of the benefits and curses of the AI boom we’re seeing right now. There is amazing technology that would have been science fiction not that long ago. That’s great for innovation, but there’s also the flip side with this technology in the wrong hands.”

The world of deepfake scams

Since they first appeared, deepfakes have been put to malicious use, from faking political speeches to creating pornographic material. But recently, they have been on the rise in the world of scams.

“Being able to make someone do or say whatever you want is quite a powerful ability for scammers. There has been a rise in AI voice scams, where someone will receive a phone call or even a video call of a loved one saying they are in trouble and need money,” says Buckley.

“These are pulled from data available on the internet. They don’t need to be 100 per cent accurate, relying instead on fear and a desperate situation where you panic and overlook inconsistencies.”

While these scams come in many different forms, the usual format is a call from an unknown number. The person on the other end uses a deepfake to pose as a family member, or as someone who would normally turn to you for money.

The scam could also take the form of a voicemail, for which the scammer can prepare a script in advance. In a live call, there are often long pauses as the scammer waits for the voice generator to produce responses to the questions being asked.

With basic technology, these deepfakes are unlikely to be perfect, instead offering a version of someone’s voice that might sound slightly off. But by relying on the stress of the moment, scammers hope that people won’t notice, or will put it down to the caller being stressed.

How to fight back against deepfakes

As these scams become more common, two questions arise: what is the best way to deal with them, and can the public do anything to make themselves less of a target?

“It’s easy to be critical when it isn’t happening to you, but it is hard in the moment. Question whether it sounds like them, if it is something they might say themselves, or if it seems like an unlikely situation that they are describing,” says Buckley.

“There are pieces of software that can be used to identify a fake, but the average person is unlikely to have this on hand. If you receive an unexpected call from a loved one and you are suspicious, call them back or text them to check where they are. Consider the reality of the situation and go from there.”

A surprisingly small amount of audio or footage is needed to create a realistic deepfake. In the past, this might not have been such a problem, but today there is plenty of footage and audio of most people online.

While it is possible to try to remove all of your online content, this is a big ask, requiring a heavy scrub not just of your own social media but of your friends’ and family’s too. Equally, your workplace or social groups might have posted usable footage and audio of you.

“We all live quite publicly now, particularly because COVID created this sense of online community growing as we were physically separated from everyone,” says Buckley.

“A shift towards living our lives online to a degree and maintaining digital relationships through online personas means there are loads of photos, videos, and audio of us out there. The best option is to simply be objective, consider how likely deepfake content is, and be wary of calls or videos that don’t feel believable.”

A change in mindset

Artificial intelligence has grown drastically in capability over the last year. While this has produced a lot of good, it has been balanced by an equal amount of bad.

While there are methods to detect its use, including in the examples listed above, scammers are quick to adjust their technology once a telltale flaw becomes known. At one time, a deepfake could be identified by its unnatural eye blinking, but that giveaway was soon fixed.

Rather than hunting for errors or quirks, Buckley and other experts in the field advocate a change in mindset.

“The technology is outpacing the way we think about it and the way we try and legislate for it. We’re kind of just playing catch-up at this point. It is going to get to the point where we’re no longer sure what is real and what is not.

“You can’t just believe your eyes these days; you have to think a bit more widely about the videos you see, or the calls you get. Critical thinking is the most important factor when dealing with deepfakes, or any scam like this.”

Buckley argues that it all comes down to assessing the reality of the situation: taking a step back and considering whether what you’re seeing or hearing is plausible.


About our expert, Oli Buckley

Oli is a professor of cyber security at the University of East Anglia. His research focuses on the human aspects of cyber security, including privacy and trust, social justice and the ways technology can be used against us. His work has been published in journals including Communications in Computer and Information Science, the Journal of Information Security and Applications, and Entertainment Computing.

