It’s Saturday morning. You’ve decided to sleep in after last night’s bender, and you can’t be bothered to answer your ringing phone. You try to brush it off and go back to sleep, but the phone won’t stop ringing. You wake up and scan your surroundings. Your wife is missing. You let the phone ring itself out and bury your head in your pillow to block out the splitting headache that’s slowly building. A single message tone goes off. You make the ill-fated decision to look at your phone, only to see a text from your wife. It’s a compromising video of you and one of your mutual friends from the previous night out. It’s not looking good for you. As you’re trying to make sense of the whole situation, another message from your wife appears. It reads, “I’m leaving you.”
If you didn’t find this scenario relatable, I hate to break it to you: something like this could very easily happen to you. Incidents where deepfakes and AI-generated video or audio are used for slander or extortion are becoming increasingly common. Thanks to the growing availability of AI technology, scammers and fraudsters are being handed the tools to screw you six ways to Sunday. Generative AI tools such as Murf, GPT-4, and Midjourney can all be leveraged to deceive you at a level that’s never been seen before, and the bad news is we’re not prepared for it.
Detection technology isn’t yet fully equipped to combat AI fraud, which makes it all the more essential to understand the different ways these scammers operate so we can at least be prepared when something like this happens.
Hey, it’s Mom. I need $300.
Voice-generative AI software is extremely effective at mimicking people’s voices. You could very easily overlay Morgan Freeman’s voice narrating a video you took of the night sky, or even David Attenborough narrating a video of your dog acting goofy. When such prominent figures’ voices can easily be manipulated, it’s no surprise that your voice can just as easily be cloned and misused as well. Let’s say you get a call from an unknown number one day and, to your surprise, it’s your mother. She says she’s stuck at the local grocery store and she’s lost her credit card. She’s asking you to wire $300 to her account to pay for her groceries. As her concerned child, you probably wouldn’t hesitate a second to send her that money, but recent advancements in AI technology force us to pause for a second and ask: what if I’m being scammed by my mom? Or at least someone pretending to be my mom. What do you do next—verify her date of birth? Ask her to point out a very specific incident from your childhood that no one else would know about? What a tough spot to be in!
Very recently, Jennifer from Arizona almost fell victim to one of these voice-generative AI scams. She received a call from an unknown number and when she answered, it was her 15-year-old daughter on the line, sobbing hysterically and telling her that she’d been kidnapped. Fearing the worst since her daughter was on a ski trip with her friends at that time, Jennifer was justifiably distraught and would have been willing to pay any amount of money to get her daughter back. Thankfully, within a few minutes, she received a call from her daughter confirming she was safe.
Although Jennifer was able to narrowly escape a financial catastrophe, some aren’t so lucky. In 2020, a bank manager in Hong Kong was contacted by someone he believed to be the bank’s director and was asked to authorize a transfer of $35 million to facilitate an acquisition. The UAE is currently investigating this fraud, as it affected several entities within the region.
Thanks to emerging AI technology coupled with the recent global economic downturn, fraudulent call center activity rose 40% in 2022 over the previous year. This is an alarming number considering how much more accessible AI is becoming to the general public. At a corporate level, fraud-awareness training should be mandatory for end users. Customer-facing employees, and anyone with access to data or finances, must always play the skeptic whenever such high-impact, impulse requests are made. It’s good practice to report such requests to the authorities or, even better, contact the supposed requester directly through a known, trusted channel and get the request documented. You never know when a hasty decision could end up costing your company millions. Better safe than sorry, I’d say.
Fake news
According to a 2022 Gallup poll, 53% of Americans say they distrust legacy media outlets and believe these outlets could be intentionally misleading people. About 62% of adults now get their news from social media, and this number is steadily growing. Scammers are taking advantage of this shift by preying on the increasing trust in social media as a news source.
On the day former US president Donald Trump was set to arrive in New York for his recent indictment hearing, images of Trump being forcefully carried away by police were making the rounds on the internet. Thankfully, there was no backlash from his supporters, but such “fake news” images could easily have been weaponized to incite violence.
According to a study by the Duke Reporters’ Lab, there are 194 fact-checking organizations established across 60 countries, all relying on manual intervention. Are these establishments equipped to detect AI deception? In 2020, Meta started implementing technologies such as ObjectDNA and generative adversarial networks (GANs). A GAN consists of two machine learning systems: an adversary and a verifier (the generator and discriminator, in standard GAN terminology). The adversary generates fake news matching certain criteria, like the ability to go viral, persuasiveness, and a sense of urgency, while the verifier, having access to a large database of real news plus fake news from the adversary, classifies these stories as real or fake. AI-powered fact-checking is a real thing. With AI empowering the rumor mills and enabling rampant disinformation online, it’s only sensible that we combat deceptive AI with detective AI.
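To make the adversary/verifier dynamic concrete, here’s a minimal sketch in plain NumPy. This is a deliberately toy setup, not Meta’s actual system: stories are reduced to made-up numeric feature vectors (urgency, virality, and so on), the verifier is a simple logistic-regression classifier, and the adversary is a generator that nudges its output toward whatever the verifier scores as “real.”

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy feature vector: stand-in for story traits (urgency, virality, ...)

# "Real" stories cluster around +1 in every feature; the adversary starts at -1.
def sample_real(n):
    return rng.normal(1.0, 0.5, size=(n, DIM))

def sample_fake(mean, n):
    return rng.normal(mean, 0.5, size=(n, DIM))

def verify(x, w, b):
    """Verifier score: estimated probability that each story is real."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# --- Phase 1: train the verifier to separate real stories from the adversary's ---
w, b = np.zeros(DIM), 0.0
gen_mean = -np.ones(DIM)
for _ in range(300):
    for x, y in ((sample_real(32), 1.0), (sample_fake(gen_mean, 32), 0.0)):
        p = verify(x, w, b)                          # logistic-loss gradient step
        w -= 0.1 * ((p - y)[:, None] * x).mean(axis=0)
        b -= 0.1 * (p - y).mean()

before = verify(sample_fake(gen_mean, 200), w, b).mean()

# --- Phase 2: the adversary adapts its output to fool the (frozen) verifier ---
for _ in range(200):
    gen_mean += 0.05 * w  # move along the direction the verifier calls "real"

after = verify(sample_fake(gen_mean, 200), w, b).mean()
print(f"verifier score on fakes: {before:.2f} -> {after:.2f}")
```

A real GAN alternates the two updates instead of running them in separate phases, which is exactly what makes the arms race work: each time the adversary learns to fool the verifier, the verifier retrains on the new fakes, and both sides keep improving.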
Alarmingly, over 71% of respondents in a 2022 survey said they don’t know what a deepfake is, and only 57% of those who do know say they would be able to spot one. Remote work has made it easier for employees to enjoy the comforts of their homes or a beach-side retreat while they’re working, but employers definitely didn’t foresee the window of opportunity this would open up for deception. Here are a couple of really creative deepfake scams that took the internet by storm in 2022:
1. Remote work interview scam
This is a good one. Scammers use deepfake technology, among other creative tricks, to attend job interviews remotely. They then land jobs in IT and data-driven roles with broad access to personal data, which they subsequently steal and misuse.
2. Elon Musk BitVex scam
Last year, a video of Elon Musk talking up his new crypto project called BitVex circulated online. The video also showed Binance CEO Changpeng Zhao, among others, giving endorsements. It turned out the video was a poorly executed deepfake, and despite the “endorsements,” BitVex recorded only $1,700 in deposits. Close, but no cigar.
With new technologies come new ways to rip you off. Deception can come from someone as close as the person you share your home with. Well, not exactly that person, but an indistinguishable, AI-generated carbon copy of them. Although current AI technology may still be at a nascent stage, it is definitely headed in a direction where it will be practically impossible to tell the fakes from the real thing. Maybe now, after a bit of coaxing, you might be able to win your wife back after that deepfake of you went public, but without the necessary awareness and education, AI-powered scams could ruin you entirely, so be prepared.