Top tips is a weekly column where we highlight what’s trending in the tech world today and list ways to explore these trends. This week, we’re discussing the dangers surrounding deepfakes.

In an era where AI is advancing at breakneck speed, deepfake technology is becoming one of the most disturbing cyberthreats out there. Imagine a world where a hacker can steal and mimic your face, your voice—your entire identity! And all of this with just a single photo and a few seconds of audio. Sounds like science fiction, right? It’s already happening!

What started as a tool for entertainment in movies and on social media has evolved into a sophisticated weapon for cybercriminals. The rise of ultrarealistic deepfakes is fueling fraud, identity theft, and large-scale financial crimes. Hackers no longer need extensive resources or technical expertise—AI-powered tools can now create convincing deepfakes in minutes. This rapid advancement poses serious threats not just to individuals but also to businesses, governments, and financial institutions. And with cybercriminals always attempting to stay one step ahead, knowing how to protect yourself and your organization is more important than ever.

Real-world deepfake scams that will shock you   

Deepfake fraud is no longer just theoretical—it’s happening right now. Here are some examples:

  • Elon Musk crypto scam: Hackers used deepfake videos of Elon Musk to promote fake cryptocurrency schemes. People lost hundreds of thousands of dollars.

  • Grandparent scam: Criminals cloned voices of family members, calling their loved ones and pretending to be in danger, urgently asking for money.

  • Corporate heist: A British engineering company was tricked into wiring $25 million after an employee had a deepfake video call with what appeared to be a member of senior management.

Why this is so dangerous 

The worst part? Most people don’t even question whether a video or call is fake. When a loved one calls sounding distressed, asking for help, our first instinct isn’t skepticism—it’s to react. And that’s exactly what scammers count on.

How to protect yourself from deepfake scams 

  1. Zero Trust approach: Assume nothing is real. If you receive a call or video from someone claiming to be a loved one in trouble, verify it through another channel (for example, hanging up and calling them back on a number you already have saved).

  2. Use safe words: The FBI suggests setting up a family “safe word” that only close relatives know. If someone calls you in distress, ask for the safe word before acting.

  3. Limit your digital footprint: The less audio and video of you online, the harder it is for scammers to clone you. Be cautious about what you share publicly.

  4. Use AI detection tools: Tools like Intel’s FakeCatcher, Microsoft’s Video Authenticator, and Reality Defender can help identify deepfakes.
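Detection tools like those above generally report a confidence score rather than a simple yes/no verdict, and it’s up to you (or your organization) to decide what to do with that score. As a minimal sketch of that last step—where the 0.7 threshold and the `triage_media` function are illustrative assumptions, not defaults from any of the tools named above—the decision logic might look like this:

```python
# Hypothetical example: turning a deepfake-confidence score into an action.
# The score itself would come from a detection tool; the 0.7 threshold
# is an illustrative assumption, not a vendor default.

def triage_media(deepfake_score: float, threshold: float = 0.7) -> str:
    """Map a detector's confidence score (0.0 to 1.0) to a triage decision."""
    if not 0.0 <= deepfake_score <= 1.0:
        raise ValueError("score must be between 0.0 and 1.0")
    if deepfake_score >= threshold:
        return "block"    # likely synthetic: reject or escalate
    if deepfake_score >= threshold / 2:
        return "review"   # ambiguous: route to a human reviewer
    return "allow"        # likely genuine

print(triage_media(0.85))  # block
print(triage_media(0.40))  # review
print(triage_media(0.10))  # allow
```

The middle "review" band matters: because no detector is perfect, treating ambiguous scores as a prompt for human verification (for instance, the safe-word check in step 2) is safer than trusting a single automated threshold.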

Why it matters now more than ever 

Deepfakes are no longer a futuristic nightmare—they’re here, and they’re being weaponized. Whether it’s stealing money, spreading disinformation, or committing fraud, cybercriminals are using AI-powered deception like never before.

The best defense? Awareness. Question what you see and hear, verify identities through independent channels, and stay informed about the latest scams. In a world where anyone’s face and voice can be stolen, trust—but verify—has never been more important.