We’ve all heard the cliché, “Change is the only constant.”

Sure, it’s been overused to a point where it may have lost its meaning, but that doesn’t change the fact that this statement is true—and it couldn’t be more apt when describing the global tech landscape. It’s filled to the brim with pioneers and innovators competing with each other for any possible advantage, and this has brought about some incredible things: the smartphones we take for granted or even the more recent advents of extended reality and spatial computing, for example.

Unfortunately, as with all things, risk is inevitable. While the pioneers and innovators of the tech world are busy at work, there’s another, devious kind of innovator lurking in the shadows. You see, just like the tech experts developing tech to make our lives easier, malicious actors are working tirelessly, concocting increasingly complex and convincing methods of stealing our data. And rather unsurprisingly, AI is also being co-opted into their wicked machinations. Here are four risks we face as a result.

1. Generative-AI-enabled phishing scams

ChatGPT’s writing style is indistinguishable from human writing, meaning you can generate content that sounds legitimate from simple inputs. These language generation capabilities can enable threat actors to craft more effective phishing scams; some of the more obvious giveaways of a scam are poor grammar, spelling errors, and questionable sentence structure, and generative AI can eliminate these writing errors, making it that much harder to tell these malicious emails apart from legitimate correspondence. 

2. AI-generated malicious code

We’re all aware that ChatGPT is capable of writing code. And, just like its content filters, it has guardrails preventing it from creating malware or any other kinds of malicious code.

However, these guardrails aren’t infallible; it’s possible to bypass them if you’re clever enough. Let’s look at it this way; ChatGPT is obviously going to refuse if you explicitly ask it to create malware. What you can do is ask it to generate code for each specific function of the malware separately, make a few adjustments here and there to make it more effective, and compile it yourself. While ChatGPT isn’t capable of writing complex malicious code by itself, it can help low-skilled hackers create basic malware or improve existing code. And even though improvements are continuously being made to the guardrails, we must understand this risk can never be fully eliminated, so it’s important to restrict this activity as much and as quickly as possible. 

3. Malicious deepfakes

Let’s start with the good news: Currently, most deepfakes are very “uncanny valley” and fairly easy to spot by eye. However, rather alarmingly, this tech is becoming increasingly convincing while also, at the same time, more easily accessible. 

When deepfakes first started gaining traction, only those with the skillsets and expertise required  were able to create them; thankfully, most of these experts set about spreading awareness about this tech. But now, thanks to readily available AI tools, anyone can create deepfakes.

Wombo.ai and Avatarify are some of the most widely used deepfake tools today that enable you to use AI models to animate static images of people with ease and in a very short amount of time. Of course, these tools are nowhere near sophisticated as professionally made deepfakes, but this tech is evolving so fast that it’s completely fair to predict that these simple tools could also evolve to a level where they’re generating photorealistic results. 

4. Voice cloning scams

Voice cloning tools are those that use deepfake technology to mimic another person’s voice and speech patterns. While video deepfakes can be detected by eye (at least as of right now), it’s a lot more complicated when it comes to voice cloning.

Just check out this AI-generated Joe Rogan podcast with Steve Jobs and you’ll see (or hear, in this case) what I mean.

I’ll admit, if you’re someone who religiously listens to the Joe Rogan Experience and knows exactly how he sounds, you might be able to tell it’s not him speaking. But to a more casual listener (like in my case), AI Joe Rogan sounds just like real-life Joe Rogan. And unfortunately, there are already reports of this tech being used to scam people.

 

The rise of AI in the last few years is quite similar to the digital revolution of the 20th century. We’re talking about a highly disruptive subset of tech that is already revolutionizing our work, education, and even daily lives. And for all the benefits AI brings, it’s also of the utmost importance to recognize and prepare for the numerous risks it poses. So let’s ensure we’re proactively equipping ourselves with the defensive tools and know-how to effectively combat AI-powered security threats, lest we find ourselves woefully underprepared and under-equipped to deal with threats that could become commonplace in the very near future.