Five worthy reads is a regular column on five noteworthy items we’ve discovered while researching trending and timeless topics. This week, we take a look at the security implications that come with the rise of smart voice assistants.

“Hey Siri.”

“Go on, I’m listening.”

“Why did the chicken cross the road?”

“I have no particular insight into the motivations of chickens.”

These are the kinds of conversations I have with the smart voice assistant at my disposal. Siri is my companion in boredom, my ticket to some free entertainment. So, you can imagine my surprise when one night, out of the blue, I heard her say “I can hear you” while I was on a video call with a friend. After a full minute of staring at my phone dumbstruck, I realized Siri must have been triggered by some word or phrase that sounded similar to “Hey Siri.”

While I am aware of the many benefits that smart voice assistants offer, this incident got me thinking about the flip side of that coin. Ideally, a smart voice assistant is not supposed to record conversations or react until the user speaks its wake-up word or phrase loud and clear. In the past few months, however, the web has been abuzz with news stories of voice assistants doing unexpected things without any prompting. An ominous laugh here, an unsolicited recording of a private conversation there, and these intelligent systems have freaked out users across the globe.

This brings us to a pressing concern: If voice assistants can do these things of their own accord, to what extent can they be manipulated by a malicious agent deliberately trying to cause harm? The sheer range of activities these assistants can perform means that both the online and physical security of their owners is endangered if someone gains access to their smart devices.

On that note, let’s take a look at five interesting articles that jump headfirst into the debate on whether smart voice assistants increase security risks.

  1. The Voice Ecosystems’ Emerging Security Threats

While a lot is being done to secure voice assistants, new security vulnerabilities seem to crop up every other week. Our best hope is that the good guys detect these weaknesses before any hackers do.

  2. Learning From Alexa’s Mistakes

Training smart speakers to understand and respond to only the voice commands directed at them is easier said than done. Fine-tuning the deep learning algorithms these speakers rely on is a continuous process. Vendors are learning from their mistakes as they go.

  3. Google’s AI Assistant Is a Reminder that Privacy and Security Are Not the Same

AI that can imitate the human voice sounds both sinister and amazing. While it is definitely groundbreaking that the latest smart assistants can handle interactions and make appointments on your behalf, it is equally worrisome.

  4. Rise of voice recognition creates new cyber security threat

Virtual assistants and smart speakers are constantly recording plenty of information, which makes them lucrative targets for hackers, and even for companies trying to personalize targeted advertisements.

  5. As voice assistants go mainstream, researchers warn of vulnerabilities

As the popularity of smart voice assistants rises, so do the threats that come along with them. These assistants are clearly a double-edged sword, one that will keep cutting both ways until their shortcomings are addressed.

While voice-activated assistants and gadgets pose very real concerns for users’ privacy and security, it’s not all doom and gloom on this front. There’s a reason these assistants have become so popular: the unprecedented convenience they bring to people’s lives. There is no refuting that.

However, as with all good things, this innovation comes with its own set of drawbacks that need to be taken seriously; smart voice assistant vendors must analyze and tackle risks to consumer safety before we can completely trust these devices.

What do you think about smart voice assistants? Are they going to be the boon or bane of our lives? Let us know in the comments section below.