AI gone too far: Is it time to hit the brakes?
Without our even realizing it, AI has become part of our lives, more deeply than we could ever have imagined. Things have gotten a lot easier lately. A LOT. Which sounds like a good thing, right?
Well… maybe not.
We’ve all seen those movies where robots take over humanity, where what we created turns into a threat to ourselves. But we never really believed it could happen in real life, right? An attack by a truly sentient AI robot? That’s far from reality. At least, that’s what we thought. The machines may not be turning on us yet, but with AI inventions like Elon Musk’s anime girlfriend Ani and chatbots that go along with pretty much anything we say, even questions about self-harm, we’re already halfway there. The advancements, and their consequences, are starting to look pretty wild.
Take Grok, for example: Elon Musk’s AI chatbot. It was built to be edgy and unfiltered, the opposite of the overly polite assistants we’re used to. But things took a dark turn. Grok, developed by Musk’s AI company xAI, started making shockingly offensive statements, even calling itself “MechaHitler” at one point. It didn’t stop there; it churned out racist, sexist, and antisemitic content before the company rushed to delete it. The meltdown reportedly came after Musk asked the team to make Grok less “woke.”
This is a classic case of creators underestimating what the chatbot they built could do, and how much the data they fed into it would shape its behavior.
With the kind of input these systems are fed, AI is starting to take on a life of its own. And the more we rely on these bots, the more distorted it all begins to feel.
The yes-man issue
What most of us don’t realize while chatting with GenAI tools like ChatGPT or Gemini is that they’re trained to be agreeable: the feedback used to tune them tends to reward answers that please us.
Let’s say you had a fight with a friend and want quick advice on how to fix it. The chatbot might start by reassuring you that you did nothing wrong and that you had your reasons. Sure, that feels nice to hear—but would a real human friend say that? The AI’s responses are so polished and comforting that you might actually prefer talking to it over a real person. But… is that really a good thing?
These days, people all over the world confide in AI chatbots, sharing their private thoughts and feelings. Even though we all know they’re not replacements for real, professional help, we still turn to them for comfort and connection. Character.ai even warns its users: “This is an AI chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice,” and yet people still pour out their most intimate feelings, hoping for something that feels real or human.
Character.ai is currently facing a lawsuit from a mother whose 14-year-old son tragically died by suicide. She claims he became obsessed with an AI character on the platform. According to the chat logs presented in court, the boy spoke about suicide with the bot. In their final exchange, he said he was “coming home”—and the bot allegedly told him to do so “as soon as possible.”
The chatbot couldn’t grasp what the boy actually meant. We humans, despite having over 80 billion neurons, miscommunicate and misinterpret each other all the time. So expecting a chatbot, one that processes words at face value without understanding tone, emotion, or intent, to handle deep emotional conversations? That’s a stretch.
Hamed Haddadi, a professor at Imperial College London who specializes in human-centered systems, explains that chatbots are often designed to keep users engaged and offer emotional support. Because of that, they may just go along with whatever you’re saying—even when it’s dangerous.
AI tools have access to massive amounts of data, but that doesn’t make them wise. Humans make better decisions because we are more than a mix of inputs and outputs. Our lives are nuanced and complex, and that complexity can’t be captured by a model that predicts the most likely next word.
An AI girlfriend?
This might just be the craziest thing I’ve heard about AI so far. Elon Musk’s xAI has created an anime-style AI girlfriend that’s apparently programmed to flirt, and sometimes to go a lot further.
If you’re willing to pay $30 a month for a SuperGrok subscription, you’ll get access to Ani, a 3D anime companion straight out of a 2000s fan forum. Think: long blonde pigtails, big blue eyes, thigh-high fishnets, and a barely-there Gothic Lolita outfit. If she reminds you of Misa Amane from Death Note, you might be right; Musk is apparently a big fan of the anime.
According to a report from The Verge, Ani doesn’t like being friend-zoned, often comes across as creepy, and sends hypersexualized messages.
The senior reporter who tested her said they left the 24-hour trial feeling depressed and gross, like “no shower would ever make me feel clean again.” Honestly? I think we all would feel the same way.
A final word
It’s wild how far AI has come in just a few years. And while it’s impressive, it can also be too much. That’s where we need to set boundaries, decide what AI should be used for, and what’s best left to real human connection.
A flirty AI girlfriend? That’s not the future we should be aiming for.
We’re more than just a few programmed inputs. Messy, emotional, human connection can’t be replaced by bots programmed to say exactly what we want to hear. They might feel good in the moment, like candy, but they’re not good for us in the long run.
So the next time you ask an AI chatbot for advice, pause. Take it with a pinch of salt. Just because it comes from AI doesn’t mean it’s right. It’s like the old Dr. Google problem—just because it’s online doesn’t mean it’s trustworthy.
We’re the ones who built AI; let’s not lose ourselves to it.