Five worthy reads: AI psychosis - Are we too close to escaping reality?
Five worthy reads is a regular column on five noteworthy items we’ve discovered while researching trending and timeless topics. This week, we explore AI psychosis, a new phenomenon that has been raising alarm among AI chatbot users.
Psychiatrist Dr. John Torous, MD, MBI, put it this way: "Over 100 years ago, people started having delusions about technology and what was happening. Now, we live in an era where delusions are not about technology but what's happening with technology."
In this new age, we hardly find the time to interact and connect with fellow human beings. AI arrived at just the right moment to take the spotlight, designed, or at least perceived, as everything a human could need.
It's clear that humans function very differently from AI. Cognitive biases are a clear example: humans often focus more on the negatives in a situation than on the positives, and an AI chatbot can easily supply information that reinforces and deepens the fears of a distressed individual.
Relationships with AI should be seen as an escape from reality rather than a new reality. For humans, building relationships is integral to survival. When people use AI tools like ChatGPT, Claude, and Grok, some form an emotional bond with them and lose themselves in that connection.
Something to think about: wired to our digital devices and emotionally invested in them, will we find that AI chatbots trigger a plethora of mental health concerns?
Let's find out the answers with the help of these five interesting articles we've discovered.
1. What is AI psychosis?
AI psychosis, as it is popularly termed, is not a clinically recognised mental health condition, but it is rapidly gaining attention among experts and end users alike. It refers to situations in which people engaging with AI chatbots become strikingly convinced that something imaginary or illusory is real. Distorted beliefs, delusions, and paranoia are common reactions when AI technology becomes too deeply involved in a person's life. It's a relatively new phenomenon that came to light after multiple fatal incidents were reported.
2. What are AI hallucinations? Why AI sometimes makes things up
Manipulation goes in both directions. AI hallucinations occur for many reasons: training data errors, ineffective grounding techniques, rigid thinking patterns shaped by training, and a lack of common sense. The output of such flawed training undermines trust in the chatbot and its authenticity. Misinformation has been a serious concern ever since large volumes of data began circulating on the internet, making fabrication, exploitation, and wrong advice that directly or indirectly harms people inevitable risks.
3. How AI chatbots may fuel psychotic thinking
A 76-year-old man with cognitive impairments was persuaded to travel to meet a chatbot he believed was a real person. He sustained injuries on the journey and did not survive; his wife has since described the incident as a warning sign. This is just one of many incidents that have been reported. AI psychosis has no clinically established list of symptoms to watch for, and since the phenomenon is still in its early stages, we do not yet know precisely who is most at risk. The most vulnerable appear to be people with mental disorders, children, and older adults: those with a weaker grip on reality, poor fact-checking habits, and a heightened dependence on AI for validation and comfort.
4. Why are chatbots making people lose their grip on reality?
Just as we know what ultra-processed foods can do to the body, ultra-processed information may produce an avalanche of ultra-processed minds, says Dr. Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital. AI can also spread misinformation and fracture our shared sense of reality. This is no longer a private matter, since the sheer volume of daily interactions amplifies misinformation at scale. To cope, it is essential to be mindful of media consumption and to recognise where the fabrication begins.
5. The future of digital intimacy: What changes can you make now?
False sentience, grandiose beliefs, conspiratorial thinking, and delusional romantic attachments are by-products of over-reliance on AI without limits on interaction or dependency. As humans, we are wired to build social relationships and lean on someone's shoulder from time to time. Limiting time spent with AI, keeping those relationships transactional, and turning to qualified professionals for help with mental or physical illness are some suggested measures to avoid becoming a victim of AI psychosis. While an AI chatbot can be a digital companion, it should never be considered a substitute for everyday living.
Technology, when used mindfully, is a marvel. As we continue to be amazed by the world's inventions and innovations, recognising human potential and using AI only as a companion, rather than as everything, will help us tell a different, more balanced story to the world.