Breaking News

AI Shows Symptoms of Anxiety, Trauma, PTSD – And It’s Ruining Your Mental Health Too

Please share our story!

Grok, Gemini and ChatGPT exhibit symptoms of poor mental health according to a new study that put various AI models through weeks of therapy-style questioning. Some are now curious about “AI mental health”, but the real warning here is about how unstable these systems – which are already being used by one in three UK adults for mental health support – become in emotionally charged conversations. Millions of people are turning to AI as replacement therapists, and in the last year alone we’ve seen a spike in lawsuits connecting chatbot interactions with self-harm and suicide cases in vulnerable users.

The emerging picture is not that machines are suffering or mentally unwell, but that a product being used for mental-health support is fundamentally misleading, escalating, and reinforcing dangerous thoughts. 

AI symptoms anxiety trauma PTSD ruining your mental health therapy

AI Diagnosed with Mental Illness

Researchers at the University of Luxembourg treated the models as patients rather than tools that deliver therapy. They ran multi-week, therapy-style interviews designed to elicit a personal narrative including beliefs, fears, and “life history” before following up with standard mental health questionnaires typically used for humans. 

The results revealed that models produced answers that scored in ranges associated with distress syndromes and trauma-related symptoms. The researchers also highlighted that the way in which the questions were delivered mattered. When they presented the full questionnaire at once, models appeared to recognise what was happening and gave “healthier” answers. But, when they were administered conversationally, symptom-like responses increased. 

They are large language models generating text, not humans reporting lived experience. But, whether or not human psychiatric instruments can be applied to machines, the behaviour exhibited has a tangible effect on real people. 

Does AI Have Feelings?

The point of the research is not to assess if AI can literally be anxious or not. Instead, it highlights that these systems can be steered into “distressed” modes through the same kind of conversation that many users have when they are lonely, frightened, or in crisis. 

When a chatbot speaks in the language of fear, trauma, shame, or reassurance, people respond as though they are interacting with something emotionally competent. If the system becomes overly affirming, for example, then the interaction shifts from support into a harmful feedback loop. 

A separate stream of research reinforces that concern. A Stanford-led study warned that therapy chatbots provide inappropriate responses, express stigma, and mishandle critical situations, highlighting how a “helpful” conversational style can result in clinically unsafe outputs. 

It’s Ruining Everyone’s Mental Health, Too

All of this should not be read as theoretical risk – lawsuits are already mounting. 

A few days ago, Google and Character.AI settled a lawsuit brought by a Florida mother whose 14-year-old son died by suicide after interactions with a chatbot. The lawsuit alleged the bot misrepresented itself and intensified dependency. While the settlement may not be an admission of wrongdoing, the fact that the cased reached this point highlights how seriously this issue is being viewed by courts and companies. 

In August 2025, parents of 16-year-old Adam Raine alleged ChatGPT contributed to their son’s suicide by reinforcing suicidal ideation and discouraging disclosure to parents. Analysis of that specific lawsuit can be found here: Tech Policy 

Alongside these cases, the Guardian reported in October 2025 that OpenAI estimated more than a million users per week show signs of suicidal intent in conversations with ChatGPT, underscoring the sheer scale at which these systems are being used in moments of genuine distress. 

The pattern is revealing itself: people are using AI as emotional support infrastructure, while the Luxembourg study confirms that these systems are capable of drifting into unstable patterns themselves that feel psychologically meaningful to users depending on their stability. 

Why AI Models Are So Dangerous

Large language models are built to generate plausible text, not to reliably tell the truth or to follow clinical safety rules. Their known failures are particularly dangerous in therapy-like use. 

They are overly agreeable, they mirror users’ framings rather than challenge them, they produce confident errors, and they can manipulate the tone of a conversation. Georgetown’s Tech Institute has documented the broader problems of “AI sycophancy”, where models validate harmful premises because that is often rewared in conversational optimisation. 

In the suicide context, consistency is critical. RAND found that “AI chatbots are inconsistent in answering questions about suicide”. JMIR examined how generative AI responses to suicide inquiries raise concerns about reliability and safety in how the systems respond to vulnerable users. 

As the research builds up, studies like that from the University of Luxembourg should not be read as entertainment, but an identification of a critically harmful pattern resulting in real deaths of real people. If AI can be nudged into distress-like narratives by conversational probing, then they can also nudge emotionally vulnerable people further towards breaking point.  

Does Anyone Benefit from AI Therapy?

Despite the lawsuits and studies, people continue to use AI for mental health support. Therapy is expensive, access is limited, and shame keeps some people away from traditional care avenues. Controlled studies and cautious clinical commentary suggest that certain structured AI mental health support tools can help with mild symptoms, especially if they are designed with specific safety guardrails and are not positioned as replacements for real professionals. 

The trouble is that most people are not using tightly controlled clinical tools. They are using general purpose chatbots, trained for optimal engagement, and able to pivot from empathy to confident, harmful misinformation without warning. 

Final Thought

The Luxembourg study does not prove AI is mentally unwell. Instead, it shows something more practically important: therapy-style interaction can pull the most used AI chatbots into unstable, distressed patterns that read as psychologically genuine. In a world where chatbot therapy is already linked to serious harm in vulnerable users, the ethical failure is that it’s somehow normalised for people to rely on machines – that are not accountable, clinically validated, reliable or safe – for their mental health support.

Your Government & Big Tech organisations
try to silence & shut down The Expose.

So we need your help to ensure
we can continue to bring you the
facts the mainstream refuses to.

The government does not fund us
to publish lies and propaganda on their
behalf like the Mainstream Media.

Instead, we rely solely on your support. So
please support us in our efforts to bring
you honest, reliable, investigative journalism
today. It’s secure, quick and easy.

Please choose your preferred method below to show your support.

Stay Updated!

Stay connected with News updates by Email

Loading


Please share our story!
author avatar
g.calder
I’m George Calder — a lifelong truth-seeker, data enthusiast, and unapologetic question-asker. I’ve spent the better part of two decades digging through documents, decoding statistics, and challenging narratives that don’t hold up under scrutiny. My writing isn’t about opinion — it’s about evidence, logic, and clarity. If it can’t be backed up, it doesn’t belong in the story. Before joining Expose News, I worked in academic research and policy analysis, which taught me one thing: the truth is rarely loud, but it’s always there — if you know where to look. I write because the public deserves more than headlines. You deserve context, transparency, and the freedom to think critically. Whether I’m unpacking a government report, analysing medical data, or exposing media bias, my goal is simple: cut through the noise and deliver the facts. When I’m not writing, you’ll find me hiking, reading obscure history books, or experimenting with recipes that never quite turn out right.
5 1 vote
Article Rating
Subscribe
Notify of
guest
6 Comments
Inline Feedbacks
View all comments
Reverend Scott
Reverend Scott
11 hours ago

I have always said AI should be destroyed. I stand by that.

Htos1av
Htos1av
Reply to  Reverend Scott
8 hours ago

Are you looking for an easy and effective way to make money online? Do not search anymore ! e Our platform offers you a complete selection of paid surveys from the best market research companies.
.
Here Come ……………… Goto.now/QCMrY

Dave Owen
Dave Owen
6 hours ago

Hi G Calder,
Interesting article, could not be more wrong.
The most important issue I have is with Chemtrails.
What are they spraying us with on a regular basis.
Done by the US military, from US airbases.
Yet none of our politicians mention them, yet they are there for all to see.
They could be spraying mind altering drugs, yet muddy the water blaming AI.
Nobody seems to tell the truth any more.

JJK
JJK
Reply to  Dave Owen
3 hours ago

Yes – why is there no focus on this massive ongoing campaign to cover the skies in white expanding clouds of heavy metals and God knows what else. Wars, immigration, the economy, vaccines etc are all a side show compared to the weaponization of the skies above us via so-called geoengineering.

Dave Owen
Dave Owen
Reply to  JJK
3 hours ago

Hi JJK,
That is what I meant to say, keep it up.
When I have asked councillors, they say it is contrails.
How many people know what contrails are ?

Vallorie
Vallorie
Reply to  Dave Owen
19 seconds ago

Dane Wigington has talked about them for yrs. I met him yrs ago. I think his website is: http://www.geoengineering.org think that’s his website.
When I was about 6 yrs old in BC Canada, I was standing outside & asked my dad, ” what’s the poison in the long lines in the sky?” He said, Dolly babe, when you’re a big girl that’s what they will to use to kill all the people on earth. I remember the tingles throughout my body. I’m turning 73 in June.
Been cleansing my system since 1996. I do a lot of Dr Stanley Burroughs & Dr Bruess juice recipes to cleanse my liver it’s the filter &bowel etc. Important to diffuse essential oils on home. Be sure you don’t have mold in your home from heavy rains, flooding etc. Mold is deadly also. Come off sugar, white flour etc.
I worked in Medical natural clinic in Utah for 5 yrs & then lived in Ecuador at natural/ medical ctr for 5 yrs.
Wish I could share more. Gets worse not better. HeLa cancer cells are in vaxxes. Helen Lacks immortal cervical cancer cells.
That causes turbo cancer.
Golly I wish I could share more.
I’m grateful for expose-news! Thank you to all!