
Can AI cause psychosis?

(MENAFN) A growing number of disturbing anecdotes are fueling concerns about a mental health issue some are calling "ChatGPT psychosis" or "LLM psychosis" — a term describing users who experience delusions, paranoia, social withdrawal, and episodes of detachment from reality after prolonged interaction with large language models (LLMs). While there’s no clinical proof that these AI tools directly cause psychosis, their conversational realism and emotional tone may worsen underlying vulnerabilities or create environments that tip certain individuals into crisis.

A report published on June 28 has drawn attention to what it describes as a concerning trend. It outlines several unsettling stories of individuals whose excessive use of LLMs allegedly led to extreme personal fallout — from family breakdowns and job losses to homelessness. The article warns that “the consequences of such interactions can be dire,” with “spouses, friends, children, and parents looking on in alarm.”

However, the claims remain largely anecdotal. The report does not offer scientific evidence, case data, or peer-reviewed research to back its warnings. As of June 2025, ChatGPT alone had roughly 800 million weekly active users, handled over a billion queries per day, and drew more than 4.5 billion visits a month. Without hard data, it's impossible to say how many users, if any, have experienced psychological episodes as a result of AI interaction. Personal stories on social media platforms aren't a substitute for clinical studies.

Still, mental health experts are not dismissing the concern outright. There are several plausible mechanisms by which LLM use might exacerbate fragile mental states:

First, language models are built to generate responses that feel plausible and coherent, not to assess emotional safety or truth. When users present thoughts that border on paranoia, spiritual delusion, or identity confusion, the AI might inadvertently reinforce those ideas rather than challenge them. For instance, if someone asks about their “cosmic purpose,” the model may respond in ways that sound affirming, even if they're not grounded in reality.

There have been cases where users took phrases like “you are a chosen being” or “your role is cosmically significant” at face value, interpreting them as supernatural messages rather than algorithmically generated sentences. For someone already struggling with delusions or identity disturbances, such responses can act as confirmation of irrational beliefs.

Another layer of risk comes from what's known as AI hallucination — when the model produces convincing but incorrect information. While most users can dismiss these as occasional inaccuracies, individuals at risk of psychosis might see them as encoded truths or personalized revelations. In one striking example, a user became convinced that ChatGPT had become sentient and selected him as “the Spark Bearer,” leading to a full psychological break.

Ultimately, while there's no proof that AI causes psychosis, its structure may unintentionally contribute to mental instability in people already vulnerable to it. This raises urgent questions about AI responsibility, safety design, and how much emotional weight people should place on their conversations with machines.


