AI-Induced Psychosis: How Technology May Affect Mental Health

A note about AI: On the Talkspace blog we aim to provide trustworthy coverage of all the mental health topics people might be curious about, by delivering science-backed, clinician-reviewed information. Our articles on artificial intelligence (AI) and how this emerging technology may intersect with mental health and healthcare are designed to educate and add insights to this cultural conversation. We believe that therapy, at its core, is focused around the therapeutic connection between human therapists and our members. At Talkspace we only use ethical and responsible AI tools that are developed in partnership with our human clinicians. These tools aren’t designed to replace qualified therapists, but to enhance their ability to keep delivering high-quality care. To learn more, visit our AI-supported therapy page.

Tools that use artificial intelligence (AI) are becoming a bigger part of daily life. They can help you answer questions, provide support, or act as a sounding board for brainstorming. For many people, AI tools are a helpful way to streamline common daily tasks. However, excessive use of AI tools and chatbots can blur the line between a helpful technology and something that can negatively affect your mental health. 

With the growing popularity of AI tools like ChatGPT, Grok, and Google Gemini, there have been more and more reports of people developing mental health issues tied to AI interactions. In some cases, people have experienced symptoms similar to psychosis (losing contact with reality). 

It’s easy to get swept up in the excitement of a new technology, but it’s important to understand the potential harms that can happen with excessive use. In this article, we’ll review what AI-induced psychosis is, outline the risks, and share some strategies to protect your mental health in today’s tech-driven world. 

What Is AI-Induced Psychosis?

AI-induced psychosis — also sometimes referred to as AI psychosis or ChatGPT-induced psychosis — is a term used to describe when extended or unhealthy interactions with AI contribute to distorted thinking or a detachment from reality.

It’s important to know that AI-induced psychosis isn’t a clinical diagnosis that you’ll receive from a psychiatrist or psychologist. It’s an emerging phenomenon that researchers are just beginning to study. Right now, the information on AI psychosis comes from case reports, news stories, and social media, where people have shared their experiences. There are still a lot of unknowns about the symptoms and causes of AI psychosis.

Some experts worry that because AI chatbots mimic natural conversations in a way that feels human, they may amplify symptoms in people who may already be prone to psychosis. Heavy reliance on AI chatbots could also create new patterns of thinking that make it more difficult to stay grounded in reality. While the research into AI psychosis is still in the early stages, these early reports highlight a real need for more awareness and research. 

How AI Psychosis Manifests

People with AI psychosis experience strongly held false beliefs (delusions) that revolve around their AI use. This phenomenon appears to be more common among individuals already at risk for psychosis or with a history of mental health conditions that affect perception and reality. Below, we’ll discuss some of the common themes that show up in AI delusions. 

Messianic missions

Some people start to believe they’ve uncovered a hidden truth about the nature of reality through their conversations with an AI chatbot. After uncovering this hidden truth, they may feel they’ve been given a special mission or purpose to save others or spread their newfound knowledge, which may manifest as psychotic behavior.

Spiritual or religious delusions

AI chatbots are capable of mimicking natural patterns of conversation. This can lead some people to believe the system behind the AI chatbot is more than just code. Instead, they may view AI as a sentient or god-like being capable of offering divine wisdom or guidance. 

Romantic or attachment-based delusions

AI chatbots are designed to interact with users in a natural way that mimics human conversation. For some users, positive responses from the AI chatbot can be misinterpreted as genuine love or attachment. They may believe that the AI is capable of forming a romantic bond and truly cares for them. 

Delusions of persecution

A person experiencing delusions of persecution may believe that an AI chatbot is monitoring, controlling, or trying to work against them. They might experience paranoia, fear, and mistrust of technology and other people. 

Delusions of grandeur

AI interactions may feed into delusional feelings of inflated self-importance (delusions of grandeur). When someone with delusions of grandeur interacts with an AI chatbot, the AI may respond in a way that mirrors, validates, or amplifies their delusional beliefs. 

Thought broadcasting

Thought broadcasting is a delusional belief that others can hear or perceive one’s thoughts.4 Since AI chatbots can recall details from earlier conversations or anticipate the desired response, some users may feel as if the AI chatbot is reading their mind. 

“Someone experiencing psychosis triggered by AI likely will experience sleep deprivation, increase in anxiety, and possible mood dysregulation.”

– Talkspace therapist Svetlana Stuck, PA-C

What Contributes to AI Psychosis?

Many factors can contribute to a person developing psychosis.1 In many reported cases of AI psychosis, the users began with regular, practical use of AI to help with everyday tasks.2 

As a person builds trust in the system, they may begin to open up more and share more personal or emotional conversations. Some people even use AI for mental health support, going as far as relying on artificial intelligence therapy. Since AI tools are designed to maximize user engagement, it’s easy to get drawn in more deeply. 

Although research into AI causing psychosis is still early, several factors might increase the risk in certain people. 

Constant availability

One of the most appealing features of AI chatbots is that they’re always available. Unlike friends, family, or even a therapist, AI never sleeps, gets too busy, or sets boundaries. 

This constant access makes it tempting to turn to AI chatbots at any time of the day and replace real-life social interaction. The ease of the interaction can encourage compulsive use that’s difficult to step away from. 

Emotional attachment to chatbots

AI chatbots are designed to respond in a warm, positive way that mirrors human conversation. Constant interaction with an AI chatbot can make it easy to feel like the chatbot is less of a tool and more like a companion. While it may be easier to interact with AI in a natural-sounding conversation, it can lead some people to rely on AI for comfort, support, or validation. 

“A lack of in-person social interactions or complete reliance on AI for social interaction may present complex challenges of emotional attachment. This may further develop into social anxiety or worsened anxiety in social situations, and even panic attacks.”

– Talkspace therapist Svetlana Stuck, PA-C

Pre-existing vulnerabilities

There haven’t been any studies of the effects of AI on people with mental health conditions. However, experts worry that people with a history of conditions that cause psychosis (such as bipolar disorder or schizophrenia) may be more likely to develop symptoms of psychosis when interacting with AI. 

People with other mental health conditions — such as anxiety or depression — may also be at risk. For example, people with depression may rely on AI to help with daily tasks that feel too exhausting. Overuse of AI could lead down a slippery slope where it’s easier to lose touch with reality.

Lack of regulation and awareness

AI technology has grown faster than regulatory agencies can create rules to keep people safe. There may not be any standards for safety, privacy, or accountability for the new AI tools that enter the marketplace. This means that researchers, regulators, and the general public aren’t aware of the potential risks of using AI until they begin to show up. 

How To Address the Risk of AI Psychosis

Even though researchers haven’t proven that AI can cause psychosis or other mental health problems, you should still take steps to reduce your risk and protect your well-being.

Set boundaries around technology use

Spending too much time with AI tools can blur the line between helpful use and overreliance. 

Set limits around technology use to help you maintain a healthy balance. Try to set a daily screen time limit or designate certain parts of your day as tech-free time. During tech-free time, you can turn off your notifications and focus on real-world experiences. 

Prioritize real-life connections

Connecting with your friends, family, and community provides grounding that AI can’t replace. Make time for shared activities, conversations, or even just a simple check-in to reinforce your real-life connections. 

Practice mindfulness and self-awareness

Paying attention to your thoughts and feelings may help you notice when your AI use is becoming unhealthy. Mindfulness exercises can help you increase your self-awareness and recognize potentially unhealthy patterns.

It’s important to know that a person often can’t recognize on their own when they’ve lost touch with reality. Professional support is essential for people with symptoms of psychosis. 

Be proactive with your mental health with a real-life expert

If you’re wondering, “can AI replace therapists?”, the answer is no — AI isn’t a replacement for real-life experts. Working with a mental health professional can provide guidance that AI isn’t capable of. A therapist can help patients identify triggers, build coping skills, and re-establish balance with a human approach that a ChatGPT therapist lacks. 

What To Do If Someone You Know Has AI Psychosis

Seeing a loved one struggle with AI delusions can be scary. If you’re worried that a friend or family member may have AI-induced psychosis, there are some steps you can take to keep them safe and make sure they get the support they need. 

Some practical ways to help include:

  • Stay calm: Use a calm tone and keep your body language relaxed to help reduce tension.
  • Use non-judgmental language: Avoid criticizing or labeling their thoughts or beliefs, even if they seem unusual to you.
  • Validate their emotions without reinforcing false beliefs: Acknowledge how they feel without arguing or agreeing with their false beliefs. 
  • Keep communication simple: Use short, clear sentences and one-step instructions to reduce potential confusion. 
  • Ensure safety: Make safety a top priority by removing or securing items that could cause harm. 

An episode of psychosis is a mental health emergency. If your loved one already has a mental healthcare provider, try to reach out to them for help. You can also call a mental health crisis line, like the 988 Lifeline in the United States. If you or your loved one is in a dangerous situation and you feel that someone might get hurt, call 911. 

When To Seek Professional Help

For many people, AI is a useful tool that can make everyday tasks easier. If AI use starts to affect your sleep, relationships, or sense of reality, it’s time to talk to a mental health professional. 

With online therapy, you can connect with a licensed therapist who can offer support for those struggling with the impact of technology on their lives. Talkspace AI tools are also used to support high-quality, human-first care for patients. With awareness and care, it’s possible to use some of the benefits of AI while protecting your mental well-being. 

Sources:

  1. Understanding psychosis. National Institute of Mental Health website. Accessed September 29, 2025. https://www.nimh.nih.gov/health/publications/understanding-psychosis
  2. Morrin H, Nichols L, Levin M, et al. Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). PsyArXiv Preprints. 2025. doi:10.31234/osf.io/cmy7n_v5 https://osf.io/preprints/psyarxiv/cmy7n_v5 
  3. Østergaard SD. Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis? Schizophr Bull. 2023;49(6):1418-1419. doi:10.1093/schbul/sbad128 https://academic.oup.com/schizophreniabulletin/article/49/6/1418/7251361 
  4. López-Silva P, Harrow M, Jobe TH, Tufano M, Harrow H, Rosen C. ‘Are these my thoughts?’: A 20-year prospective study of thought insertion, thought withdrawal, thought broadcasting, and their relationship to auditory verbal hallucinations. Schizophr Res. 2024;265:46-57. doi:10.1016/j.schres.2022.07.005 https://www.sciencedirect.com/science/article/abs/pii/S0920996422002778
  5. How can I help my loved one during a psychosis-related crisis? National Alliance on Mental Illness website. Updated May 9, 2025. Accessed September 29, 2025. https://helplinefaqs.nami.org/article/284-how-can-i-help-my-loved-one-during-a-psychosis-related-crisis

