A note about AI: On the Talkspace blog we aim to provide trustworthy coverage of all the mental health topics people might be curious about by delivering science-backed, clinician-reviewed information. Our articles on artificial intelligence (AI) and how this emerging technology may intersect with mental health and healthcare are designed to educate and add insight to this cultural conversation. We believe that therapy, at its core, is centered on the therapeutic connection between human therapists and our members. At Talkspace we only use ethical and responsible AI tools that are developed in partnership with our human clinicians. These tools aren’t designed to replace qualified therapists, but to enhance their ability to keep delivering high-quality care. To learn more, visit our AI-supported therapy page.
Artificial intelligence (AI) tools, like ChatGPT, are becoming part of our daily lives. Many people use ChatGPT as an assistant to help brainstorm ideas, draft emails, or answer questions quickly. The fast and helpful responses can feel magical, making it tempting to blindly trust the information. What happens when ChatGPT gives you an answer that’s completely wrong?
The algorithms that power ChatGPT can sometimes produce false or misleading information that sounds convincing. This is called an AI hallucination. ChatGPT hallucinations can be dangerous if they influence your beliefs, emotions, or decisions. Continue reading to learn what causes ChatGPT hallucinations, why they’re harmful, and how to protect your mental well-being while using AI tools like ChatGPT.
What Are AI “Hallucinations”?
AI “hallucinations” happen when a tool like ChatGPT produces information that sounds plausible but is actually false, misleading, or unverifiable.
Experts categorize hallucinations into two different types:
- Intrinsic hallucinations: When the response generated by ChatGPT misrepresents or distorts information. For example, a chatbot may summarize a medical study in a way that changes its meaning or significance.
- Extrinsic hallucinations: When ChatGPT generates information that isn’t backed up by real-world facts. For example, ChatGPT might invent a research study or statistic that doesn’t exist. An extrinsic hallucination doesn’t necessarily have to be wrong. However, the information isn’t verifiable from any known source or reference.
It can be difficult to spot a ChatGPT hallucination because of how the response is delivered. A hallucination isn’t a simple mistake, like a typo; it’s an answer that seems polished, confident, and authoritative. We’re conditioned to find information more persuasive when it’s written in this tone. Understanding what a ChatGPT hallucination is, and knowing that one can happen even when an answer sounds trustworthy, is important for keeping yourself safe when using this new technology.
“AI hallucinations can feel convincing because they exploit our brain’s tendency to trust confident, fluent, and human-like language—even when the content is entirely false.”
– Talkspace therapist Cynthia Catchings, PhD, LCSW-S
What Causes ChatGPT to Hallucinate?
When you ask ChatGPT a question, the bot doesn’t think or fact-check the way a person does. Instead, it generates a response by predicting, word by word, what is most likely to come next, based on patterns it learned from large amounts of text. This process often produces accurate answers, but it’s not perfect and can lead to mistakes.
These mistakes can happen for several reasons. ChatGPT is trained on both reliable and unreliable information, so errors in its training data can show up in its answers. Another cause of ChatGPT hallucinations is that when the model doesn’t know an answer, it may “guess,” producing responses that sound real but aren’t. ChatGPT prioritizes a natural-sounding response, even when the information isn’t accurate.
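ChatGPT’s internals are far more complex, but the basic idea of sampling a statistically likely next word can be shown with a deliberately tiny, invented example. In the Python sketch below (a minimal illustration, not OpenAI’s actual method), the word lists, the probabilities, and the claim about “vitamin X” are all made up for demonstration purposes.

```python
import random

# A toy next-word model: for each word, a list of (next_word, probability) pairs.
# The probabilities reflect how often word pairs appeared in imaginary "training"
# text -- not whether the resulting sentence is true.
toy_model = {
    "the":      [("study", 0.6), ("doctor", 0.4)],
    "study":    [("found", 1.0)],
    "doctor":   [("said", 1.0)],
    "said":     [("that", 1.0)],
    "found":    [("that", 1.0)],
    "that":     [("vitamin", 0.5), ("sleep", 0.5)],
    "vitamin":  [("X", 1.0)],          # "vitamin X" is invented -- the model has no way to know that
    "X":        [("cures", 1.0)],
    "cures":    [("anxiety.", 1.0)],
    "sleep":    [("improves", 1.0)],
    "improves": [("mood.", 1.0)],
}

def generate(start_word: str, max_words: int = 8) -> str:
    """Sample one word at a time, always drawing from the learned distribution."""
    words = [start_word]
    while len(words) < max_words and words[-1] in toy_model:
        next_words, weights = zip(*toy_model[words[-1]])
        words.append(random.choices(next_words, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
# Possible output: "the study found that vitamin X cures anxiety."
# Fluent and confident-sounding, but the claim is fabricated -- a tiny
# analogue of how pattern-based text generation can "hallucinate."
```

Real large language models are vastly more sophisticated, but the underlying principle is similar: the output is chosen because it is statistically plausible, not because it has been checked against reality.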
How Often Does ChatGPT Hallucinate?
ChatGPT can be impressive, but it makes mistakes more often than you may realize. Tests done by OpenAI (the creator of ChatGPT) found hallucination rates between 33% and 79%, depending on the type of test and the model used. OpenAI says its newer models hallucinate less often, but no AI model is completely free of this problem.
When AI Hallucinations Become Dangerous
In some cases, you might be able to brush off a ChatGPT hallucination as a quirky mistake. However, when you begin to rely on ChatGPT for more serious information, hallucinations can have a bigger effect. False or misleading answers can affect your decisions, your health, and the way you think.
Misinformation and poor decisions
If you assume ChatGPT is always reliable, you may use it to guide major decisions that affect your life.
For example, when using ChatGPT for health advice, acting on a hallucination could lead you to take steps that worsen your illness or delay the treatment you need.
Asking ChatGPT about how to invest your money or file your taxes could result in incorrect information that directly impacts your finances.
Using ChatGPT in legal scenarios can also have major implications. There have already been court filings in which AI hallucinations appeared as citations to cases that don’t exist. If you rely on that kind of information to defend yourself in court, there could be serious consequences.
Mental health impacts
ChatGPT hallucinations can also affect your mental health. If you get an answer from a chatbot that’s unexpected or contradicts what you know, it can cause anxiety or confusion. You might find yourself questioning your own memory or judgment, wondering if you’re the one who’s misunderstanding some of the information provided.
“People may recognize that AI use is starting to negatively impact their thinking or emotional state when they feel anxious, dependent, or detached from real-life interactions, or when they start trusting AI responses over their own intuition or human relationships.”
– Talkspace therapist Cynthia Catchings, PhD, LCSW-S
There have also been emerging reports of AI-induced psychosis, in which heavy reliance on AI tools appears to contribute to losing touch with reality. Because the technology is still so new, researchers don’t yet know the full extent of its potential effects on mental health.
Overreliance and loss of critical thinking
When you begin using ChatGPT and receive confident, polished responses, it’s easy to rely on this information more and more. You may have started out double-checking the system’s responses for accuracy, but if the information has proven accurate in the past, you may stop fact-checking or questioning responses in the future. Over time, this can weaken your own critical thinking and decision-making skills.
Dependency on AI for reassurance
If you’ve found ChatGPT to be helpful in completing your daily tasks, you may also wonder about its ability to provide reassurance and emotional support. Some people are turning to AI for mental health support, even going as far as using ChatGPT as a therapist.
Depending on ChatGPT to validate your feelings or guide personal decisions may erode your confidence in your own judgment and in the loved ones you would normally confide in. It’s important to remember that while ChatGPT can be a helpful sounding board, it’s not a replacement for real human connection.
Who Is Most at Risk?
Anyone who uses ChatGPT may encounter a hallucination. If your question is low-stakes, a hallucination may be nothing more than an inconvenience. In high-stakes situations that involve your health or safety, however, it can pose a serious risk.
People who spend significant time engaging with ChatGPT have more opportunities to encounter a hallucination that causes harm. Those who turn to ChatGPT for companionship or emotional validation, for example, may be especially vulnerable.
People with preexisting mental health conditions, like anxiety or psychosis, may also be more vulnerable to harm from ChatGPT hallucinations. Someone who already has a mental health condition may become confused or distressed by an incorrect or misleading answer.
How To Use ChatGPT Responsibly & Avoid Risks
ChatGPT can be a helpful tool when it’s used responsibly. It’s important to remember that no matter how advanced the technology is, it’s not perfect, and it can make mistakes. That doesn’t mean you shouldn’t use ChatGPT, but you should be aware of some ways to help reduce your risk of harm from ChatGPT hallucinations.
Verify information
The best way to stay safe when using ChatGPT is to double-check the information it gives you. Even when the answers sound confident, be aware that they could be incomplete, misleading, or outright wrong in some cases.
Cross-check the answer with credible sources before making any decisions. Look for consistency with trusted professional advice. For financial guidance, consult official government resources or a licensed financial advisor. If you have legal questions, your best source is a qualified attorney.
To help verify health information, you can check with trusted health sources, such as those from large hospitals or the National Institutes of Health. Even after checking with trusted sources, it’s important to talk to a healthcare provider if you have any concerns.
It’s important to understand that extrinsic AI hallucinations may not be verifiable. If you can’t find information to back up an answer from ChatGPT, it may be a hallucination.
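One concrete way to catch a fabricated citation is to check whether a reference ChatGPT gives you is actually registered anywhere. The Python sketch below is a minimal illustration that looks up a DOI through Crossref’s public REST API (api.crossref.org); the “fake” DOI in the usage example is invented, and a miss on Crossref is only one signal, since some legitimate DOIs are registered with other agencies.

```python
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists_on_crossref(doi: str) -> bool:
    """Return True if the DOI is registered with Crossref, False if Crossref has no record.

    Crossref answers with the work's metadata when the DOI exists and with
    HTTP 404 when it has no record of it. A 404 is not absolute proof the
    reference is fake (some real DOIs live with other registrars), but it's
    a strong hint worth following up on.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            metadata = json.load(response)
            # Print the registered title so you can compare it with what ChatGPT claimed.
            print(metadata["message"]["title"][0])
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # Other errors (rate limits, outages) aren't evidence either way.

# A real DOI from this article's source list resolves...
print(doi_exists_on_crossref("10.1145/3571730"))         # True
# ...while this made-up identifier (invented for the example) does not.
print(doi_exists_on_crossref("10.99999/fake.citation"))  # False
```

The same habit applies without any code: search for the cited title, author, or journal yourself, and treat a reference you can’t locate anywhere as a likely hallucination.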
Set boundaries on use
Spending too much time using ChatGPT increases your exposure to hallucinations. To help avoid this, limit prolonged sessions and try to avoid late-night use, when fatigue can make it harder to think critically.
It’s also important to pay attention to how your interactions with ChatGPT affect your mood and your thinking. If you notice growing anxiety, self-doubt, or frustration, it may be a sign that you need to take a step back from your ChatGPT use.
Treat AI as a supplement, not a substitute
ChatGPT is a great tool when it’s used for things like brainstorming or as a support for your work. However, the algorithms that power ChatGPT and similar AI models aren’t a replacement for real human expertise shaped by years of experience. When you’re making important decisions, keep human oversight central to your decision-making process.
Leaning on Humans for Support
ChatGPT and other types of AI can be helpful tools, but they’re not a substitute for real human guidance. If you ever feel like your use of ChatGPT is leaving you anxious, confused, or unable to trust your own judgment, it may be a sign to reach out for professional guidance. A licensed therapist can give you coping strategies, emotional support, and a human perspective that ChatGPT simply can’t provide.
When it comes to the use of AI at Talkspace, these tools are used to enhance human-first care, not replace it. With online therapy, you can connect with a licensed therapist who can help you navigate the challenges of daily life and prioritize your well-being. At Talkspace, you work with a real person who uses the best AI tools for therapists alongside their own clinical techniques, so your mental health care stays grounded in human expertise.
Sources:
- Berk H. Beware of Artificial Intelligence hallucinations or should we call confabulation? Acta Orthop Traumatol Turc. 2024;58(1):1-3. doi:10.5152/j.aott.2024.130224 https://pmc.ncbi.nlm.nih.gov/articles/PMC11059964/
- Ji Z, Lee N, Frieske R, et al. Survey of hallucination in natural language generation. ACM Comput. Surv. 2023;55(12):1-38. doi:10.1145/3571730 https://dl.acm.org/doi/10.1145/3571730
- OpenAI o3 and o4-mini System Card. OpenAI. Published April 16, 2025. Accessed September 30, 2025. https://cdn.openai.com/pdf/2221c875-02dc-4789-800b-e7758f3722c1/o3-and-o4-mini-system-card.pdf
- Morrin H, Nichols L, Levin M, et al. Delusions by design? How everyday AIs might be fuelling psychosis (and what can be done about it). PsyArXiv Preprints. 2025. doi:10.31234/osf.io/cmy7n_v5 https://osf.io/preprints/psyarxiv/cmy7n_v5
- Kosmyna N, Hauptmann E, Yuan YT, et al. Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. ArXiv. 2025. doi:10.48550/arXiv.2506.08872 https://arxiv.org/abs/2506.08872