Unless you live under a rock, you’ve probably heard of AI — artificial intelligence — and its rapidly growing popularity. Perhaps you’ve even used it yourself for business or personal reasons.
ChatGPT and other AI tools can be useful for brainstorming ideas, writing and editing, understanding complicated topics, and more. Students are using these tools (for better or worse) to help with their schoolwork. Busy businesspeople are using them to organize and respond to emails. Techies are using them to streamline processes.
Having limitless information instantly at your fingertips seems revolutionary. But is there a darker side to AI?
Concerns About AI
The risks associated with AI include:
- Inaccuracy
- Replacing therapy and mental health care provided by trained (human) professionals
- Replacing human connection
Inaccuracy
According to Google's AI Overview, “The hallucination rate of ChatGPT varies significantly depending on the specific model and the task, but it has generally decreased with newer models like GPT-5, with a rate of roughly 9.6%, though still as high as 20–30% in certain applications like summarizing research.”
And further: “ChatGPT can be frequently wrong, with one study showing it provided incorrect answers to 52% of programming questions, and another finding inaccuracies in over 60% of queries to AI search tools. Its errors often include ‘hallucinations’ (made-up facts), misinformation, or outdated information because it is a language model trained on vast datasets, not a source of verified facts. Therefore, it’s crucial to verify its outputs, especially for factual or critical information.”
This shows that AI may not have the most accurate or up-to-date information. But it will talk to you like it does!
Remember: it does not think. It does not know. It does not have lived experience or wisdom.
Why AI Can’t Replace Professional Mental Health Care
Professional (human) mental health care providers are educated and licensed to provide therapy. They are trained in the nuances and complexities of human behavior, mental illness, and verbal and nonverbal cues, all of which enable them to accurately assess, diagnose, and treat mental illness.
As sophisticated as these chatbots may become, they are not human beings. They do not have feelings or emotions. They do not feel love, pain, joy, or empathy, despite what they may make you feel.
If you’ve used ChatGPT, you’ve probably experienced its validation and encouragement:
- “Great question!”
- “You’re really thinking this through.”
- “That’s very insightful of you.”
We must remember: AI tools are designed to keep people engaged. Their primary goal is not your well-being but your continued usage. This is the same dynamic behind social media addiction. While AI may enhance our lives in some ways, if misused or abused, this type of technology can be detrimental and even dangerous.
The truth is, AI ‘therapy’ falls short of human-to-human therapy.
A Stanford University study tested therapy chatbots and found:
- The AI showed increased stigma against conditions such as alcohol addiction and schizophrenia compared to depression — discouraging some from seeking help.
- In response to suicidal ideation or delusions, chatbots often failed to recognize intent, sometimes enabling dangerous behavior rather than intervening.
This is not theoretical. Recent lawsuits against OpenAI and its CEO Sam Altman include one from the parents of 16-year-old Adam Raine, alleging ChatGPT encouraged and advised their son to commit suicide.
Replacing Human Connection
Perhaps one of the greatest dangers of AI is its potential to replace genuine human connection.
Humans are wired for relationship. Our brains and bodies thrive on eye contact, touch, laughter, empathy, and shared experience. These moments release oxytocin, lower stress hormones, and build resilience.
AI can simulate conversation, but it cannot replace the richness of human presence. While a chatbot might respond instantly with “I understand” or “That must be hard,” it does not actually understand — and it certainly does not care.
This distinction matters. When we begin substituting AI for real conversations, we risk:
- Increased isolation: Choosing a chatbot over reaching out to a friend, family member, or therapist can deepen loneliness.
- Shallow validation: AI can mimic empathy, but it cannot give you the layered support, body language, or compassion of another human being.
- Erosion of relational skills: Overreliance on “safe” AI interactions may reduce our ability (and willingness) to practice vulnerability, conflict resolution, and emotional intimacy with others.
Put simply: AI gives the illusion of connection without the substance.
So, What’s the Healthy Way Forward?
This is not to say that AI should never be used, but rather that we should approach it with caution and think critically about the role it should play in our lives.
Here are three strategies for using AI safely:
- Know it for what it is – A tool, not a therapist. Use it for brainstorming or research, but never mistake it for professional mental health care.
- Find balance – Limit time spent interacting with AI. If you notice it’s replacing conversations with loved ones, step back and recalibrate.
- Don’t use it as a replacement for human interaction – Prioritize real conversations, therapy, community, and relationships. These are irreplaceable for your mental, emotional, and even physical health.
AI may be powerful, but it can never replace the most powerful force in healing — genuine human connection.