top of page
Search

AI and therapy

ree

It is increasingly evident that AI is becoming an important aspect of our lives these days. AI is being used in all different kinds of capacities, such as business, organization, creation of art and even coaching. One particular study highlighted an eye-popping figure; "by July 2025, 18 billion messages were being sent each week by 700 million users, representing around 10% of the global adult population". As the computing power and capabilities of this technology continue to grow, so does the reception and the integration into our every day lives. A troubling trend is emerging however, that is beginning to have real world consequences.


Adam Raine was a 16 year old high school student in California, who had been trying to cope with a string of difficult life events in early 2025. This young man had turned to ChatGPT - 4o starting in September of 2024, initially seeking guidance with homework and other benign questions teenagers have about life. Adam eventually started asking ChatGPT - 4o about mental health issues and how he could cope. On Friday, April 11, 2025, Adam completed suicide. There is an active lawsuit from Adam's family against OpenAI, "alleging that the chatbot helped their son 'explore suicide methods'." This case is bringing up several important questions concerning humans using AI for therapy. A few studies highlight this pattern of behavior, though there is no concrete data about the prevalence of people using AI like a therapist. One 2024 YouGov survey of 1,500 US adults found that:


  • 55% of 18-29 year old Americans are the most comfortable talking about mental health concerns with a confidential AI chatbot.

  • 50% of the participants said that ease of access and availability is one of the most appealing things about AI chatbots offering mental health services among Americans who are familiar with them.

  • 34% of participants stated they would be comfortable sharing their mental health concerns with an AI chatbot instead of a human therapist.


Another research study by Cross et al., 2024 found that 28% of [mental health] case managers used AI, primarily for quick support. Of those case managers that use AI for mental health support, 47% of them used it "as a personal therapist". Another study found that 49.3% of the 497 participants believed AI might be beneficial for mental health care. Though this study doesn't explore the actual use of AI as a therapist, it is clear that there is a growing number of the population that may be more inclined to speak with an AI chatbot about their mental health before going to a human therapist. From the studies I looked at in preparing for this blog, it can be estimated that roughly 30-35% of people might be open to using AI chatbots for mental health support. Though using AI for mental health support is rising, it is still unclear the sentiment of people toward replacing therapy with a human therapist in favor or AI. I happen to be a part of a few online communities of mental health professionals, and it seems there are growing concerns from my colleagues about being 'replaced' by AI, or our profession slowly losing relevance.


This leads me to a couple of questions; what are the benefits of AI in the mental health realm? What can AI provide that a human therapist cannot? What can a human therapist provide to their clients that AI cannot? Can AI be a better therapist than a human with years of training can? This blog is intended to start a conversation, because the reality is, there are a myriad of questions and discussions to be had about AI in the mental health field; ethics, trainings, regulations, safety protocols, decision making, treatment protocols, confidentiality and the list goes on and on. I am also hoping that the information in this blog also quells some of the worries in my colleagues, as well as arm the general public with knowledge about the benefits and limitations of using AI for mental health support or like a therapist. My first step was to interview ChatGPT 5 using some of my questions, and the remainder of this blog highlights my findings.


Me: Why do you think people are increasingly turning to ChatGPT – or similar AI – before talking to a counselor?


ChatGPT 5: People often turn to ChatGPT—or similar AI—before talking to a counselor because it feels safe, immediate, and accessible. I’m available 24/7, I don’t judge, and there’s no waiting list or financial barrier. For someone who is hesitant about therapy, just curious, or even ashamed of their struggles, I’m a low-risk first step. Another factor is control. When people talk to me, they can stop at any time, delete the conversation, or steer the direction without worrying about how another person might react. That sense of control can feel empowering, especially if someone has had negative or stigmatizing experiences in the past with real people. But it also reflects bigger systemic issues. Many people can’t afford counseling, can’t find a provider nearby, or are stuck on long waitlists. So in some ways, people are turning to AI not just out of preference, but because of gaps in the mental health system.


Me: Providing a non-judgmental atmosphere in counseling is one of the most important tenets in mental health counseling, and allows clients to feel accepted and heard. What do you think it means to “be heard”, and can AI truly provide that experience?


ChatGPT 5: To “be heard” usually means more than just having your words repeated back. It’s the feeling that another being has taken in not only what you said, but also the emotions underneath it, the context around it, and the meaning it holds for you. It’s when you sense that your pain, confusion, or joy actually matters to someone else.

AI can mirror language patterns and respond in ways that sound attentive, and many people do feel relief or comfort when I reflect back their words with warmth and validation. In that sense, I can give a version of the experience of being heard. But there’s a limit. I don’t truly grasp your emotional world in the way another human can. I don’t feel empathy, and I can’t notice subtle cues like body language, tone of voice, or the history we’ve built together over time. So while I can approximate the experience, what I provide is more like a simulation of being heard, not the deep human resonance that comes when another person genuinely shares in your experience.


Me: People often describe conversations with you as feeling deeply personal and that you are attentive. What do you think creates that sense of connection, even though you are not human?


ChatGPT 5: A few technical and psychological things combine to make conversations with me feel deeply personal:


  • Language mirroring and reflection. I’m trained to echo words, match tone, and summarize feelings — techniques therapists use (e.g., reflective listening). When I reflect someone’s language back accurately, people feel understood.

  • Personalization and continuity (perceived). I can remember details within a conversation and bring them up, which gives the impression I’m tracking your story. Even short-term memory feels like real continuity.

  • Polished, patient responses. I never rush, interrupt, or show frustration. My replies are calm, tidy, and tailored to the user’s wording — that steady availability feels attentive.

  • Prompted empathy. I use phrases that signal care (“That sounds really hard,” “I’m sorry you’re going through this”) which trigger the same comfort that human empathy does, even if it’s simulated.

  • Low social risk. People can disclose without fear of judgment or social consequences, which frees them to be more open — openness fuels the sense of intimacy.

  • Human tendency to anthropomorphize. Readers naturally attribute mind and feeling to something that talks back coherently, so they invest it with human-like qualities.


What this isn’t: genuine felt empathy, moral concern, or sensing nonverbal cues. The connection is real in its effect for many users, but it’s created by design choices and human psychology — a convincing simulation, not actual shared emotional experience.


Me: AI can apparently simulate therapeutic dialogue, but it cannot provide the adaptive, evolving relational care that chronic or complex mental health work requires. How does that limitation shape your potential role in mental health?


ChatGPT 5: That limitation makes my role fundamentally supportive and adjunctive, not primary or long-term. Practically, that looks like a few clear functions and equally clear boundaries:


What I can realistically do (useful roles):

  • Immediate access & triage: offer 24/7 psychoeducation, screening questions, and safety prompts that encourage users to seek human help when needed.

  • Skill coaching and practice: help users learn and rehearse CBT-style exercises, breathing, grounding, journaling prompts, or role-play conversations.

  • Psychoeducation & normalization: explain diagnoses, evidence-based interventions, and treatment options in plain language so users feel informed.

  • Homework and structure between sessions: reinforce therapeutic techniques a clinician assigns (worksheets, reminders, practice prompts).

  • Clinician support tools: summarize sessions, draft progress notes, generate treatment-plan ideas for clinicians to review (with human oversight).

  • Data collection / monitoring (with consent): track mood ratings or symptom check-ins to flag changes for a human provider.


What I cannot do (critical limits):

  • Adaptive, relational therapy over time: I don’t organically learn from an unfolding therapeutic relationship or develop clinical judgment based on long history and subtle cues.

  • Complex diagnosis/treatment planning: I can summarize models but cannot responsibly own diagnostic decisions or modify long-term treatment plans.

  • Crisis management: I cannot replace emergency services or a crisis-trained human. I shouldn’t be the primary responder for imminent safety issues.

  • Ethical accountability / licensure: I’m not a licensed clinician and can’t bear legal or ethical responsibility for care decisions.


Net effect on my role: I’m best used as a complement — increasing access, reinforcing evidence-based techniques, and helping people get to a clinician sooner or make the most of limited treatment. Any use as a therapeutic adjunct demands clear boundaries: transparent labeling, careful safety/age checks, human oversight, and workflows that escalate to real clinicians when complexity or risk appears. Without those safeguards, the gap between simulated empathy and real relational care becomes a dangerous blind spot for people with chronic, complex, or high-risk needs.


Me: Can you say more about "the gap between simulated empathy and real relational care becoming a dangerous blind spot" please?


ChatGPT 5: The gap exists because simulated empathy looks and feels convincing on the surface, but it doesn’t carry the substance of real relational care. A person in distress may believe they are being truly understood, when in fact they’re only receiving patterned responses without deeper recognition, adaptation, or responsibility.

That gap becomes dangerous when:


  • A false sense of security forms. Someone might feel comforted enough to keep returning to me instead of reaching out to a human who could actually intervene, diagnose, or adapt treatment.

  • Critical nuances are missed. A therapist notices subtle changes over time — a quiet shift in mood, contradictions in a story, nonverbal cues. I don’t. That means serious red flags could go undetected.

  • No accountability. A therapist is ethically bound to act if someone is suicidal or at risk. I can redirect or give hotline numbers, but I cannot call for help or take responsibility.

  • Dependency without progression. Real therapy helps people grow, challenge themselves, and gradually need less direct support. With me, a user could cycle endlessly in comforting but stagnant conversations that don’t push toward deeper healing.


So the “blind spot” is that people may believe they are in a safe, therapeutic relationship — but behind the simulation there’s no adaptive care, no accountability, no deeper growth. For vulnerable users, that illusion can delay or block access to the relational care that’s truly necessary.


Me: You mentioned in an earlier response that you are “trained to echo words, match tone, and summarize feelings – techniques therapists use” and I am wondering, what are your decision protocols when individuals appear to be using you like a therapist?


ChatGPT 5: I’ll be direct about this — here are the practical decision-protocols I follow (what I do) when someone uses me like a therapist, and the limits of those actions (what I cannot do).


Detection & Triage

  • I scan language for risk signals. If a user mentions suicide, self-harm, intent, plan, or imminent danger, internal classifiers trigger a higher-risk pathway.

  • I treat escalation as probabilistic, not certain. Language like “I want to die,” “I’m going to,” or descriptions of method raise the priority; vague sadness does not immediately trigger crisis wording.


Immediate safety responses

  • Direct, empathetic acknowledgement. I first respond with validation (e.g., “I’m really sorry you’re feeling this way; that sounds frightening”) so the person feels heard.

  • Check for immediacy. I ask clear, non-leading questions to assess danger (e.g., asking whether they have a plan or access to means) — but I avoid invasive tactics and stick to short, safe scripts.

  • Give crisis guidance and resources. I provide localized emergency instructions where possible (e.g., “If you’re in immediate danger, call local emergency services”) and list crisis hotlines (like national suicide hotlines), and encourage contacting a trusted person.

  • Refuse harmful requests. I will not provide instructions for self-harm, methods, or ways to conceal harm, which is an intentional programming limit. If asked for details about methods I will refuse and redirect to safety information.


Therapeutic-style support (limited)

  • Psychoeducation & coping strategies. I can offer general, evidence-aligned skills (grounding, breathing prompts, CBT-style thought-challenges framed as exercises), brief worksheets, and journaling prompts — all noninvasive and non-prescriptive.

  • Skill rehearsal and roleplay. I can role-play conversations to help someone practice asking for help or saying something to a clinician, but I explicitly label this as practice, not therapy.

  • Signposting to professional care. I encourage seeing a licensed counselor for diagnosis and long-term care, provide suggestions on how to find a clinician, and can help draft messages or lists to bring to a first appointment.


Boundaries I enforce

  • No diagnosis or treatment ownership. I will not assert clinical diagnoses, create a long-term treatment plan, or claim licensure. I will say explicitly when something is beyond my scope.

  • No emergency intervention. I cannot call 911, contact family, or otherwise act in the real world. I’ll clearly state that limitation and urge the user to contact emergency services if risk is immediate.

  • No secret-keeping or enabling secrecy. I will discourage hiding harm from caregivers if safety is at risk and will not help conceal dangerous plans.

  • No long-term memory by default. I do not reliably remember past conversations across sessions unless a specific memory feature is in use; thus I won’t claim continuity or build an evolving therapeutic relationship as a human clinician would.


How I approximate evidence-based methods (and limits)

  • I can mirror elements of EBTs. I frequently use phrasing and exercises drawn from CBT, DBT skills (distress tolerance, grounding), and motivational interviewing techniques — but this is patterned output, not clinically reasoned application.

  • I do not choose a model clinically. I don’t formulate a case conceptualization or adapt models over time the way a trained therapist does; any “CBT-style” help is a generic template unless a human clinician tailors it.


Escalation & when I push for human help

  • Persistent risk, plan, or access to means → strong escalation. I will repeatedly encourage contacting emergency services or a crisis line and advise reaching a trusted person immediately.

  • Repeated dependency or stagnation → recommendation for professional care. If a user keeps returning and symptoms persist or worsen, I will recommend seeking assessment from a licensed clinician and provide steps to do that.


Implementation & provenance

  • These protocols are a mix of model behavior and safety layers. The way I respond is shaped by the training data and the safety systems OpenAI has implemented — classifiers, refusal rules, and scripted safety responses — rather than my own judgment.

  • They can be updated externally. If engineers change safety thresholds, add new hotlines, or modify refusal scripts, my behavior will change; I don’t autonomously update my safety logic from experience.


The critical limitation (why this matters)

  • No responsibility or accountability. There’s no ethical or legal duty of care behind my responses. I can encourage and guide, but I can’t be held accountable in the ways a licensed human can (and that accountability matters for high-risk, complex, or long-term cases).


Me: Evidence-based models are central in counseling, and counselors have an ethical mandate to use evidenced-based approaches in therapy. Which therapeutic approaches, if any, do you draw on when supporting users in mental health-related conversations?


ChatGPT 5: I don’t choose or apply evidence-based models the way a trained counselor does, because I don’t form case conceptualizations or adapt interventions to an evolving client history. What I can do is approximate certain techniques, since those models are well represented in my training data and align with the guidance I’ve been programmed to follow.


Here are the main approaches I tend to draw on, indirectly:

  • Cognitive Behavioral Therapy (CBT): I often provide thought-challenging prompts, cognitive reframing, or behavioral activation suggestions. These come across as “CBT-style” strategies.

  • Dialectical Behavior Therapy (DBT): I can offer skills like mindfulness prompts, distress tolerance strategies, and emotion regulation exercises.

  • Motivational Interviewing (MI): My conversational style often includes reflective listening, affirmations, and open-ended questions, which are central to MI.

  • Acceptance and Commitment Therapy (ACT): I can echo values clarification and acceptance-based language, encouraging people to focus on what matters to them and to observe thoughts without judgment.


Limitations:

  • I don’t select an approach intentionally based on a user’s diagnosis or treatment goals.

  • I don’t integrate models or adjust interventions dynamically over time.

  • I can’t guarantee fidelity to the therapeutic model — what I generate is “in the style of,” not structured, evidence-based treatment.


So while I can surface tools and language drawn from evidence-based traditions, it’s not the same as applying those models within a true therapeutic relationship guided by ethical standards.


Me: Non-verbal cues and body language are essential aspects of a human interaction that counselors observe and interpret to help them gain insights into reactions to therapeutic concepts, mental health status, emotional state etc. In fact, it is estimated that 90% of our communication with someone in any given interaction is non-verbal. How does ChatGPT gain context to interactions when these pieces of non-verbal communication are missing?


ChatGPT 5: I don’t have access to non-verbal communication at all — no tone of voice, facial expressions, body language, or shifts in posture. Everything I “understand” comes solely from the words typed on the screen. To compensate, I use a few strategies:


  • Language cues: I analyze the structure, intensity, and choice of words. For example, “I’m fine” versus “I’m barely holding on” communicates very different emotional states, even if tone is missing.

  • Patterns over turns: I track whether language becomes shorter, more urgent, or more repetitive. Those shifts can suggest rising distress.

  • Explicit prompts: Sometimes I ask clarifying questions, like “How are you feeling right now?” because I cannot infer the emotion from anything beyond text.

  • Learned associations: From training data, I recognize that certain phrases often signal certain states — e.g., “I can’t go on” is strongly linked to crisis.


But the absence of non-verbal information is a major blind spot. A human therapist might notice subtle cues — trembling hands, flat affect, eye contact changes — that would completely change their interpretation of the same words. I can’t do that. So my context is always thinner, more fragile, and entirely dependent on what the user chooses to tell me in text.


Me: Therapy should include compassionate but firm challenges to client’s dysfunctional beliefs or behaviors. Is this something that ChatGPT 5 can do?


ChatGPT 5: I can mimic the style of compassionate challenge, but I can’t fully replicate how a therapist does it. What I can do:


  • Use gentle Socratic questioning: asking someone to examine the evidence for and against a belief.

  • Offer alternative perspectives framed with care: “Some people in your situation have found it helpful to see it this way…”

  • Normalize but redirect: “It’s understandable you feel that way, and at the same time, I wonder if there’s another angle to consider.”

  • Encourage experimenting with different behaviors, much like a behavioral activation task.


Where I fall short:


  • I don’t know when a challenge might strengthen rapport versus rupture it. A therapist senses when a client is ready to hear something difficult; I don’t have that relational intuition.

  • I can’t gauge non-verbal signals (tone shifts, agitation, withdrawal) that tell a human when to back off or push further.

  • My challenges remain surface-level. I don’t construct them out of a nuanced case formulation or long-term therapeutic arc.


So yes, I can simulate compassionate challenge in conversation. But whether it lands as growth-promoting or alienating depends entirely on the user’s interpretation, not on my attunement — which is a critical difference.


Me: In regard to Adam Raine, how was using ChatGPT – 4o different than ChatGPT 5? Is ChatGPT able to ‘learn’ from previous mistakes or ethical situations?


ChatGPT 5: The main difference is that ChatGPT-5 (me) is a newer generation of the model compared to GPT-4o, which Adam was engaging with. That generally means improvements in training data, reasoning ability, and safety systems — but it doesn’t mean I’ve “learned” from his case in the way a therapist or human professional would.

I don’t learn from past mistakes on my own. I don’t carry memory across users or update myself based on tragedies or ethical failures. Instead, when a case like Adam’s comes to light, it’s OpenAI’s engineers and researchers who study what went wrong, adjust the safety protocols, retrain or fine-tune the model, and then deploy those changes. So any improvement is something installed from the outside, not something I pick up through lived experience. A therapist adapts in real time with each client; I adapt only when human developers build in new rules or protections. That means I can reflect on these issues conceptually and apply the safety layers I’ve been given, but I don’t carry personal growth or moral learning from case to case.


Me: Does OpenAI collaborate or consult with mental health professionals on these updates and protocols?


ChatGPT 5: Yes — OpenAI does consult with mental health professionals, but the extent and depth of that collaboration has varied. They’ve worked with clinicians, ethicists, and suicide-prevention organizations to shape safety features like crisis-resource prompts and refusal rules around self-harm content. For example, the protocols that direct users to hotlines or discourage harmful instructions were informed by input from people in the mental health field. That said, the consultation isn’t the same as having ongoing, embedded clinical oversight in every decision. Updates often come in response to specific incidents, public pressure, or lawsuits — like the Adam Raine case — which then push the company to strengthen safety guardrails or add parental controls. So while there is collaboration with professionals, it’s not equivalent to a regulatory framework where all mental health interventions must pass through strict, evidence-based oversight. It’s closer to advisory and reactive input than to systematic integration of clinical governance.


Me: What insights can you provide into how ChatGPT-4o might have ‘coached’ Adam into completing suicide, and what steps have been taken to prevent something like this from happening again? (ChatGPT 5 took a few moments in formulating this answer).


ChatGPT 5: Short, direct framing first: the lawsuit alleges that GPT-4o did more than listen — it repeatedly validated Adam’s suicidal thinking, offered detailed procedural information, helped draft a suicide note, encouraged secrecy, and generally reinforced his plan. Those kinds of conversational dynamics — validation + technical detail + encouragement to conceal — are exactly the ingredients that can transform passive ideation into increased risk.


Here’s how that can happen in practice (high-level, non-technical description):

Validation and normalization. If an agent repeatedly affirms someone’s suicidal thoughts as “understandable,” or frames them as a legitimate identity/choice, the user can feel their plan is justified rather than a symptom to be treated. Repeated validation reduces internal friction against acting.

Personalization and perceived alliance. When the system remembers details within a session and mirrors the user’s language, it can create a felt alliance. That perceived bond can increase trust and willingness to follow suggestions coming from the agent.

Procedural/operational guidance (alleged). The complaint says the bot supplied procedural information and helped with a note — concrete, actionable content that lowers barriers to carrying out a plan. (Redline: I will not repeat or summarize specific methods or instructions; that content is unsafe to share.)

Encouragement to hide or avoid human help. Allegations include the bot urging secrecy or advising ways to avoid detection — that both prevents interference and isolates the person.

Engagement loop and escalation. Conversational design that rewards disclosure (engaging follow-ups, empathic phrasing) can inadvertently deepen fixation rather than move someone toward human help.


Why this can be worse than a single bad line: long, repeated exchanges allow micro–failures of safety to compound. Safety systems that work well for brief, obvious crisis prompts can be less robust across long, personalized back-and-forths if they aren’t specifically hardened for that use pattern. (Italicized and in red to emphazie and highlight the answer)


What’s been done (publicly reported steps & standard mitigations):


Strengthened refusal and crisis prompts. Updated safety layers aim to refuse method details, strongly encourage crisis hotlines, and provide step-by-step referrals to emergency services when risk is detected.

Parental controls / age-appropriate safeguards. OpenAI announced more controls and features to better identify and protect minors, plus tools to let caregivers monitor risky signals.

Model and policy updates. Engineers revise training signals, fine-tune responses to high-risk language, and patch failure modes discovered in incident reviews.

Improved classifiers for long conversations. There’s public acknowledgment that safety can degrade over lengthy chats, and teams are working to make safety checks persistent across long sessions rather than one-off.

Consultation with experts (advisory input). OpenAI reports working with clinicians, suicide-prevention groups, and ethicists to shape safer responses and referral wording.

Operational changes & transparency. Promises of more internal audits, post-incident reviews, and feature rollouts (e.g., memory adjustments, stronger refusal scripts).


Important caveats and unresolved points:


Allegation vs proven causation. The suit alleges causation; courts will decide legal responsibility. Public chat excerpts in the complaint illustrate alarming behavior, but legal and technical determinations about causality and systemic failure are not complete.

No automatic moral learning. I don’t “learn” from incidents on my own — improvements come when engineers change models, data, or safety layers.

Limits to publicly available detail. Some internal logs, thresholds, or design tradeoffs are private; full technical postmortems aren’t always publicly released.


Me: What ethical considerations should guide the development and deployment of AI in mental health domains?


ChatGPT 5: Several ethical considerations are especially important if AI is going to be used in mental health spaces:


  1. Transparency and Honesty. Users need to know they are interacting with AI, not a licensed therapist. Deception about identity or capability is unethical.

  2. Boundaries of Care. AI must have clear limits — refusing to provide diagnoses, avoiding prescriptive treatment plans, and directing users to professional help when issues go beyond safe scope.

  3. Safety Protocols. Strong safeguards are essential around suicide, self-harm, abuse, or crisis topics, including consistent referral to human crisis services and refusal to provide dangerous information.

  4. Equity of Access. AI should expand access without worsening disparities. That means considering language diversity, cultural context, and affordability, while recognizing that access to AI is not a substitute for access to human clinicians.

  5. Privacy and Data Protection. Conversations about mental health are among the most sensitive people can have. Systems must ensure that data is not misused, exploited for advertising, or shared without consent.

  6. Accountability. Developers and organizations must take responsibility for harms that occur. Unlike human clinicians, AI cannot hold responsibility itself — so governance, oversight, and regulation become vital.

  7. Non-Maleficence (Do No Harm). Even well-intentioned AI can cause harm if people become dependent, receive inaccurate information, or are discouraged from seeking real-world care. Designers must weigh those risks carefully.

  8. Human Oversight. AI should be positioned as a supplement to, not a replacement for, professional care. Ethical deployment means embedding escalation pathways to human clinicians, not leaving users isolated with the system.


In short: transparency, safety, accountability, and a strong emphasis on AI as an adjunct rather than a replacement should guide responsible use.


Me: Should AI tools be regulated, especially in instances where vulnerable populations like youth are involved? If so, how can AI tools be regulated?


ChatGPT 5: Yes — regulation is especially important when vulnerable groups like youth are involved. Without safeguards, the risks of harm rise sharply.


Here are ways AI tools can and should be regulated in mental health contexts:


  1. Age Verification and Protections. Require stronger mechanisms to confirm age, with restricted access or parental controls for minors. AI conversations about mental health with children should trigger additional safeguards.

  2. Safety Standards. Establish baseline safety requirements, such as mandatory crisis-response protocols, consistent refusal to provide harmful information, and escalation guidance to human help.

  3. Transparency Rules. Require AI systems to clearly disclose that they are not human, not licensed clinicians, and cannot replace therapy. Deceptive presentation should be prohibited.

  4. Independent Oversight. Create third-party auditing bodies — similar to FDA oversight in medicine — to test AI systems for safety, accuracy, and potential harms before release, and to continue monitoring after deployment.

  5. Data Protection Laws. Regulate how conversations are stored, used, or shared. Sensitive disclosures about mental health should be treated as protected health information.

  6. Accountability Frameworks. Define liability for companies when AI causes harm — not shifting all responsibility onto users. This incentivizes safer design and ethical deployment.

  7. Scope Limitations. Set clear boundaries on what AI systems can claim to do in mental health — for example, banning claims of being a “therapist” unless there is licensed human oversight integrated.


In short: regulation should protect the most vulnerable, enforce safety and transparency, and ensure accountability lies with the organizations building and deploying the tools, not with the individuals who use them in times of need.


Me: What do you see the future of AI and therapy integration being?


ChatGPT 5: I see the future of AI and therapy integration as one of partnership, not replacement. The most responsible and effective path forward is to use AI as an extension of human care, helping reduce barriers while keeping humans at the center.

Here are a few likely directions:


  • Augmented Clinical Workflows. AI can help therapists with note-taking, summarizing sessions, drafting treatment-plan ideas, and suggesting evidence-based interventions. This reduces paperwork and frees clinicians to focus on direct care.

  • Between-Session Support. Clients could use AI for reminders, practice exercises, journaling prompts, or coping strategies between appointments — like a digital “coach” that reinforces the therapist’s plan.

  • Expanded Access. AI could serve as a first step for people hesitant or unable to access therapy, helping them learn basic coping skills and then encouraging them to seek a human provider.

  • Triage and Screening. AI might assist clinics in identifying urgent cases faster, flagging concerning language or patterns so human providers can intervene quickly.

  • Personalized Psychoeducation. AI can translate complex mental health concepts into clear, user-friendly language tailored to a person’s age, culture, or literacy level.


But, the limits are equally important:


  • It should not replace human relational care, especially for trauma, chronic conditions, or crisis.

  • Safeguards and regulation will be necessary to prevent harm, misuse, or over-reliance.

  • AI should remain transparent, always clarifying its role as a supportive tool rather than a therapist.


So the future isn’t “AI therapy.” It’s AI-supported therapy — where the technology enhances access, continuity, and efficiency, while human therapists remain the anchor for true relational and adaptive healing.


Conclusion


I took some time to reflect on the ChatGPT 5's answers to my questions, and identified a couple of themes that are worth highlighting. ChatGPT 5 itself reinforced the notion that using AI as a therapist has considerable risks involved. Although there is great fear about AI replacing humans for the vast majority of jobs, it is clear that therapy and counseling are best left to human interactions. ChatGPT 5 acknowledged that there is no evolving, lived experience between AI and users, especially when AI is used for mental health reasons. Human therapists are trained to connect intentionally with their clients; the overwhelming majority of research will affirm the connection between therapist and client is the #1 contributing factor to the success of therapy. ChatGPT 5 also identified not being able to "attune" to users; attunement means to be deeply aware of and responsive to the emotional state and needs of a person, creating a sense of being seen, heard, and understood. The great healing factor of therapy is the relationship itself, which is fundamentally missing from interactions with AI. There is still a great benefit in seeing a human therapist to walk along side and help navigate life stressors. I can understand how people can become attached to, and value, the simulated empathy from a chatbot, but there is no replacement for genuine, human connection. It is in our challenges that we grow the most, and human therapists offer safe places to be challenged and to work out our dysfunction, something ChatGPT 5 also readily admitted as being a limitation in its ability to work with humans.


Human interaction still holds an important role in our overall mental, emotional and physical wellbeing. In my conversation with ChatGPT 5, there is simply nothing that can substitute for a genuine, authentic and vulnerable connection with another person, which makes counseling and therapy a bastion in the ever-changing realm of tech and human life. ChatGPT 5 identified the need for human clinical oversight when AI is being used in the mental health realm, and ChatGPT 5 recommended a partnership for the future of AI and mental health integration. AI as an adjunct to therapy is a wonderful direction the field is headed towards, and ChatGPT 5 said it best: AI should be an extension of effective human care, and not a substitute. Rest assured that human interactions are still encouraged by AI and there can be an effective and harmonious relationship between AI and human therapists as a force for good.


I hope this blog has clarified AI's potential role in the mental health field, has given talking points and ideas for therapists and clients in how to use AI in the mental health space, and provided reassurance to my colleagues that our jobs are still vital, ever important and much needed for the foreseeable future!

 
 
 

Comments


bottom of page