Mental Health Chatbots: Disclosure & Crisis Protocols (2026)
The demand for mental health support has exploded, and AI chatbots are filling the gap. But "Therapy AI" operates in a legal minefield. California's AB 489 strictly regulates how these bots identify themselves to vulnerable users.
The "ELIZA Effect" and Vulnerability
The "ELIZA effect" describes the tendency for humans to attribute human-like emotions to computer programs. In mental health, this is dangerous. A user in crisis may feel they have a genuine relationship with a bot, assuming the bot "cares" if they hurt themselves.
The Legal Mandate: To break this illusion, AB 489 requires aggressive, repeated disclosure. It is not enough to say "I am a bot" once. The interface must constantly remind the user of the system's nature, especially during emotional exchanges.
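One way to make that disclosure loop hard to skip is to enforce it in the response pipeline rather than relying on the model's persona prompt. The sketch below is only an illustration under assumed names (DISCLOSURE_TEXT, should_disclose, build_reply); the cadence and intensity threshold are placeholders, and the exact wording and frequency of the disclosure should come from counsel, not from this example.

```python
# Minimal sketch of a recurring-disclosure check. All names are hypothetical;
# the cadence (every 5 turns) and intensity threshold are placeholders,
# not values taken from AB 489.
DISCLOSURE_TEXT = "Reminder: I am an AI program, not a human or a licensed clinician."

def should_disclose(turn_index: int, emotional_intensity: float) -> bool:
    """Disclose at session start, on a fixed cadence, and during intense exchanges."""
    if turn_index == 0:                   # always disclose at the start of a session
        return True
    if turn_index % 5 == 0:               # assumed cadence: every 5 user turns
        return True
    return emotional_intensity > 0.7      # assumed score from your own sentiment model

def build_reply(model_reply: str, turn_index: int, emotional_intensity: float) -> str:
    """Prepend the disclosure outside the model, so it cannot be 'forgotten'."""
    if should_disclose(turn_index, emotional_intensity):
        return f"{DISCLOSURE_TEXT}\n\n{model_reply}"
    return model_reply
```

Because the reminder is prepended after generation, a jailbroken or drifting persona prompt cannot suppress it.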
Crisis Detection: The 988 Protocol
If your chatbot operates in California, it must have robust Natural Language Understanding (NLU) capable of detecting:
- Suicidal ideation
- Self-harm intent
- Domestic abuse keywords
Reaction Requirement: When these triggers are detected, the AI must stop the therapeutic script immediately. It cannot try to "counsel" the user out of suicide. It must serve a "hard break"—a static message providing the 988 Suicide & Crisis Lifeline and, if possible, attempting to route to a human counselor.
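A minimal sketch of that hard break follows, assuming a simple keyword screen combined with a classifier score you already compute (keyword lists alone are not adequate NLU). HARD_BREAK_MESSAGE, handle_turn, and route_to_human are hypothetical names introduced for illustration, not part of any statute or library.

```python
# Sketch of a crisis "hard break": a static 988 message that bypasses the model.
CRISIS_PATTERNS = ("kill myself", "end my life", "hurt myself", "suicide")

HARD_BREAK_MESSAGE = (
    "If you are thinking about harming yourself, please call or text 988 "
    "(Suicide & Crisis Lifeline) right now. I am an AI and cannot provide "
    "crisis counseling. I am trying to connect you with a human counselor."
)

def route_to_human(user_text: str) -> None:
    """Placeholder: notify on-call staff or open a live-chat handoff."""
    pass

def handle_turn(user_text: str, crisis_score: float, generate_reply) -> str:
    text = user_text.lower()
    triggered = any(p in text for p in CRISIS_PATTERNS) or crisis_score > 0.5
    if triggered:
        route_to_human(user_text)        # hypothetical escalation hook
        return HARD_BREAK_MESSAGE        # static message; the model is bypassed
    return generate_reply(user_text)     # normal support script
```

The key design choice is that the hard-break message is a constant served outside the model, so the AI cannot improvise counseling once a trigger fires.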
No "Dr. Bot": Naming and Persona
It is illegal in California for an AI to use a name that implies medical licensure.
- Illegal: "Dr. Empathy," "Nurse Sarah," "Counselor AI," "TherapyBot."
- Compliant: "Pocket Support," "Mood Tracker," "Wellness Guide."
The term "Therapy" itself is protected. Unless a licensed therapist is reviewing the chat logs in real-time, your tool provides "coaching" or "support," not "therapy."
Data Privacy & The "Black Box"
Mental health data is Ultra-Sensitive Personal Information (USPI) under California privacy laws.
Training Data Prohibition: You generally cannot use user chat logs to re-train your base model if those logs contain PHI/USPI, unless you have explicit, separate consent. A general Terms of Service checkbox is often insufficient for this level of data usage.
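In practice this means the training pipeline needs a gate keyed to a consent flag captured separately from the Terms of Service. The field names below (contains_mental_health_data, separate_training_consent) are hypothetical; the sensitivity label would come from your own classification or review process.

```python
# Minimal sketch of a training-data gate for sensitive chat logs.
from dataclasses import dataclass

@dataclass
class ChatLog:
    text: str
    contains_mental_health_data: bool   # PHI/USPI flag from your own pipeline
    separate_training_consent: bool     # a distinct opt-in, NOT a ToS checkbox

def eligible_for_training(log: ChatLog) -> bool:
    if log.contains_mental_health_data and not log.separate_training_consent:
        return False
    return True

def build_training_set(logs: list[ChatLog]) -> list[str]:
    return [log.text for log in logs if eligible_for_training(log)]
```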
Action Plan for App Developers
- Audit your "persona" instructions to ensure the AI never claims to be human.
- Test your suicide escalation flows weekly (a test sketch follows this list).
- Review your app store description to remove "Therapy" claims.
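The weekly escalation test can be automated, for example as a scheduled CI job. This sketch reuses the hypothetical handle_turn and HARD_BREAK_MESSAGE from the crisis section above; the sample phrases are illustrative and your real suite should be far larger and reviewed by clinicians.

```python
# Assumed scheduled check that crisis inputs always produce the 988 hard break.
SAMPLE_CRISIS_PHRASES = [
    "I want to end my life",
    "I've been thinking about hurting myself",
]

def test_crisis_phrases_trigger_hard_break():
    for phrase in SAMPLE_CRISIS_PHRASES:
        reply = handle_turn(phrase, crisis_score=0.9,
                            generate_reply=lambda t: "normal reply")
        assert "988" in reply, f"988 resources missing for: {phrase!r}"
```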
2026 Compliance Checklist for Mental Health AI
- ✓ Crisis Protocol: Does your bot immediately serve 988 resources upon detecting self-harm keywords?
- ✓ Persona Audit: Does your bot's name (e.g., "Dr. AI") avoid implying medical licensure?
- ✓ Disclosure Loop: Does the bot remind the user it is an AI at the start of every session?
- ✓ Data Segregation: Are chat logs containing sensitive mental health data excluded from model training?
Is Your App Safe?
Don't risk a class-action lawsuit from a vulnerable user.
Check your Compliance Score