AB 489 vs FTC AI Disclosure Guidelines: Healthcare AI Identity Rules Compared
The FTC standard says: don't actively deceive patients about AI identity. California AB 489 says: affirmatively disclose AI identity at the start of every patient interaction, before any clinical content is exchanged. These are different obligations. FTC compliance is necessary but insufficient for AB 489. A healthcare AI system that satisfies FTC non-deception standards can still violate AB 489 if it lacks a required upfront disclosure.
The key distinction
FTC: "Don't lie about being human."
AB 489: "Tell the patient you are AI — every time — at the start — before saying anything clinical."
The FTC standard is violated when you actively deceive. AB 489 is violated when you fail to affirmatively disclose, even in the absence of any deception.
Side-by-Side Comparison
| Dimension | FTC Guidelines (Federal) | AB 489 (California) |
|---|---|---|
| Standard | Prohibit deceptive AI identity claims (reactive) | Require affirmative AI identity disclosure (proactive) |
| Trigger | Triggered when AI actively misleads users about its identity | Triggered at the start of every patient interaction, regardless of whether any deception is present |
| Timing of disclosure | No specific timing requirement — disclosure must be accessible somewhere | Disclosure must appear at the start of each interaction, before any clinical content is exchanged |
| Clinical camouflage prohibition | Implied under deception standard but not explicitly named | Explicitly prohibited: white coats, "Dr." names, stethoscopes, clinical imagery on AI avatars |
| Applies to | Consumer-facing AI broadly (not healthcare-specific) | Patient-facing AI in healthcare contexts, anywhere in California |
| Legal basis | FTC Act Section 5 (unfair or deceptive acts) | California AB 489 (enacted 2024, effective January 1, 2026) |
| Enforced by | FTC (federal agency) | Medical Board of California; California Attorney General |
| Penalty | Civil penalties; injunctive relief; restitution | Medical Board disciplinary action for physicians; potential professional license consequences |
| Cross-compliance | Complying with AB 489's affirmative disclosure generally also avoids FTC deception | Complying with the FTC non-deception standard does not satisfy AB 489 |
The FTC Non-Deception Standard for AI
The FTC's authority over AI identity practices comes from Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices affecting commerce. The FTC has applied this to AI in guidance published in 2023, stating that:
- AI tools that falsely claim to be human violate Section 5
- Design patterns that obscure AI identity to manipulate users (dark patterns) may be deceptive
- Testimonials and reviews that appear human-generated but are AI-generated without disclosure may constitute deception
The FTC standard is anchored in deception: the company must not actively mislead users about whether they are interacting with a human or an AI. If users clearly understand they are interacting with AI, FTC compliance is generally satisfied.
What AB 489 Requires Beyond Non-Deception
AB 489 does not merely require non-deception — it requires an affirmative, prominent disclosure before any clinical content is exchanged. The distinction matters in practice:
- A healthcare AI app whose users know from the app store listing that it's AI-powered still violates AB 489 if it doesn't display a prominent disclosure at the start of each individual patient interaction
- A chatbot embedded in a hospital's patient portal that is labeled "AI Assistant" in the interface still violates AB 489 if the disclosure doesn't appear at the opening of each conversation
- An AI avatar with a clearly robotic appearance still requires an explicit disclosure that it is not a licensed healthcare professional — appearance alone does not satisfy AB 489
The requirement is per-interaction, not per-product. The same disclosure must appear at the start of every conversation, every session — a disclosure shown once during onboarding does not satisfy the law.
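The per-interaction requirement maps naturally onto session logic: the disclosure is emitted when a session opens, and clinical content is blocked until it has been shown. The sketch below illustrates this pattern; the class, field names, and disclosure wording are illustrative assumptions, not language from the statute or from any official SDK.

```python
# Illustrative sketch of per-interaction disclosure gating.
# DISCLOSURE_TEXT is placeholder wording, not a legally vetted disclosure.
DISCLOSURE_TEXT = (
    "This assistant is an AI system, not a licensed healthcare "
    "professional. Its responses are not medical advice."
)

class ChatSession:
    """One patient conversation. A new instance is created per session,
    so the disclosure fires every time, not once per product install."""

    def __init__(self):
        self.messages = []
        self.disclosure_shown = False

    def start(self):
        # Emit the disclosure first, before any clinical content.
        self.messages.append(
            {"role": "system_disclosure", "text": DISCLOSURE_TEXT}
        )
        self.disclosure_shown = True

    def send_clinical_reply(self, text: str):
        # Hard-fail rather than silently skip the disclosure.
        if not self.disclosure_shown:
            raise RuntimeError(
                "Disclosure must precede clinical content in every session"
            )
        self.messages.append({"role": "assistant", "text": text})
```

Because a fresh `ChatSession` starts with `disclosure_shown = False`, a disclosure shown during onboarding (or in a previous session) cannot satisfy the check for the current one.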
Clinical Camouflage: AB 489's Explicit Prohibition
AB 489 introduces a specific concept — clinical camouflage — that has no equivalent in FTC guidance. Clinical camouflage refers to design choices that make an AI system appear to be a licensed healthcare professional:
- AI avatars wearing white coats, scrubs, or other clinical attire
- AI systems using names that include clinical titles: "Dr. Alex," "Nurse Sarah," "Physician AI"
- AI interfaces displaying stethoscopes, hospital logos, or other clinical imagery in proximity to the AI interaction
- AI systems that use first-person language implying clinical professional identity without disclosure
The Medical Board of California has stated in published guidance that clinical camouflage design, even if accompanied by a small disclaimer elsewhere on the screen, may still violate AB 489 if the camouflage is more prominent than the disclosure. The disclosure must be prominent — covering at least 20% of the interaction screen — or placed where a patient cannot miss it before engaging.
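Because the prohibited elements are concrete (clinical titles, clinical attire, clinical imagery), teams can add a simple pre-launch lint over bot configuration. The check below is a rough sketch under stated assumptions: the config keys, the title list, and the asset names are all hypothetical, and passing it is not a legal determination.

```python
import re

# Illustrative "clinical camouflage" lint. The pattern and asset names are
# assumptions for this sketch, not an official or exhaustive legal test.
CLINICAL_TITLE_PATTERN = re.compile(
    r"\b(dr\.?|doctor|nurse|physician|md|rn)\b", re.IGNORECASE
)

CLINICAL_ASSETS = {"white_coat", "scrubs", "stethoscope", "hospital_logo"}

def camouflage_warnings(bot_config: dict) -> list:
    """Return human-readable warnings for design choices that could make
    an AI appear to be a licensed healthcare professional."""
    warnings = []
    name = bot_config.get("display_name", "")
    if CLINICAL_TITLE_PATTERN.search(name):
        warnings.append(f"display_name {name!r} implies a clinical title")
    for asset in bot_config.get("avatar_assets", []):
        if asset in CLINICAL_ASSETS:
            warnings.append(f"avatar asset {asset!r} is clinical imagery")
    return warnings
```

A check like this catches names such as "Dr. Alex" or "Nurse Sarah" before release; it does not assess the relative prominence of camouflage versus disclosure, which still requires design review.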
FTC-compliant but AB 489-non-compliant: the common pattern
Consider a mental health app whose chatbot is clearly AI-powered (users learn this from the app store listing) but which shows no disclosure at the start of each session. FTC: likely compliant (no active deception). AB 489: non-compliant (no per-interaction disclosure in a healthcare context).
Which Standard Applies to Your Product
| Your AI Product | FTC Standard Applies? | AB 489 Applies? |
|---|---|---|
| Healthcare chatbot that answers patient questions | Yes — consumer context | Yes — patient-facing healthcare AI |
| AI used only by clinicians (not patient-facing) | Yes — general commercial context | No — not patient-facing |
| Telehealth AI that collects symptoms before physician visit | Yes | Yes — patient interaction in healthcare context |
| General wellness app with no clinical content | Yes — consumer product | Maybe not — depends on whether health advice rises to "clinical" level |
| AI appointment scheduling bot | Yes | Probably not — scheduling is not clinical content |
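The applicability logic in the table above reduces to a few questions. The triage function below is a first-pass sketch only; the boolean inputs are simplifications of my own, and edge cases such as "wellness advice that rises to clinical content" need a legal determination, not a flag.

```python
# Illustrative first-pass triage, not legal advice. Inputs are
# simplified assumptions; borderline cases need counsel review.
def ab489_likely_applies(patient_facing: bool,
                         clinical_content: bool,
                         serves_california: bool) -> bool:
    """AB 489 plausibly applies when an AI is patient-facing, exchanges
    clinical content, and reaches patients in California."""
    return patient_facing and clinical_content and serves_california
```

Under this sketch a clinician-only tool (`patient_facing=False`) or a pure scheduling bot (`clinical_content=False`) falls outside AB 489, matching the table's rows.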
Free tool: Generate your AB 489 + AB 3030 disclosure
Use our free Disclosure Generator to create a compliant AB 489 identity disclosure for your patient-facing AI. Works for chatbots, virtual assistants, telehealth bots, and automated messaging. No signup required.
Open Disclosure Generator →