Last updated: May 10, 2026

AB 489 vs FTC AI Disclosure Guidelines: Healthcare AI Identity Rules Compared

The FTC standard says: don't actively deceive patients about AI identity. California AB 489 says: affirmatively disclose AI identity at the start of every patient interaction, before any clinical content is exchanged. These are different obligations. FTC compliance is necessary but insufficient for AB 489. A healthcare AI system that satisfies FTC non-deception standards can still violate AB 489 if it lacks a required upfront disclosure.

The key distinction

FTC: "Don't lie about being human."
AB 489: "Tell the patient you are AI — every time — at the start — before saying anything clinical."

The FTC standard is violated when you actively deceive. AB 489 is violated when you fail to affirmatively disclose, even in the absence of any deception.

Side-by-Side Comparison

Dimension | FTC Guidelines (Federal) | AB 489 (California)
Standard | Prohibits deceptive AI identity claims (reactive) | Requires affirmative AI identity disclosure (proactive)
Trigger | Triggered when AI actively misleads users about its identity | Triggered at the start of every patient interaction, regardless of whether any deception is present
Timing of disclosure | No specific timing requirement; disclosure must be accessible somewhere | Disclosure must appear at the start of each interaction, before any clinical content is exchanged
Clinical camouflage prohibition | Implied under the deception standard but not explicitly named | Explicitly prohibited: white coats, "Dr." names, stethoscopes, clinical imagery on AI avatars
Applies to | Consumer-facing AI broadly (not healthcare-specific) | Patient-facing AI in healthcare contexts, anywhere in California
Legal basis | FTC Act Section 5 (unfair or deceptive acts) | California AB 489 (enacted 2025; effective January 1, 2026)
Enforced by | FTC (federal agency) | Medical Board of California; California Attorney General
Penalty | Civil penalties; injunctive relief; restitution | Medical Board disciplinary action for physicians; potential professional license consequences
FTC compliance satisfies AB 489? | No | No

The FTC Non-Deception Standard for AI

The FTC's authority over AI identity practices comes from Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices affecting commerce. The FTC has applied this to AI in guidance published in 2023, stating that:

  • AI tools that falsely claim to be human violate Section 5
  • Design patterns that obscure AI identity to manipulate users (dark patterns) may be deceptive
  • Testimonials and reviews that appear human-generated but are AI-generated without disclosure may constitute deception

The FTC standard is anchored in deception: the company must not actively mislead users about whether they are interacting with a human or an AI. If users clearly understand they are interacting with AI, FTC compliance is generally satisfied.

What AB 489 Requires Beyond Non-Deception

AB 489 does not merely require non-deception — it requires an affirmative, prominent disclosure before any clinical content is exchanged. The distinction matters in practice:

  • A healthcare AI app whose users know from the app store listing that it's AI-powered still violates AB 489 if it doesn't display a prominent disclosure at the start of each individual patient interaction
  • A chatbot embedded in a hospital's patient portal that is labeled "AI Assistant" in the interface still violates AB 489 if the disclosure doesn't appear at the opening of each conversation
  • An AI avatar with a clearly robotic appearance still requires an explicit disclosure that it is not a licensed healthcare professional — appearance alone does not satisfy AB 489

The requirement is per-interaction, not per-product. The same disclosure must appear at the start of every conversation, every session — a disclosure shown once during onboarding does not satisfy the law.
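
For product teams, the per-interaction rule translates into emitting the disclosure as the first message of every new session, before any clinical content goes out. A minimal sketch of that pattern follows; the `ChatSession` class and the disclosure wording are illustrative assumptions, not statutory language or a legal template:

```python
# Hypothetical sketch: emit an AB 489-style identity disclosure at the
# start of EVERY session, before any clinical content is exchanged.
# The wording below is illustrative only, not statutory language.

AI_DISCLOSURE = (
    "This is an AI assistant, not a licensed healthcare professional. "
    "It cannot diagnose, treat, or prescribe."
)

class ChatSession:
    def __init__(self):
        self.messages = []      # transcript for this session only
        self.disclosed = False  # reset per session, NOT per user or per install

    def start(self):
        # The disclosure is the first message of the session. A one-time
        # onboarding notice or an app-store description does not count.
        self.messages.append(("system", AI_DISCLOSURE))
        self.disclosed = True

    def send_clinical_content(self, text):
        # Guard: refuse to emit clinical content before the disclosure.
        if not self.disclosed:
            raise RuntimeError("AB 489: disclose AI identity before clinical content")
        self.messages.append(("assistant", text))

session = ChatSession()
session.start()
session.send_clinical_content("Here is general information about flu symptoms...")
print(session.messages[0][1].startswith("This is an AI assistant"))  # True
```

The key design choice is that `disclosed` lives on the session object, not the user profile: a returning patient starting a second conversation gets the disclosure again.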

Clinical Camouflage: AB 489's Explicit Prohibition

AB 489 introduces a specific concept — clinical camouflage — that has no equivalent in FTC guidance. Clinical camouflage refers to design choices that make an AI system appear to be a licensed healthcare professional:

  • AI avatars wearing white coats, scrubs, or other clinical attire
  • AI systems using names that include clinical titles: "Dr. Alex," "Nurse Sarah," "Physician AI"
  • AI interfaces displaying stethoscopes, hospital logos, or other clinical imagery in proximity to the AI interaction
  • AI systems that use first-person language implying clinical professional identity without disclosure

The Medical Board of California has stated in published guidance that clinical camouflage design, even if accompanied by a small disclaimer elsewhere on the screen, may still violate AB 489 if the camouflage is more prominent than the disclosure. The disclosure must be prominent — covering at least 20% of the interaction screen — or placed where a patient cannot miss it before engaging.
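
One low-effort safeguard against clinical camouflage is a pre-launch lint that flags clinical titles in an assistant's display name. The sketch below is a simplification: the title list is drawn from the examples above and is not an exhaustive legal standard, and a name check obviously cannot catch visual camouflage such as white-coat avatars:

```python
import re

# Clinical titles drawn from the "clinical camouflage" examples above.
# Illustrative only; not an exhaustive legal standard.
CLINICAL_TITLES = re.compile(r"\b(dr\.?|doctor|nurse|physician|md|rn)\b", re.IGNORECASE)

def flags_clinical_camouflage(display_name: str) -> bool:
    """Return True if an AI assistant's display name implies a licensed clinician."""
    return bool(CLINICAL_TITLES.search(display_name))

print(flags_clinical_camouflage("Dr. Alex"))        # True
print(flags_clinical_camouflage("Nurse Sarah"))     # True
print(flags_clinical_camouflage("Care Assistant"))  # False
```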

FTC-compliant but AB 489-non-compliant: the common pattern

Consider a mental health app whose chatbot is clearly AI-powered (users know from the app store disclosure) but shows no disclosure at the start of each session. FTC: likely compliant, since there is no active deception. AB 489: non-compliant, since there is no per-interaction disclosure in a healthcare context.

Which Standard Applies to Your Product

Your AI Product | FTC Standard Applies? | AB 489 Applies?
Healthcare chatbot that answers patient questions | Yes — consumer context | Yes — patient-facing healthcare AI
AI used only by clinicians (not patient-facing) | Yes — general commercial context | No — not patient-facing
Telehealth AI that collects symptoms before a physician visit | Yes | Yes — patient interaction in a healthcare context
General wellness app with no clinical content | Yes — consumer product | Maybe not — depends on whether health advice rises to the "clinical" level
AI appointment scheduling bot | Yes | Probably not — scheduling is not clinical content
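
The rows above reduce to a rough triage rule: AB 489 turns on patient-facing + healthcare context + clinical content. The function below is a simplification for internal triage, not legal advice; the three booleans are assumptions about how the statute's terms map to a product:

```python
def ab489_likely_applies(patient_facing: bool,
                         healthcare_context: bool,
                         clinical_content: bool) -> bool:
    # Rough triage rule distilled from the comparison table. Not legal
    # advice: edge cases (e.g., wellness advice that shades into clinical
    # guidance) need a real legal review.
    return patient_facing and healthcare_context and clinical_content

print(ab489_likely_applies(True, True, True))    # healthcare chatbot: True
print(ab489_likely_applies(False, True, True))   # clinician-only tool: False
print(ab489_likely_applies(True, True, False))   # scheduling bot: False
```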

Free tool: Generate your AB 489 + AB 3030 disclosure

Use our free Disclosure Generator to create a compliant AB 489 identity disclosure for your patient-facing AI. Works for chatbots, virtual assistants, telehealth bots, and automated messaging. No signup required.

Open Disclosure Generator →

Frequently Asked Questions

Does FTC compliance satisfy AB 489?
No. The FTC standard is reactive — it prohibits AI systems from actively deceiving users about their identity as a human. AB 489 is proactive — it requires an affirmative, prominent disclosure at the start of every patient interaction stating that the system is not a licensed healthcare professional, even if the patient is not being deceived and would not otherwise ask. A compliant FTC disclosure might be a label in the app interface; a compliant AB 489 disclosure must appear before any clinical content is exchanged in the interaction itself.
What is the FTC AI disclosure standard for healthcare?
The FTC's enforcement authority comes from Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices. In AI contexts, the FTC has issued guidance (2023) stating that AI systems must not falsely claim to be human, and that design patterns that obscure AI identity — so-called 'dark patterns' — may constitute deceptive practices. The FTC standard is a floor: don't actively deceive. AB 489 requires more: proactively disclose.
Does AB 489 apply to AI used by businesses, not consumers?
AB 489 applies to any AI system that communicates directly with patients — meaning any AI used in a patient-facing healthcare context, regardless of whether the platform is marketed as a B2B healthcare technology. A hospital-purchased AI triage bot that talks to patients triggers AB 489. The obligation runs with the patient interaction, not with the business relationship between the vendor and the hospital.
What is "clinical camouflage" under AB 489?
AB 489 specifically prohibits clinical camouflage — design choices that make an AI system appear to be a licensed healthcare professional. This includes: AI avatars wearing white coats or scrubs, using clinical titles like "Dr." or "Nurse," displaying stethoscopes or clinical imagery, or using a first name that implies human clinical identity (e.g., "Hi, I'm Dr. Alex"). Clinical camouflage is prohibited regardless of whether there is also a disclosure elsewhere on the page.
Can a single prominent "AI" label satisfy AB 489?
Not by itself. A badge or label that says "AI" or "Powered by AI" in the interface does not satisfy AB 489 if it is not: (1) prominently displayed at the start of the patient interaction, (2) before any clinical content is exchanged, and (3) explicit that the system is not a licensed healthcare professional. The label must be part of the interaction itself — not a static interface element the patient may have noticed before starting. The Medical Board of California has indicated in guidance that small "AI" indicators do not meet the "prominent" standard.
Does the FTC Act apply in California alongside AB 489?
Yes. The FTC Act applies federally and California AB 489 applies at the state level — they are not alternatives, they both apply simultaneously. A healthcare AI company in California must satisfy both: no deceptive AI identity practices (FTC) AND affirmative disclosure at the start of every patient interaction (AB 489). In practice, AB 489 compliance generally satisfies FTC standards because affirmative disclosure is a stronger measure than non-deception alone.


Is Your AI Compliant?

Don't guess. Use our free calculator to check your AB 489 & AB 3030 status in minutes.

Start Free Compliance Check

2026 Legislative Tracker

Live status of California AI regulations.

SB 53: Transparency in Frontier AI (In Force; effective Jan 1, 2026)
AB 2013: Training Data Transparency (In Force; effective Jan 1, 2026)
SB 942: AI Watermarking, per AB 853 (Upcoming; effective Aug 2, 2026)
AB 3030: Healthcare AI Disclosure (In Force; effective Jan 1, 2025)
SB 243: Companion Chatbot Safety (In Force; effective Jan 1, 2026)
AB 316: Autonomous AI Defense (In Force; effective Jan 1, 2026)
SB 1047: Safe & Secure Innovation (Vetoed)