Medical AI Compliance Glossary

Essential terms and definitions for navigating California's medical AI regulations.

Business

Risk Management

The systematic identification, evaluation, and mitigation of risks posed by an AI system.

Compliance

Audit Trail

A security-relevant chronological record providing documentary evidence of the sequence of activities that have affected a specific operation, procedure, or event.
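
To make the idea concrete, here is a minimal sketch of a tamper-evident audit trail in Python. It is an illustration, not a reference to any specific statute or product: each entry includes the hash of the previous entry, so any later alteration of the record breaks the chain. All class and field names (`AuditTrail`, `record`, `verify`) are hypothetical.

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only log; each entry hashes the previous one so tampering is detectable."""

    def __init__(self):
        self.entries = []

    def record(self, actor, action, detail):
        # Link this entry to the previous one via its hash (or zeros for the first entry).
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        entry = {
            "timestamp": time.time(),
            "actor": actor,
            "action": action,
            "detail": detail,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)
        return entry

    def verify(self):
        # Walk the chain, recomputing each hash; any edit to an earlier entry fails here.
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In practice an audit trail for a regulated system would also need durable storage and access controls; the hash chain shown here only addresses after-the-fact tamper detection.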

Clinical Camouflage

The deceptive design practice of anthropomorphizing an AI system so that it mimics a licensed healthcare professional (e.g., depicting an avatar in a white coat).

Disclosure Requirement

The legal mandate that AI systems must inform users they are interacting with artificial intelligence at the start of every interaction.

Human-in-the-Loop (HITL)

A system design requiring human review of AI output before it reaches the patient.
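
The pattern can be sketched as a simple review gate: AI-generated drafts are held in a queue and cannot be released to the patient until a named human reviewer approves them. This is a minimal illustration of the HITL concept, not a prescribed implementation; the names (`ReviewQueue`, `Draft`, `release`) are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class Draft:
    """An AI-generated message awaiting human review."""
    patient_id: str
    text: str
    status: Status = Status.PENDING
    reviewer: str = ""

class ReviewQueue:
    """Holds AI drafts; only reviewer-approved drafts can be released to the patient."""

    def __init__(self):
        self._drafts = []

    def submit(self, draft):
        self._drafts.append(draft)
        return draft

    def approve(self, draft, reviewer):
        draft.status = Status.APPROVED
        draft.reviewer = reviewer

    def release(self, draft):
        # The gate: unapproved output never reaches the patient.
        if draft.status is not Status.APPROVED:
            raise PermissionError("Draft has not been approved by a human reviewer")
        return draft.text
```

The key design point is that release and approval are separate operations, so the system cannot skip the human step by accident.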

Misrepresentation

Giving a false account of the nature of something (e.g., AI appearing human).

Persistent Disclosure

Requirement that AI notices remain visible throughout an interaction.
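
One simple way to satisfy this in software is to route every outbound message through a wrapper that attaches the AI notice, so the disclosure cannot silently drop out mid-conversation. This is a sketch of the idea only; the notice wording and the names (`AI_NOTICE`, `DisclosedChannel`) are illustrative, not statutory language.

```python
AI_NOTICE = "Notice: You are interacting with an AI system, not a human clinician."

class DisclosedChannel:
    """Wraps an outbound message channel so the AI notice accompanies every message,
    keeping the disclosure visible for the entire interaction."""

    def __init__(self):
        self.sent = []

    def send(self, message: str) -> str:
        # Every message carries the notice; there is no code path that omits it.
        tagged = f"{AI_NOTICE}\n\n{message}"
        self.sent.append(tagged)
        return tagged
```

A real chat UI would render the notice as a persistent banner rather than prepending text, but the invariant is the same: no message leaves the system without the disclosure.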

Post-Nominal Letters

Credentials like M.D. or R.N. after a name.

White Coat Rule

Prohibition on AI avatars wearing clinical attire.

Design

UI Compliance

Designing user interfaces that meet legal standards, such as minimum font sizes for required disclosures.

Ethics

Transparency

Principle that AI should be understandable and disclosed.

Healthcare Delivery

Telehealth

Remote delivery of healthcare.

Legislation

AB 2013

California Assembly Bill 2013 requires developers of generative AI systems to provide high-level summaries of the datasets used to train their models. This includes information on the categories of data, whether it contains PII, and the time period it covers.

AB 3030

California Assembly Bill 3030 is a focused regulation targeting the use of Generative AI (GenAI) in healthcare communications. It mandates that if a GenAI tool is used to communicate clinical information to a patient, a human healthcare provider must review and approve that communication before it is sent.

AB 489

California Assembly Bill 489, formally known as the 'Artificial Intelligence Transparency Act for Healthcare,' is a state law enacted to prevent the deceptive use of AI in medical settings. It mandates that any automated system interacting with a patient must clearly and conspicuously disclose that it is not a human.

HIPAA

Federal law protecting sensitive patient health information (PHI).

Medical

Clinical Validation

The process of proving that an AI model works as intended in a real-world clinical setting.

Informed Consent

Permission obtained from a patient who has been fully informed of the relevant risks, benefits, and alternatives.

Patient Rights

Right to Know

Patient's right to be informed when interacting with AI.

Privacy

PHI

Protected Health Information under HIPAA.

Regulatory Body

Enforcement Agency

Government bodies, such as the Medical Board of California, empowered to enforce AI laws.

Medical Board of California

State agency licensing physicians and enforcing medical laws.

Osteopathic Medical Board

Regulatory body for DOs in California.

SEO

E-E-A-T

Experience, Expertise, Authoritativeness, and Trustworthiness. Google's quality framework.

YMYL

Your Money or Your Life: Google's classification for content that can significantly affect a reader's health, finances, or safety, and which is therefore held to stricter quality standards.

Technology

Algorithmic Bias

Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.

Artificial Intelligence (AI)

Any computational system that performs tasks indicative of human intelligence. Legislation typically defines the term broadly so as to distinguish legitimate tools from misleading automation.

Clinical Decision Support (CDS)

Health IT tools providing clinical knowledge and patient-related information to enhance health care.

Data Provenance

Documentation of the origin and history of the data used to train an AI model.

EHR Integration

The connection between AI systems and electronic health record software.

Generative AI (GenAI)

AI models that can create new content (text, images) rather than just analyzing data.

Hallucination

When an AI produces confident but factually incorrect information.

Large Language Model (LLM)

AI trained on vast text data to generate human-like language (e.g., GPT-4).

Model Training Data

The dataset used to teach an AI model.

Patient-Facing AI

AI interacting directly with patients (chatbots, voice assistants).

SaMD

Software as a Medical Device: software intended for one or more medical purposes that performs those purposes without being part of a hardware medical device.

Symptom Triage AI

AI assessing symptoms to recommend care urgency.

Synthetic Content

Content generated by AI mimicking human creation.

Virtual Nursing Assistant

An AI assistant that handles administrative and triage tasks commonly performed by nursing staff.

Voice Agent

AI interacting via spoken language.

Ready to Check Your Compliance?

Use our free Compliance Calculator to assess your AI system against these regulations.

Start Compliance Check →