Radiology AI Compliance in California (2026)
Radiology is the vanguard of medical AI, but with great power comes strict regulation. Under California's AB 489, diagnostic algorithms are facing new scrutiny regarding transparency, oversight, and liability.
The Regulatory Landscape: Not Just a Tool, A Partner
For years, Computer-Aided Detection (CADe) systems were passive assistants. Today's Deep Learning models are active diagnosticians. California law has caught up to this reality. The core principle of 2026 regulation is that AI cannot practice medicine unsupervised.
AB 489 and Clinical Decision Support (CDS)
Under Assembly Bill 489, radiology AI tools often fall under the definition of Clinical Decision Support. However, to maintain this classification (and avoid being regulated as a highly restricted autonomous medical device), the software must enable the "Human-in-the-Loop."
The "Glass Box" Requirement: It is no longer sufficient for an AI to output a probability score (e.g., "Malignancy: 98%"). The system must provide visual evidence—heatmaps, bounding boxes, or segmentation masks—that allows the radiologist to independently verify why the AI made that determination. If the radiologist cannot validate the AI's reasoning, relying on it may constitute professional negligence.
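To make the "glass box" idea concrete, here is a minimal sketch of what a verifiable AI finding might look like as a data structure. This is a hypothetical schema, not any vendor's actual API: the point is that a probability score is only compliant when paired with localized visual evidence the radiologist can inspect.

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    # Pixel coordinates of the image region the model flagged (hypothetical schema)
    x: int
    y: int
    width: int
    height: int

@dataclass
class AIFinding:
    """One 'glass box' finding: a score plus the visual evidence behind it."""
    label: str                       # e.g. "pulmonary nodule"
    malignancy_score: float          # model probability, 0.0 to 1.0
    evidence_boxes: list = field(default_factory=list)
    heatmap_uri: str = ""            # link to a saliency overlay, if available

    def is_verifiable(self) -> bool:
        # A bare score is not enough; some localized evidence must accompany it.
        return bool(self.evidence_boxes) or bool(self.heatmap_uri)

# A score with a bounding box the radiologist can open and inspect:
finding = AIFinding("pulmonary nodule", 0.98, [BoundingBox(120, 340, 42, 42)])
print(finding.is_verifiable())  # True

# A bare probability with no evidence would fail the check:
print(AIFinding("mass", 0.91).is_verifiable())  # False
```

A viewer built around a structure like this can refuse to display a score whose evidence fields are empty, which operationalizes the "independently verify" requirement rather than leaving it to habit.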
The Human-in-the-Loop (HITL) Workflow
Compliance is not just about software; it's about workflow. To be compliant in California, your radiology practice must demonstrate a valid HITL process:
- Independent Review: The radiologist should ideally form an initial impression or view the raw imaging before toggling the AI overlay. This prevents "automation bias" where the doctor is conditioned to agree with the computer.
- Documentation of Disagreement: When the radiologist disagrees with the AI (e.g., AI marks a nodule that the doctor deems an artifact), this must be technically logged. This "negative feedback loop" is crucial for demonstrating that the human is in charge.
- Final Sign-Off: The final report must be explicitly signed by the human provider. An AI-generated report sent directly to a patient portal or referring physician without a human signature is a violation of the Medical Practice Act.
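The three workflow requirements above reduce to an append-only audit trail: every AI suggestion, the human's decision on it, and a rationale when they disagree. The sketch below illustrates one way such a record might be shaped; the field names and schema are assumptions for illustration, not a prescribed format.

```python
import json
from datetime import datetime, timezone

def log_ai_review(audit_log, finding_id, ai_suggestion,
                  radiologist_id, decision, rationale=""):
    """Append one HITL audit record: what the AI showed, what the
    human decided, and when. (Hypothetical schema for illustration.)"""
    if decision not in {"accepted", "rejected", "modified"}:
        raise ValueError("decision must be accepted, rejected, or modified")
    if decision != "accepted" and not rationale:
        # Disagreement must be documented, not just clicked away.
        raise ValueError("a rationale is required when overriding the AI")
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "finding_id": finding_id,
        "ai_suggestion": ai_suggestion,
        "radiologist": radiologist_id,
        "decision": decision,
        "rationale": rationale,
    }
    audit_log.append(entry)
    return entry

log = []
log_ai_review(log, "F-001", "nodule, RUL, 6 mm", "rad-417",
              "rejected", "imaging artifact at lung apex")
print(json.dumps(log[-1], indent=2))
```

Requiring a rationale on every override is what produces the "negative feedback loop" described above: the log shows the human actively exercising judgment, not rubber-stamping the model.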
Liability Warning
If an AI misses a lesion and the radiologist also misses it, standard malpractice analysis applies. However, if the AI detected the lesion but the radiologist dismissed it due to "alert fatigue," the liability profile changes. Plaintiffs' lawyers are now subpoenaing AI audit logs to find exactly these discrepancies.
2026 Compliance Checklist for Radiology
- ✓ Verify "Glass Box" Features: Ensure your AI viewer allows you to see the heatmap or bounding box, not just a probability score.
- ✓ Audit Logs: Confirm that the software logs every time a radiologist accepts or rejects an AI suggestion.
- ✓ Update Consent Forms: If AI is used for triage or primary screening, update patient consent forms to disclose this.
- ✓ Check Vendor Compliance: Ask your AI vendor for their AB 2013 training data disclosure.
Actionable Steps for Radiology Departments
- Audit Your Vendor: Does your AI software log every interaction? Can you see "AI showed X, Doctor clicked Y"?
- Update Liability Insurance: Confirm your policy covers "AI-augmented" services.
- Patient Disclosure: While back-end radiology AI is often exempt from direct patient disclosure (unlike chatbots), if the AI output is the sole basis for a referral, informed consent best practices suggest disclosing the use of advanced algorithmic analysis.