California AG's 2025 Warning: Is Your AI Discriminating Against Patients?
Attorney General Bonta is watching. How to audit your AI for algorithmic bias. 🛡️
The Advisory
The California Attorney General has issued a clear warning to the healthcare industry: AI bias is not just a technical glitch; it is a civil rights violation. Using an algorithm that produces a disparate impact (outcomes that differ by race, gender, or another protected characteristic for similarly situated patients) violates state anti-discrimination law, even if no one intended to discriminate.
The Audit
You cannot wait for a complaint. You must proactively test your algorithms.
- Input Audit: Are you using proxies for race (like zip code) in your model?
- Outcome Audit: Does your model deny care to Black patients at a higher rate than White patients with similar clinical profiles?
- Performance Audit: Is your skin cancer detection model less accurate on darker skin tones?
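The outcome audit above can be sketched in a few lines. This is a minimal illustration, not a compliance tool: the record fields (`race`, `denied`), the toy data, and the function names are hypothetical, and the 0.8 threshold is the familiar "four-fifths rule" used as an informal screen, not a legal safe harbor.

```python
# Outcome audit sketch: compare approval rates across groups and apply the
# four-fifths disparate-impact screen. Field names and data are hypothetical.

def denial_rate(records, group):
    """Fraction of patients in `group` whose care was denied."""
    in_group = [r for r in records if r["race"] == group]
    return sum(1 for r in in_group if r["denied"]) / len(in_group)

def disparate_impact_ratio(records, protected_group, reference_group):
    """Ratio of favorable-outcome (approval) rates: protected / reference.
    A ratio below 0.8 is a common red flag (the four-fifths rule)."""
    protected = 1 - denial_rate(records, protected_group)  # approval rate
    reference = 1 - denial_rate(records, reference_group)
    return protected / reference

# Toy data: each record is one utilization-review decision.
decisions = [
    {"race": "Black", "denied": True},
    {"race": "Black", "denied": True},
    {"race": "Black", "denied": False},
    {"race": "Black", "denied": False},
    {"race": "White", "denied": False},
    {"race": "White", "denied": False},
    {"race": "White", "denied": False},
    {"race": "White", "denied": True},
]

ratio = disparate_impact_ratio(decisions, "Black", "White")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.75 = 0.67 -> flag
```

In a real audit you would run this on decisions for patients with *similar clinical profiles* (matched or risk-adjusted cohorts), not the raw population, so that clinical need does not masquerade as bias.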
Documentation
If you are investigated, the first thing the Attorney General's office will request is your bias-testing documentation. If you cannot produce it, you are already on the defensive. Keep detailed records of your testing methodology, the results, and the steps you took to mitigate any bias you found.
Conclusion
Fairness is now a legal requirement. Treat algorithmic bias with the same seriousness as you treat data security.
Frequently Asked Questions (FAQ)
Can I fix bias by just removing race from the data?
Usually no. AI is good at finding proxies (like zip code or language) that correlate with race. You often need to keep race data (for testing) to ensure the model is actually fair, a technique called "fairness through awareness."
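A minimal sketch of what "fairness through awareness" looks like in practice: race is never a model input, but it is retained alongside predictions so performance can be measured per group. All names and data here are hypothetical.

```python
# Fairness-through-awareness sketch: race is NOT a feature the model sees,
# but it is kept with the evaluation set so per-group metrics can be audited.
# Data and names are hypothetical.

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each group label."""
    out = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        out[g] = correct / len(idx)
    return out

y_true = [1, 0, 1, 1, 0, 1, 0, 0]      # ground-truth outcomes
y_pred = [1, 0, 0, 1, 0, 0, 1, 1]      # model output (race never seen)
race   = ["A", "A", "A", "A", "B", "B", "B", "B"]  # kept for auditing only

print(per_group_accuracy(y_true, y_pred, race))
# A large accuracy gap between groups is exactly the disparity the AG
# advisory targets -- and you cannot see it without the race column.
```

The point: deleting race from the dataset blinds you to the problem without removing it, because proxies remain in the features.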
What if the bias comes from the medical data itself?
This is common (systemic bias). However, you are responsible for ensuring your tool does not perpetuate or amplify that bias. You may need to re-weight your training data.
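One common pre-processing approach to re-weighting (often called "reweighing") assigns each training example a weight so that, after weighting, group membership and outcome label are statistically independent. This is a toy sketch under hypothetical data; real pipelines would feed these weights into the model's training loss.

```python
# Reweighing sketch: weight(g, y) = P(g) * P(y) / P(g, y), per example.
# Under-represented (group, label) pairs get weight > 1; over-represented
# pairs get weight < 1. Groups and labels here are hypothetical toy data.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per training example."""
    n = len(groups)
    p_group = Counter(groups)            # counts of each group
    p_label = Counter(labels)            # counts of each label
    p_joint = Counter(zip(groups, labels))  # counts of each (group, label)
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

groups = ["A", "A", "A", "B", "B", "B"]
labels = [1, 1, 0, 0, 0, 1]             # positive outcomes skew toward A
weights = reweigh(groups, labels)
print([round(w, 2) for w in weights])   # [0.75, 0.75, 1.5, 0.75, 0.75, 1.5]
```

Re-weighting treats the symptom in the training data; you still need the outcome and performance audits above to confirm the deployed model behaves fairly.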
Are there fines?
Yes. Violations of the Unruh Civil Rights Act can lead to statutory damages and attorney's fees. AG actions can lead to massive settlements and injunctive relief (shutting down your algorithm).