California Enacts Law Mandating Oversight of AI in Medical Necessity Decisions
New legislation ensures healthcare professionals supervise AI-assisted determinations, safeguarding patient rights and care standards.

February 12, 2025 – California has enacted a law requiring licensed health care professionals to supervise artificial intelligence (AI) and other automated tools used in medical necessity decisions. Under the law, decisions to approve, modify, or deny medical care must be reviewed by a qualified physician or clinical professional who considers the patient’s medical history and records. The law, which took effect January 1, 2025, prohibits relying solely on AI for these critical decisions.
The legislation, known as the “Physicians Make Decisions Act” or Senate Bill 1120, applies to health care plans regulated by the Department of Managed Health Care or the Insurance Commissioner. It responds to growing concern from physician organizations about rising denial rates when insurers use AI to analyze claims and determine treatment authorization. While the law does not ban AI technology, it imposes strict conditions on its use.
The provisions target entities performing utilization review or management functions, whether through health plans or contracted organizations. The law specifies that any determination involving AI must adhere to several key safeguards:
- AI-assisted decisions must be based on an enrollee’s clinical history, individual circumstances, and medical records, rather than group datasets.
- AI cannot replace the judgment of licensed health care professionals.
- The technology must be applied fairly, avoiding discrimination prohibited under state and federal laws.
- Organizations using AI must undergo periodic audits and compliance reviews to ensure adherence to these requirements.
Additionally, the technology’s performance must be regularly evaluated to maintain accuracy and reliability. Any consumer data analyzed by AI must not be misused or applied beyond its intended purpose, consistent with privacy laws. Importantly, the law mandates that AI tools must not directly or indirectly harm enrollees.
On January 13, 2025, the California Office of the Attorney General (OAG) issued a legal advisory highlighting the law’s implications for health care providers, insurers, and AI developers. The advisory emphasized the potential risks of using AI in health care, including harm to patients, systemic bias, and data misuse. To address these concerns, the OAG urged thorough testing, validation, and auditing of AI systems to ensure their use aligns with ethical and legal standards.
Transparency was another key theme in the OAG’s guidance. Organizations utilizing AI in health care must clearly disclose to consumers how their personal information is being used—especially when training AI systems—and provide clarity on how such tools impact patient care decisions.
Supporters of the legislation argue it strikes a balance between leveraging technological advancements and protecting consumer rights. While AI has the potential to streamline administrative processes in health care, this law ensures that human oversight remains central to medical decision-making.
As AI continues to play an increasing role in health care, California’s approach may serve as a model for other states. By prioritizing patient safety and fairness, the law highlights the importance of integrating technology responsibly into medical practices.