
Potential Liability for Physicians Using Artificial Intelligence
Recommended key strategies for physicians and medical practices to effectively manage the potential liability risk when using AI.
Based on the drivers of medical liability claims and MICA’s experience defending physicians against the allegations expected in medical malpractice claims involving artificial intelligence (AI) technologies, the Risk Team recommends key strategies to effectively manage the potential liability risk for physicians using AI.
Actual negligence does not drive most medical malpractice claims. The patient’s perception that the physician did something wrong, and their resulting distrust, are what send patients to a lawyer’s office or the submit-a-complaint section of a licensing board’s website. Key strategies include obtaining patients’ informed consent to the use of AI, complying with AI-based laws, and demonstrating continual involvement in the effects of AI on their care and treatment.
Informed Consent to Medical AI
Trust is fundamental to therapeutic relationships and may keep patients in the exam room and out of the courtroom. The Deloitte Center for Health Solutions’ 2023 survey results [1] provide some guidance for introducing patients to AI-based devices, equipment, and support systems, and for maintaining trust.
- The 2023 Deloitte survey results showed that patients support the use of AI as a clinical resource, for example to identify possible treatments for their conditions, but do not want AI to decide their diagnosis or treatment plan. [2]
- Deloitte’s researchers also found that 80% of patients believe it is important or very important that their physicians tell them when the physicians are using AI. [3]
An informed consent approach may prevent patients’ allegations of medical liability for failing to disclose, or to obtain the patients’ permission for, the use of AI technology.
Most state laws recognize informed consent to include disclosing and documenting the risks and benefits of and alternatives to proposed treatments, surgeries, and procedures, and the consequences of refusing them. To prevent AI-related medical malpractice claims:
- Investigate, understand, and prepare to explain in layman’s terms how the medical AI develops and corroborates the accuracy, security, and equity [4] of clinical information and recommendations.
- Bolster patient trust during the dialogue by explaining the medical AI’s role and affirming that it will not change or replace the physician-patient connection.
- Throughout the treatment relationship, remind patients about the level of the medical AI’s involvement and that the relationship is between the patient and physician, not the AI technology.
Reminders about the relationship and documentation about the reminders are a way physicians can demonstrate their continual involvement in the effects of AI on care and treatment.
State and Federal Regulation of AI Technology and Transparency
Disclosure of the risks and benefits of and alternatives to proposed treatments, surgeries, and procedures supported or enhanced by medical AI, and the consequences of refusing them, is not just about informed consent. It involves compliance with state and/or federal consumer protection laws.
- Utah enacted a law requiring physicians, who are included in the law’s definition of “regulated occupations,” to “prominently” disclose the use of AI to patients. The law does not state what information must be included in the disclosure or how often physicians must disclose the use of AI. [5]
- The Colorado legislature passed an AI law, effective in 2026, compelling users of medical AI that is a substantial factor in “healthcare services” decisions to do two things:
  - notify patients about the use of it, and
  - publicly summarize the AI technology and how the physician will manage the known and foreseeable risks of discrimination in the algorithms. [6]
- The US Department of Health and Human Services’ Office for Civil Rights published a final rule [7] that details the Affordable Care Act’s prohibition of discrimination by Medicaid, Medicare, and other program providers based on race, color, national origin, sex, age, or disability. This prohibition could extend to unintentional biases in AI algorithms used by participating providers.
Disclosing and documenting the use of AI-based technology must comply with these laws, further straining most physicians’ internal compliance programs. Other states and regulatory agencies are expected to follow with similar requirements.
Showing Physician Control of AI-based Transcription
Ambient AI, which works behind the scenes with minimal physician involvement, in the form of speech recognition technology (SRT), is the AI technology most physicians report using. SRT often introduces typographical errors, and sometimes even gibberish, into the medical record.
Examples of SRT Errors
Source: Quick Safety Newsletter Issue 12: Speech recognition technology translates to patient risk (Updated May 2022). The Joint Commission. Last accessed September 02, 2024.
A disclaimer often follows the entry; it explains that SRT may lead to typos and the substitution of wrong words, warns readers to read carefully and recognize where substitutions are out of context, and asks readers to call with any questions. Unfortunately, physicians may be authenticating and finalizing the entries, errors and all, even in the face of medical practice policies and procedures or payor contracts mandating review and correction.
These disclaimers have the same effect as the old “dictated but not read” and “transcription not approved” disclaimers: they make physicians appear uninvolved or disengaged. One way to demonstrate continual involvement to patients, jurors, and medical board investigators is to allow the SRT or EHR to automatically document the review and correction process in the entry. Another may be the EHR’s metadata, which records that process.
AI technology is attractive to physicians interested in its capabilities for patient care and its potential to lighten their administrative load. MICA’s key strategies for managing the potential liability for physicians using AI involve obtaining informed consent, complying with AI laws, and showing that the physician has not turned the patient’s care over to a machine.
[1] Dhar, A., Fera, B., & Korenda, L. (2023). Can GenAI help make health care affordable? Consumers think so. Deloitte Center for Health Solutions. Accessed September 02, 2024.
[2] Id.
[3] Id.
[4] See Steffens, D. (2023). Unveiling the Hidden Biases in Medical AI: Paving the Way for Fairer and More Accurate Imaging Diagnoses. Neuroscience News. Accessed September 02, 2024. See also Drukker, K. et al. (2023). Toward fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment. J. Med. Imaging, 10(6). https://doi.org/10.1117/1.JMI.10.6.061104. Accessed September 02, 2024.
[5] Read the new Utah Artificial Intelligence Policy Act (2024) at SB0149 (utah.gov).
[6] A summary of the Colorado Artificial Intelligence Act is available at Consumer Protections for Artificial Intelligence | Colorado General Assembly. Accessed September 02, 2024.
[7] See the US Department of Health and Human Services’ Office for Civil Rights’ Strengthening Nondiscrimination Protections and Advancing Civil Rights in Health Care through Section 1557 of the Affordable Care Act: Fact Sheet. HHS.gov. Accessed September 02, 2024.