Innovations | Commentary

Medical malpractice liability in large language model artificial intelligence: legal review and policy recommendations

David O. Shumway, DO; and Hayes J. Hartman, JD
Notes and Affiliations

Received: October 11, 2023

Accepted: January 3, 2024

Published: January 31, 2024

  • David O. Shumway, DO, Keesler Medical Center, Keesler Air Force Base, Biloxi, MS, USA
  • Hayes J. Hartman, JD, Attorney, Mountain Home, ID, USA

J Osteopath Med 2024; 124(7): 287-290
Abstract

The emergence of generative large language model (LLM) artificial intelligence (AI) represents one of the most profound developments in healthcare in decades, with the potential to create revolutionary and seismic changes in the practice of medicine as we know it. However, significant concerns have arisen over questions of liability for bad outcomes associated with LLM AI-influenced medical decision-making. Although the authors could not identify any US case that has yet been adjudicated on medical malpractice in the context of LLM AI, sufficient precedent exists to interpret how courts might apply analogous situations to these cases when they inevitably come to trial. This commentary discusses areas of potential legal vulnerability for clinicians utilizing LLM AI through a review of past case law pertaining to third-party medical guidance, and surveys the patchwork of current regulations relating to medical malpractice liability in AI. Finally, we propose proactive policy recommendations: creating an enforcement duty at the US Food and Drug Administration (FDA) to require algorithmic transparency, recommending reliance on peer-reviewed data and rigorous validation testing when LLMs are utilized in clinical settings, and encouraging tort reform to share liability between physicians and LLM developers.
