When AI Becomes the Standard

By Elliot Moormann*

Technology that was once the stuff of science fiction, used by Captain Kirk and his crew aboard the USS Enterprise to scan and medically diagnose crew members in the television series Star Trek, can now be found in hospitals around the United States as the healthcare field races to keep pace with advancements in artificial intelligence (AI) and machine learning (ML). As of January 2023, the FDA has approved over 520 AI and ML algorithms for medical use. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices. Yet “there is essentially no case law on liability involving medical AI.” W. Nicholson Price et al., Potential Liability for Physicians Using Artificial Intelligence, 322 JAMA 1765, 1765 (2019). This article discusses how medical professionals can better protect themselves from liability when using AI and ML programs in a clinical setting today, and how AI/ML programs may change the legal implications of the care patients receive in the future.

Artificial Intelligence (AI) and Machine Learning (ML) in the Medical Field

Artificial intelligence (AI) is a broad field of computer intelligence encompassing programs capable of sensing, reasoning, acting, and adapting. National Institutes of Health, NIH workshop: harnessing artificial intelligence and machine learning to advance biomedical research, July 23, 2018. Machine learning (ML), a subset of AI, uses algorithms that improve their performance as they are exposed to larger volumes of data over time. Id. Within the medical domain, numerous approved devices are designed to augment the detection of potential pathologies in image-based sources such as radiographs, electrocardiograms, and biopsies. See Abràmoff MD, Lavin PT, Birch M, Shah N, Folk JC, Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices, NPJ Digital Med. 2018;1(1):39. Medical AI also plays a pivotal role in supporting clinical decision-making, from drug and dosage recommendations to the interpretation of radiological images. See also Mills TT, Food and Drug Administration 510(k) clearance of Zebra Medical Vision HealthPNX device (letter), May 6, 2019.
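
To make that distinction concrete, the following is a minimal, illustrative sketch in Python of the defining property of ML described above: a model whose accuracy improves as it is exposed to more training data. It is purely hypothetical, assuming the open-source scikit-learn library and synthetic data rather than any medical dataset or cleared device.

# Minimal sketch: an ML classifier's accuracy tends to improve with more data.
# Illustrative only; synthetic data, not any medical dataset or cleared device.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic records: 20 numeric features and one binary outcome per "patient".
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for n in (50, 500, 4000):  # train on progressively larger subsets of the data
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} examples -> test accuracy "
          f"{accuracy_score(y_test, model.predict(X_test)):.2f}")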

Notably, of the more than 520 AI medical algorithms cleared for use in the United States, the overwhelming majority (over 390) focus on medical imaging. https://healthexec.com/topics/artificial-intelligence/fda-has-now-cleared-more-500-healthcare-ai-algorithms. The emergence of AI in medical imaging has been driven by the push for greater efficacy and efficiency in clinical care. Hosny A, Parmar C, Quackenbush J, Schwartz LH, Aerts HJWL, Artificial intelligence in radiology, Nat. Rev. Cancer 2018;18(8):500–510, https://doi.org/10.1038/s41568-018-0016-5. That push has grown more urgent as radiological imaging data continues to grow at an exponential pace, outstripping the available supply of trained readers. Boland GWL, Guimaraes AS, Mueller PR, The radiologist’s conundrum: benefits and costs of increasing CT capacity and utilization, Eur. Radiol. 2009;19:9–12. Furthermore, declining imaging reimbursements have forced healthcare providers to increase productivity, and AI has emerged as a vital tool for doing so. Id. Indeed, studies have reported that an average radiologist must interpret one image every 3–4 seconds during an 8-hour workday to keep pace with the workload. McDonald RJ et al., The effects of changes in utilization and technological advancements of cross-sectional imaging on radiologist workload, Acad. Radiol. 2015;22:1191–1198.
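
To put that figure in perspective, here is a quick back-of-the-envelope check (the arithmetic is our own, not a number reported in the cited study):

# Back-of-the-envelope check of the workload figure cited above.
workday_seconds = 8 * 60 * 60          # an 8-hour workday = 28,800 seconds
for seconds_per_image in (3, 4):
    print(f"one image every {seconds_per_image}s -> "
          f"{workday_seconds // seconds_per_image:,} images per day")
# roughly 7,200 to 9,600 images per day, with no breaks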

In this context, AI and ML programs have made remarkable strides in image-recognition tasks, owing to methodologies such as convolutional neural networks and variational autoencoders. See supra Hosny A. These techniques have propelled medical image analysis forward at a rapid pace, transforming the traditional practice of visually assessing medical images for disease detection, characterization, and monitoring. Id. AI methods can automatically recognize intricate patterns in imaging data and provide quantitative assessments of radiographic characteristics that go beyond qualitative analysis. Id. Consequently, AI/ML methods enable healthcare providers to evaluate multiple data points simultaneously while leveraging knowledge from prior data to identify and diagnose imaging anomalies quickly and precisely. Id. In short, these programs help physicians rapidly assimilate fine details from patient images, supporting a high standard of patient care.
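
For readers curious what a convolutional neural network actually is, the following is a minimal, illustrative sketch in Python (assuming the open-source PyTorch library): a toy classifier that maps a grayscale image to class scores. It is a teaching example, not any FDA-cleared diagnostic system.

# Toy convolutional neural network (CNN) image classifier.
# Illustrative only; not any cleared medical device.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# One grayscale 64x64 "radiograph" (random pixels, for illustration only).
image = torch.randn(1, 1, 64, 64)
probabilities = torch.softmax(TinyCNN()(image), dim=1)
print(probabilities)  # e.g., tensor([[0.48, 0.52]]): bare class scores, no rationale

Note that the model's output is a pair of numbers and nothing more; that is the opacity discussed next, where a system conveys a result without explaining its reasoning.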

Despite this momentous progress, however, AI and ML are not without limitations. Unlike most traditional clinical decision support software, certain medical AI systems may convey results or recommendations to the care team without explaining the underlying rationale. Burrell J, How the machine “thinks”: understanding opacity in machine learning algorithms, Big Data Soc. 2016;3:2053951715622512, doi:10.1177/2053951715622512. Additionally, AI may be trained in suboptimal environments, employ imperfect methodologies, or rely on incomplete datasets, potentially compromising the reliability of its outputs. Id. Even under optimal training conditions, AI algorithms may misinterpret radiological images or err in suggesting appropriate drugs or dosages, potentially leading to adverse patient outcomes and legal repercussions. See supra Price WN. Healthcare providers must therefore proactively implement measures that safeguard patients and mitigate liability while still leveraging the vast potential of AI and ML programs to elevate the quality of care afforded to patients.

Traditional Medical Malpractice

Today, medical practitioners must adhere rigorously to the standard of care to protect themselves from liability for the care they provide. 21 Causes of Action 1 (1990). In medical malpractice cases, the burden lies with the plaintiff to prove two key elements: that the physician departed from the accepted standard of care practiced by peers in the same field, and that the alleged negligence directly caused the resulting injury. See Taylor v. Clement, 832 So. 2d 1089 (La. Ct. App. 3d Cir. 2002). To determine whether the standard of care was violated, the jury must compare the defendant's actions with what a competent and cautious physician would have done in similar circumstances. See Wallbank v. Rothenberg, 2003 WL 30427 (Colo. Ct. App. 2003).

Expert testimony plays a vital role in most medical malpractice cases, allowing the plaintiffs to demonstrate a breach of the required standard of care or skill by the healthcare provider, leading to the injury. See Harmon v. Rust, 420 S.W.2d 563 (Ky. 1967). These expert witnesses, often fellow professionals in the medical field, help establish the appropriate standard of care for the specific situation and assess whether the defendant physician displayed the necessary competence and diligence. See Nunley v. Shanableh, 8 So. 3d 116 (La. Ct. App. 5th Cir. 2009).

Both plaintiffs and defendants rely on credible scientific evidence from peer-reviewed medical literature, widely recognized within the medical community, to support or refute claims of breaches in the standard of care, depending on which side they represent. See N.C. v. Premera Blue Cross, 2:21-CV-01257-JHC, 2023 WL 2741874 (W.D. Wash. Mar. 31, 2023).

AI/ML Liability

Given the nascent stage of AI and ML adoption in medicine, establishing a robust legal framework presents significant challenges. The current legal structures governing medical malpractice, ordinary negligence, and products liability are anticipated to be extended to encompass AI and ML liability. However, navigating the liability landscape concerning AI and ML systems proves intricate, as it involves various stakeholders, including physicians, health systems, and algorithm programmers.

Physicians and health systems remain subject to traditional malpractice and negligence theories, whereas AI and ML programmers may be held accountable under products liability principles. This article focuses exclusively on the impact of AI and ML programs on a physician's medical malpractice liability.

Keeping with the Standard While Using AI/ML

Education and Understanding AI/ML

While the law has yet to fully address AI and ML liability, courts are expected to apply existing legal principles to these technologies. Currently, a practitioner whose chosen method fails to reflect his or her own best judgment can be found liable. Therefore, to minimize liability risks, physicians must undergo comprehensive training on AI and ML programs and develop a deep understanding of how they function. It will be crucial for physicians utilizing AI/ML programs to comprehend the mechanics behind the algorithms and the outcomes they produce; failure to do so hampers their ability to support diagnoses as their own best judgment. Education and understanding thus constitute the foundational steps in safeguarding against potential liabilities arising from AI and ML applications in medical practice.

Notwithstanding the use of AI/ML programs, it is essential to recognize that they are tools, not substitutes for medical practitioners. Courts already uphold the principle that physicians bear an independent responsibility to meet the standard of care. Mehlman MJ, Medical practice guidelines as malpractice safe harbors: illusion or deceit?, J Law Med Ethics. 2012;40(2):286-300, https://doi.org/10.1111/j.1748-720X.2012.00664.x. Current case law holds practitioners responsible for relying on technology without informing patients about potential problems. See also Sander v. Geib, Elston, Frost Prof'l Ass'n, 506 N.W.2d 107 (S.D. 1993). Consequently, courts will likely hold that errors arising from AI/ML outputs do not absolve physicians from liability. Given this, physicians will need to validate an AI/ML program's findings independently before relying on them. Id.
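
As a concrete, purely hypothetical illustration of what independent validation could look like in practice, the sketch below treats an AI suggestion as pending until a physician signs off; the names and workflow are our assumptions, not any vendor's actual system.

# Hypothetical "confirm before relying" gate: an AI suggestion stays pending
# until a physician independently signs off. Not any vendor's actual API.
from dataclasses import dataclass

@dataclass
class AISuggestion:
    patient_id: str
    finding: str                      # e.g., "suspected pneumothorax"
    confidence: float                 # a model score, not a clinical judgment
    physician_confirmed: bool = False

def record_in_chart(s: AISuggestion) -> str:
    if not s.physician_confirmed:
        raise PermissionError("AI output must be independently validated "
                              "by the physician before entering the record.")
    return f"{s.patient_id}: {s.finding} (physician-confirmed)"

suggestion = AISuggestion("PT-1001", "suspected pneumothorax", confidence=0.91)
suggestion.physician_confirmed = True  # set only after the physician's own review
print(record_in_chart(suggestion))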

Furthermore, case law demonstrates that malpractice claims can proceed against health professionals even when errors stem from medical literature provided to patients or when practitioners rely on incomplete intake forms. See Bailey v. Huggins Diagnostic & Rehabilitation Ctr., 952 P.2d 768 (Colo. App. 1997). Similarly, AI and ML programs are bound by the information they are given, making accurate and complete medical records a necessity to avoid misdiagnoses. See supra Burrell J. Physicians bear the responsibility of ensuring the accuracy of the data provided to AI/ML programs, as any negligence in this regard could carry liability consequences.
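
In the same hypothetical spirit, a completeness check of the kind sketched below could surface gaps in a record before it is handed to an AI/ML program; the field names are invented for illustration.

# Hypothetical completeness check before submitting a record to an AI/ML program.
# Field names are invented for illustration.
REQUIRED_FIELDS = ("age", "sex", "medications", "allergies", "imaging_study")

def missing_fields(record: dict) -> list:
    return [f for f in REQUIRED_FIELDS if record.get(f) in (None, "", [])]

record = {"age": 54, "sex": "F", "medications": ["metformin"],
          "allergies": None, "imaging_study": "chest x-ray"}

gaps = missing_fields(record)
if gaps:
    print(f"Incomplete record; resolve before running the model: {gaps}")
else:
    print("Record complete; safe to submit to the AI/ML program.")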

AI and ML Use Today and Beyond

The current landscape of AI and ML utilization in medicine entails potential liability for physicians who rely on these technologies without a proper understanding of their functioning. Even errors made by AI/ML programs, resulting from incomplete medical histories or records, can lead to physician liability. To safeguard against such liabilities, physicians are advised to view AI and ML programs as confirmatory tools rather than sole sources of care provision.

As AI/ML technology becomes increasingly integrated into healthcare, determining the appropriate standard of care for these algorithms poses a challenge. The current standard of care, rooted primarily in the human capabilities of practitioners’ peers, may not fully account for AI/ML advancements. Courts have been hesitant to rely solely on clinical guidelines to define the standard of care, preferring individualized determinations in each case. Porter v. McHugh, 850 F. Supp. 2d 264, 268 (D.D.C. 2012) (simply citing to guidelines “fall[s] short of establishing a clearly defined national standard of care”). However, if AI and ML technologies come to outperform human capabilities, the legal landscape will have to adapt to incorporate these technologies and their respective clinical guidelines into the standard of care.

Consider a hypothetical scenario in which an AI/ML program's recommendations exceed accepted practices: the legal framework may need to shift to embrace AI and ML as valid, even superior, decision-making tools. When AI and ML programs are vetted through peer-reviewed scientific medical literature and recognized by the medical community for their exceptional performance, relying on them to deviate from the known standard of care might become a defense against liability.

Moreover, failing to utilize AI and ML programs to assess patients might one day itself lead to physician liability. While the courts have not yet addressed the question, they will likely apply existing medical malpractice principles to cases involving AI and ML. A Westlaw search for [+ “Medical Malpractice” AND + “Artificial Intelligence”] returned zero cases (accessed July 31, 2023). Courts have already ruled that physicians can be liable for malpractice if they fail to recommend and treat patients using all available equipment. See Vargas v. United States, 430 F. Supp. 3d 500 (N.D. Ill. 2019). Further, courts have held that failing to present all treatment options is malpractice even where the physician does not believe a cure is likely. See Esfandiari v. United States, 810 F. Supp. 1 (D.D.C. 1992). Given this legal framework, the same principles will likely be applied to AI and ML programs once such programs can offer a higher level of care than practitioners alone.

This transition could spark debate about when physicians should be held liable for rejecting AI recommendations. As AI and ML programs advance to consistently outperform humans, they will evolve into valuable educational resources for physicians. At that point, physicians can be expected to seek to understand the reasoning behind AI and ML recommendations rather than merely overseeing outcomes, validating the technology, and uncovering overlooked information. Once the field reaches the point where physicians ask why AI and ML programs are correct rather than simply supervising their output, the now-students will likely be liable for rejecting their teacher's recommendations.

Conclusion

The integration of AI and ML into the medical field has brought about significant advancements and promising opportunities for improving patient care. AI/ML programs have demonstrated their potential in revolutionizing medical imaging, clinical decision support, and diagnostic accuracy. However, with these technological advancements come new challenges and liabilities that must be carefully addressed.

To protect patients and limit liability, medical professionals must be educated and trained on how to responsibly use AI and ML programs. Understanding the capabilities, limitations, and potential risks of these programs is crucial to ensuring that they are employed as supportive tools in the diagnostic and decision-making processes. Moreover, physicians should continue to rely on their expertise and independent judgment, confirming AI/ML program recommendations before making critical healthcare decisions.

As the use of AI and ML becomes more prevalent in healthcare, the legal landscape must adapt to incorporate these technologies into the standard of care. Courts may need to recognize AI/ML program outputs as valid sources of medical judgment when supported by peer-reviewed scientific literature and widespread recognition within the medical community. Furthermore, the legal field should consider the possibility of AI/ML programs eventually outperforming human capabilities, leading to a shift in the definition of the standard of care.

Despite the potential challenges and uncertainties, the continued development and adoption of AI and ML in the medical field hold great promise for advancing patient care and outcomes. By embracing these technologies responsibly and considering their legal implications, medical professionals can effectively leverage AI and ML to provide safer, more efficient, and more precise healthcare services.

In conclusion, the journey of AI becoming the standard in medicine is ongoing, and it requires collaboration between medical practitioners, technologists, and legal experts to ensure that these transformative technologies serve the best interests of patients while safeguarding the integrity and ethics of medical practice. By staying informed, adhering to established standards, and maintaining a forward-looking perspective, the medical community can embrace the opportunities that AI and ML offer to create a brighter and healthier future for patients around the world.

*Larson King LLP
