Artificial Intelligence in Healthcare: New Avenues for Liability

Artificial intelligence (AI), and generative AI technologies in particular, holds significant promise for improving the healthcare industry by streamlining clinical operations, freeing service providers from mundane tasks, and helping diagnose life-threatening diseases. But everyone—AI developers and users, lawmakers and enforcers alike—recognizes that AI can be used for improper purposes, too, including fraud and abuse. Indeed, government enforcers are studying AI—just as we all are—and they are already looking for ways to deter the use of AI in criminal conduct. For example, as noted in our prior client alert, Deputy Attorney General Lisa O. Monaco warned in a speech on February 14, 2024, that the Department of Justice (DOJ) will “seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI.”

In monitoring the healthcare industry, DOJ is likely to use familiar and effective tools: the Anti-Kickback Statute (AKS) and the False Claims Act (FCA). We expect enforcers will use the 2020 prosecution of Practice Fusion, Inc. as a guidepost. In announcing the settlement of the criminal and civil investigations, DOJ said “Practice Fusion admits that it solicited and received kickbacks . . . in exchange for utilizing its EHR [electronic health records] software to influence physician prescribing of opioid pain medications.” The Practice Fusion case showed how vendors can use algorithms to steer clinical decision-making to increase profits. This is precisely the kind of behavior that enforcers will want to deter as the use of AI expands.

As reliance on AI increases and AI tools are broadly deployed, companies can limit the corresponding risk of civil and criminal liability by understanding how their AI systems use information to generate results and by testing AI-generated results. Investigators and enforcers will likely expect companies to vet AI products for accuracy, fairness, transparency, and explainability, and to be prepared to show how that vetting was done. MoFo is tracking government guidance, such as the recently finalized rule issued by the Department of Health and Human Services (HHS) discussed below, and anticipating the kinds of AI tools (and process deficiencies) most likely to draw government scrutiny.

Healthcare AI Use Cases

AI is already prevalent in many parts of the healthcare industry, from patient care to medical decision support and drug development. Some of the most prominent current use cases include automation of the prior authorization process, diagnosis and clinical decision support, and drug development and discovery. But what are the risks presented by these use cases?

Prior Authorization

Many companies currently rely on algorithms to make the prior authorization process more efficient and less costly. AI has the potential to approve certain insurance claims automatically, recommend lower-cost options, or refer a claim to the insurer’s clinical staff for further review. Indeed, the prior authorization process is ripe for AI intervention because it involves many time-consuming, manual steps that—in theory—should not vary greatly from claim to claim. Innovation in this space is already rife with legal controversy, however. At issue are familiar themes adapted to AI: Are legitimate claims being denied by AI? Does AI take discretion away from physicians? The American Medical Association, a physician advocacy group, recently adopted a policy calling for increased oversight of the use of AI in prior authorization. Payors like UnitedHealthcare and Humana, as well as technology provider eviCore, have all been embroiled in litigation over their use of algorithms in prior authorization. DOJ has a long history of ferreting out fraud in prior authorization and remains focused on the issue. Given recent DOJ announcements calling for increased penalties for crimes that rely on AI, it is wise to expect enforcers to look for instances where AI is being used to improperly influence the prior authorization process.

Diagnosis and Clinical Decision Support

One of the most significant use cases for AI in healthcare is aiding in diagnosis and medical decision-making. AI algorithms can analyze medical images (e.g., X-rays, MRIs, ultrasounds, CT scans, and DXAs) and patient data to help healthcare providers identify and diagnose diseases, choose treatments, and compile exam summaries accurately and quickly. AI tools have been able to detect hemorrhaging from CT scans, diagnose skin cancer from images only, identify abnormalities in chest X-rays, and even detect otherwise imperceptible indicators of heart disease from a standard CT scan.

As these tools mature, they will likely draw the interest of enforcers, who will ask how the models were trained, whether vendor compensation is related to the volume and value of referrals that the AI tools generate, and whether access to free AI tests tied to specific therapies or drugs raises anti-kickback questions. Expect many familiar theories of liability to find their way into AI enforcement, and expect fraudsters to see AI as the newest mechanism for generating illicit gains.

These promising developments do come with some caveats, of course. A major hurdle to deploying AI in the clinical setting is patient buy-in. A Pew Research Center survey conducted in December 2022 found that 60% of Americans would be uncomfortable with their healthcare provider relying on AI to determine medical care. And it is not just patients who are concerned; physicians are often hesitant to consult AI due to concerns around malpractice liability. As with prior authorization and drug development, flawed algorithms could create liability for the provider.

Practice Fusion: New Tools for Old Tricks

The 2020 prosecution of Practice Fusion, Inc. is a cautionary tale for healthcare companies considering deploying AI tools. Federal investigators and prosecutors alleged that Practice Fusion solicited and received kickbacks from a major opioid manufacturer in exchange for modifying its EHR software to increase the number of clinical decision support (CDS) alerts that physicians using the software received. In exchange, Practice Fusion allowed the manufacturer’s marketing staff to draft the language used in the alerts, including language that ignored evidence-based clinical guidelines for patients with chronic pain. The alerts encouraged physicians to prescribe more opioids than medically advisable and, thus, were specifically designed to increase opioid sales without regard to medical necessity. The first case of its kind, the prosecution resulted in a Deferred Prosecution Agreement and shows how bad actors can use algorithmic decision-making software to increase profits at the expense of patient health and medical standards. AI has the potential to steer decisions just as the EHR software in this case was alleged to have done, and in even more sophisticated and harder-to-detect ways. Enforcers and whistleblowers will be on the lookout for the right AI case to bring—a reality that highlights the importance of properly vetting AI vendors (as discussed further below).

Drug Development and Discovery

The pharmaceutical industry is also starting to use algorithms to assess potential drug combinations before running clinical studies. AI promises to shave years off traditional development timelines, which could provide patients with treatments at an unheard-of pace and dramatically alter the economics of drug development. There is a real risk, however, that an unscrupulous drug developer or AI vendor could tweak its AI products and analyses to overstate efficacy or otherwise manipulate data, and experience has shown that clinical trial fraud is a persistent concern. Deputy Assistant Attorney General for the Consumer Protection Branch Arun G. Rao highlighted clinical trial fraud as an area of focus in remarks in December 2021 and again in December 2023. DOJ has brought several enforcement actions alleging clinical trial fraud, and enforcers will be especially keen to deter such activity if it involves AI. Because the federal government funds drug development through the National Institutes of Health (NIH), misrepresentations of drug efficacy could also violate the FCA.

Vetting AI Products

The Practice Fusion case should prompt healthcare companies to exercise caution as they begin using and developing AI solutions. Properly vetting outside software vendors is especially critical. Unfortunately, thoroughly vetting outside AI vendors is challenging because vendors are often hesitant to allow customers to examine the underlying technology, algorithms, and datasets used to train their AI tools. Without access to these critical inputs, it is difficult to evaluate the veracity of a vendor’s claims. In addition, healthcare professionals often lack the technical expertise to fully evaluate AI products and spot red flags. A simple rules-based algorithm can be made to look like an AI-driven solution, misleading providers as to how advanced the tools they are purchasing really are.

To overcome these hurdles, it is important for compliance professionals and AI users to ensure that their AI tools are accurate, fair, transparent, and explainable. That requires asking questions that get at the issues likely to concern regulators and enforcers, such as:

  • What is the vendor’s AI governance policy, what data was the tool trained on, and how was the tool’s performance measured and validated? (For a simplified illustration of the kind of validation check a compliance team might request, see the sketch following this list.)
  • How does the vendor safeguard confidential patient information, and what systems does the vendor have in place for ongoing monitoring and incident reporting?
  • Is the vendor in compliance with the recently finalized HHS rule requiring vendors of Predictive Decision Support Interventions (Predictive DSIs) to satisfy certain criteria in order to secure a critical certification from the Office of the National Coordinator for Health Information Technology (ONC)? Among other things, the rule requires AI vendors to disclose to ONC (1) how their software was developed, including information about the dataset the model was trained on, (2) what measures the developer used to prevent bias, and (3) how the product was validated. The rule also requires vendors to explain what use cases the tool was specifically designed for and whether the output from the software is a prediction, classification, recommendation, evaluation, analysis, or something else. While healthcare companies may consider ONC certification when evaluating an AI product, vendors are not required to meet the certification requirements until January 1, 2025.
  • Does the tool utilize AI derived from large language models (LLMs), or is it based on more rudimentary rules-based functions? A clear understanding of the technology used to drive decision-making can guide the vetting process. LLM-based generative AI tools can be more sophisticated, but they require more attention to training and are more likely to generate unanticipated outcomes.
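
By way of illustration only, the short Python sketch below shows one simple way a compliance team might spot-check the kind of performance and bias evidence described above: comparing a diagnostic tool’s accuracy across patient subgroups. The field names and sample data are hypothetical assumptions made for this sketch; they are not drawn from any vendor’s product, the ONC rule, or other government guidance.

    # Illustrative sketch only: compare a tool's accuracy across patient
    # subgroups to surface potential bias. All data here is hypothetical.
    from collections import defaultdict

    def subgroup_accuracy(records):
        # records: iterable of dicts with "group", "predicted", and "actual" keys
        correct = defaultdict(int)
        total = defaultdict(int)
        for r in records:
            total[r["group"]] += 1
            if r["predicted"] == r["actual"]:
                correct[r["group"]] += 1
        return {g: correct[g] / total[g] for g in total}

    # Hypothetical validation records a vendor might supply.
    sample = [
        {"group": "A", "predicted": 1, "actual": 1},
        {"group": "A", "predicted": 0, "actual": 0},
        {"group": "B", "predicted": 1, "actual": 0},
        {"group": "B", "predicted": 1, "actual": 1},
    ]

    print(subgroup_accuracy(sample))  # {'A': 1.0, 'B': 0.5}

A material gap between subgroups, as in this toy example, is the kind of result that should prompt follow-up questions about the training data and the vendor’s bias-mitigation measures.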

AI brings new promise and challenges for life sciences and healthcare companies. Against the backdrop of rapidly changing technology, the best practices that healthcare companies have long relied upon to prevent fraud and abuse—including vetting, monitoring, auditing, and prompt investigation—remain valuable tools to lower enforcement risk.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Morrison & Foerster LLP | Attorney Advertising

Written by:

Morrison & Foerster LLP