The Need for Comprehensive Governance of AI Use in Health Care Settings

Brooks Pierce
A recent survey by Offcall of more than 100 physicians across more than 200 specialties found that 67% use artificial intelligence (AI) tools daily in their practice, whether for scheduling, diagnostics, patient care, billing, or electronic health record management. Yet 81% of surveyed physicians were dissatisfied with the pace of their employer's AI adoption, and many resort to unauthorized or unendorsed AI tools. This use of "shadow AI" presents a significant regulatory compliance risk. Health care organizations should prioritize the establishment of formal policies outlining permissible and impermissible uses of AI.

Regulatory Risk

Examples of providers uploading medical information into AI platforms abound, often with positive results: AI tools in intensive care units have predicted the onset of sepsis hours before clinical symptoms appear; tools used in mammography screening have identified early signs of breast cancer at accuracies often exceeding the capabilities of human radiologists; and tools used to examine brain scans of stroke patients have been found to be "twice as accurate" as human neurologists. As AI is implemented more broadly across health care, the standard of care by which clinical judgments are assessed for liability purposes is very likely to evolve.

However, without sufficient guardrails, providers who use AI risk implicating a range of state and federal privacy laws. Every state has introduced legislation related to AI, and multiple states have enacted legislation addressing its use in health care specifically. California, for example, requires that written or verbal communications to patients that are generated by AI include specific disclosures of such use, with violations subject to medical board enforcement (Cal. Health & Safety Code § 1339.75). Utah requires similar disclosures, with violations subject to fines and penalties (Utah Code § 13-75-103).

HHS's enforcement of HIPAA in the AI context is evolving. The agency published a proposed rule last year clarifying that electronic protected health information (ePHI) created, received, maintained, or transmitted via AI platforms is covered by HIPAA, and therefore requires certain risk management activities, such as monitoring these tools for vulnerabilities. The proposed rule is on HHS's official regulatory agenda for May 2026.

Improper use of AI tools in the health care space thus can trigger a range of civil penalties and other enforcement actions – not to mention the risk of reputational harm and private litigation. As an example, Verily, an AI platform provider, was recently the subject of a whistleblower lawsuit alleging that the company failed to report HIPAA breaches affecting more than 25,000 patients, and Verily experienced an 80% drop in its valuation as a result.

Comprehensive Governance

To reduce the various risks associated with increased use of AI tools, health care organizations should establish comprehensive AI governance policies in consultation with experienced legal counsel. Critical first steps should involve, at minimum:

  • Completing a comprehensive audit of how AI is already being used within the organization and by the organization’s business associates;
  • Assessing the organization’s AI-related obligations arising out of federal law and applicable state law—with special attention to obligations that may differ across jurisdictions;
  • Developing risk management protocols that include limiting use of unauthorized AI tools;
  • Examining insurance policies to determine the extent of coverage for the use of AI tools;
  • Reviewing employment agreements and business associate agreements for any changes warranted by the incorporation of AI into providers’ or business associates’ practices;
  • Designating a high-level administrative employee with sufficient knowledge of AI to oversee the organization’s AI governance;
  • Establishing consequences for violations of the organization’s AI policies; and
  • Educating providers, business associates, and other staff about the AI policies, including thorough trainings focused on compliant and ethical AI use in health care.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Brooks Pierce

