The White House AI Executive Order: reshaping AI use in the health sector

Hogan Lovells

President Biden’s groundbreaking Executive Order on artificial intelligence carries significant implications for the health and life sciences industry. The Order tasks federal agencies, including those responsible for health industry oversight, with developing responsible AI guidelines and meaningful measures to regulate and assess its use. Although the Order focuses largely on the government’s role in the use of AI, the impact will be felt throughout the industry given the standard setting it will spark.


For the health and life sciences sector, key developments include:

The HHS AI Task Force and Standard Setting

As outlined in our previous publication, the Order directs various federal agencies to set new standards for AI safety and security, safeguard Americans’ privacy, advance equity and civil rights, support consumers and workers, and promote innovation and competition. Recommendations put forth by these agencies will carry weight and influence the development of industry standards and potentially regulations in this rapidly developing area. A number of the agencies regulate the health and life sciences sector.

Notably, among the agencies tapped for standards setting is the U.S. Department of Health and Human Services (HHS). Under the Order, HHS will establish an AI Task Force that will develop a strategic plan for the responsible use of AI and AI-enabled technologies in health care. At a minimum, the plan will address:

  • Evaluating the use of predictive and generative AI in health care delivery and financing, emphasizing the need for human oversight.
  • Monitoring the long-term safety and real-world performance of AI technologies, and the communication of updates to regulators, developers, and users.
  • Incorporating safety, privacy, and security standards into software development for the protection of personal information.

Organizations in the health and life sciences industry may monitor the task force’s activities and develop their own processes to evaluate issues related to its priorities. For example, developers or users of AI models that process personally identifiable health information can put in place processes for evaluating the privacy impact of those models and the safeguards in place for securing such data. That work can help position the organization to adapt more readily to standards emerging from the task force.

Acute Impact on Government-Funded Research

Organizations that participate in government-funded projects will be among the first to feel the Order’s impact. For example, the Order directs HHS to prioritize grantmaking and awards for the responsible development and use of AI. Some of the same issues central to the AI Task Force’s strategic plan – like appropriate implementation of safety, privacy, and security standards – likely will be part of the decision-making process. And as agencies implement the Order’s directives, they may place a strong emphasis on protecting sensitive research data, including through stringent protocols for data encryption, access controls, and secure storage to safeguard against unauthorized access or data breaches. Government-funded entities will be incentivized to thoroughly screen AI models and implement practices to protect sensitive data processed by those models. To secure and maintain federal funding, organizations also will need to be prepared to integrate agency standards and guidelines into their research and development processes.

Considerations for AI Developers

AI developers should be aware that the Order may foreshadow government action, which need not await the new standards it directs. In fact, some agency efforts preceding the Order are already having an impact. HHS, including through the Office of the National Coordinator for Health Information Technology, has proposed rules regulating the use of algorithms in clinical decision-making and predictive decision support interventions within health care.

The Order underscores the federal government’s commitment to guarding against negative consequences from the use of AI in health care, including through the use of existing tools. For example, the Order signals that regulators will be monitoring new health and safety risks introduced by AI, and it makes clear that the federal government will enforce consumer protection laws and principles across industries, including health care, and will not tolerate discrimination and bias. These signals align with other developments in the evolving data and consumer protection environment in the U.S., such as the emergence of state consumer privacy laws that regulate certain forms of automated data processing. And, for developers, they serve as a reminder of several key considerations:

  • Impact assessments should evaluate whether biases that could result in discrimination against individuals based on factors such as race, gender, or socio-economic status are being actively mitigated.
  • Stakeholders are keenly focused on principles of transparency, accountability, and fairness in AI algorithms to prevent issues such as fraud, unintended bias, and discrimination within health care applications.
  • In light of the volume and sensitivity of data involved, sophistication of the technologies used, and dynamism of cyber threats, regulators’ expectations around the protection of individual privacy may be heightened.

Next Steps

It’s important to identify the regulatory agencies with oversight of your business and to watch AI developments within those agencies carefully. Monitoring the standards set, and recommendations made, by these agencies can help organizations more swiftly align operations, technologies, and processes with emerging industry practices and regulations, secure government funding, and remain a trusted actor in the evolving AI health care landscape.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
