The WHO's regulatory considerations on artificial intelligence for health

Hogan Lovells

The World Health Organization ("WHO") published key principles for regulating artificial intelligence (AI) for health on 19 October 2023. The document, titled ‘Regulatory Considerations on Artificial Intelligence for Health’, is intended to guide governments and regulators in creating or updating AI policies at national or regional level. The publication highlights the potential of AI to improve health outcomes, as well as the challenges and risks of using AI for health, such as ethical, legal, and human rights issues.


In its new publication, which is based on the work of the Working Group on Regulatory Considerations ("WG-RC") consisting of multiple stakeholders such as regulatory authorities, policy-makers, academia and industry, the WHO recognises the potential of AI for health and acknowledges the wide range of applications that AI systems can have in this area. For instance, AI can be used to accelerate medical research as well as drug discovery and development. AI systems can also have practical benefits in immediate patient care. In particular, AI-driven decision-making can facilitate faster diagnoses and predict and prevent diseases and care risks at a precise, individual level.

However, the WHO also states that this rapidly developing use of AI for health should be accompanied by appropriate regulation to ensure the safe and effective distribution of these tools. To manage the risks associated with AI systems, the publication highlights the following six topic areas that the WHO considers most important for regulation:

  • Documentation and transparency;
  • Risk management and artificial intelligence systems development lifecycle approach;
  • Intended use and analytical and clinical validation;
  • Data quality;
  • Privacy and data protection;
  • Engagement and collaboration.

The WHO sees the main risk in the use of AI as the potential for bias arising from the data on which the AI has been trained. If an AI is trained only on data from a particular group, it will produce results that are only relevant to that group. This could have serious consequences for people who are not accurately represented in the training data. The WHO therefore suggests that the diversity of populations should be carefully and intentionally considered when developing an AI for health purposes. It calls on regulators to provide clear guidance on these processes.


WHO's regulatory recommendations

In the publication, the WHO sets out 18 key recommendations on what such regulatory guidance should take into account.

In particular, it suggests a transparent development process that focuses on mitigating risks as early and as effectively as possible. For example, developers should specify and document the intended medical purpose of the AI system, as well as the selection and use of datasets, reference standards, parameters, metrics and any deviations from the original plans. Typical risks associated with health-related AI systems, such as cybersecurity threats and algorithmic bias, should also be considered during development. In addition, developers should have a basic understanding of the relevant data protection regulations to ensure that the AI system meets standard data protection requirements.

In terms of datasets, the WHO recommends regularly testing the AI system's performance on an external validation dataset that is representative of the population as a whole and independent of the training dataset. It also recommends troubleshooting at the beginning of development so that data quality issues are identified early. Before the AI system is made available, the WHO advises rigorous evaluation so that biases and errors arising from the training data can be identified and addressed.

The WHO also addresses post-deployment evaluation. First, it recommends that the operation of the AI system be accompanied by a compliance programme that “addresses risks and develops privacy and cybersecurity practices and priorities that take into account potential harm and the enforcement environment”. For high-risk AI systems, the WHO advises intensive monitoring after the AI has been placed on the market, to be achieved through “post-market management and market surveillance”. High-risk AI systems are those that may adversely affect human safety or fundamental rights. Most AI systems in the medical field can be expected to qualify as high-risk because of the health-related information they process.

The WHO also highlights the importance of collaboration in this field. It notes that if key stakeholders in AI innovation share their ideas and experiences, practice-changing advances in AI could be accelerated. In addition, making this kind of information openly available could streamline the regulatory oversight process for AI development in health.

Furthermore, the WHO believes that it should be possible to share high-quality datasets in order to create a diverse and functioning data ecosystem. To achieve this, it suggests the creation of platforms where developers can connect with each other and exchange relevant information.

The document also acknowledges that there are still many unresolved issues and gaps in the current regulatory frameworks for AI for health, such as the lack of harmonised standards, definitions and methodologies; the need for more evidence-based research and evaluation; the challenges of ensuring equity and inclusiveness; and potential conflicts between different ethical values and human rights. The WHO therefore calls for further dialogue and broad international collaboration among stakeholders on AI regulations and standards, in order to address these issues and develop more robust and adaptable regulatory mechanisms that ensure a safe and consistent framework for the distribution of AI. According to the publication, this should speed up the development of the regulatory landscape, improve the consistency of regulations and support countries with less regulatory capacity.


Next steps

The WHO's recommendations for AI for health are not binding, and it remains unclear how and to what extent they will be adopted by the relevant authorities in their regulatory projects. However, we can anticipate more regulation in this area, especially with the European Union's AI Act, which will be the first comprehensive set of rules for AI in the world and is expected to enter into force in the next few years.

Supported by Elisabeth Hertel.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
