To achieve profound benefits of AI in drug development & manufacturing, FDA looks to industry for help

The U.S. Food and Drug Administration (FDA) recently issued two discussion papers on the use of Artificial Intelligence (AI) and Machine Learning (ML) in the development of drugs and biologics, and in the manufacturing of those medicinal products. Because these potentially long-term regulatory paradigms are evolving rapidly, drug and biologic firms and AI/ML technology developers will want to engage strategically with FDA through public comments, workshops, and private meetings in order to ensure clarity surrounding the compliance obligations associated with their drug development and manufacturing processes.

The significance of these new regulatory structures cannot be overstated; indeed, in an FDA Voices article announcing the publication of both discussion papers, CDER Director Patrizia Cavazzoni, M.D., wrote that AI and ML “are no longer futuristic concepts; they are now part of how we live and work.” FDA is seeking comments on these discussion papers through August 9, and is relying on industry engagement to develop its AI regulatory framework.


A key takeaway from FDA’s discussion papers and recent activities associated with AI/ML is that the agency recognizes the technology’s profound potential to advance drug development, manufacturing, and compliance, and is open to adapting the current regulatory paradigms to make room for risk-based approaches. FDA, however, has some concerns, such as the transparency of AI models and the security and integrity of data generated through continuous learning approaches; the agency is therefore eager to receive stakeholder input on how widely AI/ML is being used across the life sciences industry so that it can assess the potential risks and benefits.

In recent months, FDA has been raising its level of engagement with stakeholders on its evolving regulatory paradigms for AI in drug development and manufacturing, publishing two discussion papers on “Using Artificial Intelligence and Machine Learning in the Development of Drug and Biological Products” (May 2023), and on “Artificial Intelligence in Drug Manufacturing” (February 2023), the latter of which comes from FDA’s Center for Drug Evaluation and Research (CDER) as a part of its Framework for Regulatory Advanced Manufacturing Evaluation (FRAME) Initiative.


AI in drug development

FDA’s latest AI in drug development discussion paper includes sections covering the current landscape of AI and ML in drug development, considerations for use of AI and ML, and “engagement and collaboration” between FDA and the industry, with the agency emphasizing its intention to publish the discussion paper “as part of a multifaceted approach to enhance mutual learning and to establish a dialogue with FDA stakeholders on this topic.” In an “FDA Voices” post, Dr. Cavazzoni highlighted that the agency is seeking engagement from a variety of stakeholders, including “pharmaceutical companies, ethicists, academia, patients and patient groups, and global counterpart regulatory and other authorities.”

In the discussion paper, FDA promotes using AI and ML in drug development for:

  • Drug discovery, including target identification, selection, and prioritization; as well as for compound screening and design;

  • Nonclinical research, including pharmacokinetic, pharmacodynamic, and toxicologic studies conducted in animals; exploratory in vitro and in vivo mechanistic studies conducted in animal models; organ-on-chip and multi-organ chip systems; and cell assay platforms;

  • Clinical research, including patient recruitment; selection and stratification of trial participants; dose/dosing regimen optimization; monitoring/improving adherence to trial design; participant retention; clinical trial site selection; and clinical trial data collection, management, and analysis;

  • Postmarketing safety surveillance, including case processing, evaluation, and submission; and,

  • Advanced pharmaceutical manufacturing, including process design optimization, advanced process control, smart monitoring and maintenance, and trend monitoring.


AI in drug manufacturing

As FDA establishes regulatory paradigms for applying its risk-based regulatory framework to the use of AI technologies in drug manufacturing, the agency issued its discussion paper on AI in drug manufacturing to identify the areas for which public feedback would be valuable. The agency has been developing plans for incorporating AI into the medical device regulatory framework, and is now increasing its attention on applying AI models in drug manufacturing. In this discussion paper, FDA requests additional feedback on the types of AI-based models being used, the elements of AI technologies in a CGMP environment, practices for validating AI models, and appropriate data management, among other topics.

In March of this year, CDER and CBER jointly released the first discussion paper on incorporating AI and advanced manufacturing technologies into the pharmaceutical manufacturing regulatory framework. FDA recognized that there are “limited industry standards” and highlighted the following areas of concern relating to AI use in drug manufacturing:

  • Cloud applications may affect oversight of pharmaceutical manufacturing data and records, which may lead to challenges during inspections in ensuring that third-party AI software is updated with appropriate safeguards for data safety and security.
  • The Internet of Things (IoT) may increase the amount of data generated during pharmaceutical manufacturing, affecting existing data management practices. This means industry may need additional clarity regarding regulatory compliance for the generated data, as well as regarding “data sampling rates, data compression, or other data management approaches to ensure that an accurate record of the drug manufacturing process is maintained,” FDA says.
  • Standards for developing and validating AI models used for process control and to support release testing may require additional clarification from FDA.
  • Continuously learning AI systems that adapt to real-time data may challenge regulatory assessment and oversight, due to issues with determining when an AI model can be considered an established condition of a process, as well as challenges associated with ascertaining the criteria for regulatory notification of changes to the model as part of model maintenance over the product lifecycle.

FDA received numerous and wide-ranging comments from stakeholders in response to the agency’s first discussion paper and request for feedback. In summary, stakeholders detailed current and intended uses of AI technologies, including Chemistry, Manufacturing, and Controls (CMC) development and scale-up; advanced process control to allow dynamic control of the manufacturing process; autonomous systems for drug manufacturing; production planning; asset management; quality assurance; stability and shelf-life monitoring; document management; and supply chain optimization.

Additional takeaways from stakeholder comments include:

  • The National Institute for Innovation in Manufacturing Biopharmaceuticals (NIIMBL) wrote that it “is key to distinguish between applications where AI is being used to define the operational space, for example, ‘This is what it looks like when the process runs correctly’ and where AI is being used for control within the operational space, for example, ‘This lot was produced by a process that ran correctly.’” NIIMBL seems to be emphasizing the threshold question of whether CGMP applies to AI in the context of use. For example, if AI is being used for asset management decisions, we would not anticipate CGMP requirements to be applicable.

  • The Connected Health Initiative (CHI) encouraged harmonization with other organizations developing AI-related documents, such as the International Council for Harmonisation (ICH), to help facilitate broader adoption and to avoid added complexities for manufacturers in a global environment.

  • Stakeholders were also interested in guidance for industry. The Biotechnology Innovation Organization (BIO) said existing CGMP guidance needs to be updated to clarify whether it applies to AI models, and invited additional guidance on AI and ML intended use, acceptability, and change management.

  • The Association for Accessible Medicines (AAM) indicated there is a critical need for the agency to “determine the areas where AI is most likely to make the greatest contribution in manufacturing and focus on clarifying the regulatory requirements applicable to AI in those areas.” AAM stated that the benefits of investments in new technologies must be compelling to facilitate adoption by the generic drug industry.

In the AI in drug development discussion paper, FDA provides details on AI applications that have been deployed or have the potential to support CGMP manufacturing, and identifies four areas where AI/ML could be applied during the drug manufacturing lifecycle, from design to commercial production: (1) optimization of process design (e.g., a digital twins approach); (2) implementation of advanced process control; (3) smart monitoring and maintenance; and (4) trending activities to analyze manufacturing-related deviation trends, cluster problem areas, and prioritize areas for proactive continual improvement.


Overarching standards and practices

FDA acknowledges the need for the agency to assess whether the use of AI/ML in a given context of use introduces increased or unique risks, such as limited explainability, whether due to a system’s underlying complexity or because the system is not fully transparent for proprietary reasons, and the potential to amplify errors and preexisting biases.

Adapting the overarching principles of the Government Accountability Office’s (GAO) AI accountability framework, FDA said in the discussion paper that it aims to initiate a discussion with stakeholders and solicit feedback on three key areas in the context of AI/ML in drug development:

  1. human-led governance, accountability, and transparency;
  2. quality, reliability, and representativeness of data; and,
  3. model development, performance, monitoring, and validation.

To promote human-led governance, FDA urges the creation of a risk management plan that considers the context of use, in order to identify and mitigate risks associated with AI in drug development. FDA invites comments from industry on several questions, including:

  • In what uses of AI/ML in drug development is additional regulatory clarity needed?

  • What does adequate transparency in the use of AI/ML in drug development entail, and what barriers are preventing that level of transparency?

  • How can risk-based, meaningful human involvement be incorporated when AI/ML is utilized in drug development?

  • How can adequate traceability and auditability be enabled?

  • How can quality control be maintained over pre-specified activities in AI/ML for drug development?

Regarding data quality, reliability, and representativeness issues, the discussion paper spotlights how AI/ML is particularly sensitive to the attributes or characteristics of the data used for training, testing, and validation. Accordingly, FDA invites comments on data-related issues, including:

  • addressing bias, missing data, and other data quality considerations when using AI/ML in drug development
  • ensuring data privacy and security
  • ensuring reproducibility and replicability

Third, the AI in drug development discussion paper emphasizes the importance of pre-specifying steps (which we recently analyzed online here) and of clearly documenting the criteria for developing and assessing models (which we outlined online here). As FDA weighs AI/ML model development and performance risks, it asks industry to provide feedback on these related questions:

  • What are examples of current tools, processes, approaches, and best practices being used by stakeholders for:

    • documenting the development and performance of models?

    • selecting model types and algorithms?

    • selecting the method for validating and measuring performance of models?

    • evaluating transparency and explainability and increasing model transparency?

    • selecting open-source AI software for AI/ML model development?

  • What practices and documentation are being used to inform and record data source selection and inclusion or exclusion criteria?

  • In what contexts of use are stakeholders addressing explainability, and how have they balanced considerations of performance and explainability?

  • What approaches are being used to document the assessment of uncertainty in model predictions, and how is uncertainty being communicated? What methods and standards should be developed to help support the assessment of uncertainty?

The discussion paper also considers how AI/ML is being applied to real-world data (RWD) and data from digital health technologies (DHTs) in support of drug development, which we analyzed earlier this month online here, and earlier this week online here.


Stakeholder engagement – Request for feedback

These discussion papers come after FDA’s virtual public workshop on the “Application of Artificial Intelligence and Machine Learning for Precision Medicine,” which we summarized online here. Taken together, FDA’s spate of recent actions in this arena demonstrates the agency’s commitment to support innovation by learning from the industry to help inform how AI and ML should be regulated.

Indeed, speaking earlier this month at the National Health Council’s patient engagement symposium, FDA Commissioner Robert Califf, M.D., acknowledged that digital health tools are developing faster than FDA is able to regulate them. Dr. Califf cited how FDA has “multiple projects underway to generate greater volumes of high quality, relevant digital health data, including the facilitation of efficient, streamlined randomized controlled trials and registries by extending the uses of existing digital health data.”

CDER is seeking comments on its AI in drug development discussion paper through August 9, and has planned several other opportunities for the agency to engage with stakeholders on AI/ML-related issues, including:

  • CDER’s initial virtual workshop on the “Regulatory Framework for the Utilization of Artificial Intelligence in Pharmaceutical Manufacturing,” scheduled for September 26-27, 2023, which is open to anyone interested in AI technologies in pharmaceutical manufacturing and through which the agency aims specifically to learn from experts.

  • FDA’s Office of Pharmaceutical Quality (OPQ)’s Emerging Technology Program (ETP): a collaborative program where industry representatives can meet with Emerging Technology Team members to discuss, identify, and resolve potential technical and regulatory issues regarding the development and implementation of a novel technology prior to filing a regulatory submission; it exists to advance good governance of transformative new technologies, such as AI and ML.

  • FDA’s “2023 Scientific Computing Days” virtual workshop, scheduled for September 12-13, 2023.

  • FDA’s Advancing Real-World Evidence (RWE) program, which seeks to improve the quality and acceptability of RWE-based approaches in support of new intended labeling claims, and which generally promotes research on AI/ML-based medical devices.

  • The Innovative Science and Technology Approaches for New Drugs (ISTAND) Pilot Program, which is designed to expand Drug Development Tool (DDT) types by encouraging development of DDTs that are out of scope for existing DDT qualification programs but may still be beneficial for drug development, including AI/ML-related DDTs.

The many vehicles that FDA is offering for stakeholder engagement represent an important opportunity to help shape meaningful changes to existing regulatory frameworks. To engage effectively with FDA, we recommend collaborating closely with regulatory experts and strategic advisors who have a deep understanding of FDA’s current regulatory paradigms and intra-agency decision-making processes, as well as full awareness of the company’s objectives and broader industry headwinds.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising

Written by:

Hogan Lovells