This post is part of MoFo’s 2025 Intersection of AI and Life Sciences blog series. In this blog series, we explore how artificial intelligence is revolutionizing research, innovation, and patient care in the life sciences. Stay tuned for expert insights regarding the impact of AI on intellectual property, licensing, contracts, regulatory policy, enforcement, privacy, and venture markets in life sciences.
On January 6, 2025, FDA published a draft nonbinding guidance document setting out a framework for drug sponsors, manufacturers, and other interested parties (referred to as “sponsors”) to evaluate the credibility of AI models producing data or information intended to support regulatory decision-making regarding the safety, effectiveness, or quality of drugs.
This draft guidance is among FDA’s first written direction on what it expects from AI models. We’ve seen companies use AI more expansively throughout a drug’s lifecycle: to reduce the number of animals used in studies, for predictive modeling, in clinical trials, in regulatory drug submissions, to assist in selecting manufacturing locations and conditions, and for post-market surveillance. This guidance is FDA’s response to that expansion.
While the draft guidance covers pre-market and post-market activities, it explicitly does not address a sponsor’s use of an AI model for drug discovery or operational efficiencies (including drafting a regulatory submission), unless that use impacts patient safety, drug quality, or the reliability of study results. The draft focuses on the use of AI models for drugs but mentions that it may also be relevant to medical devices, particularly those intended to be used in combination with drugs.
FDA’s Risk-Based Credibility Framework
Using a risk-based assessment framework, the draft outlines seven steps for sponsors to use when establishing and assessing the credibility of an AI model’s output. According to the draft, a sponsor’s level of responsibility in establishing and assessing credibility should correspond to the risk associated with an AI model. Additionally, sponsors should tailor their credibility assessments to the specific role and scope of the AI model, which the draft refers to as its “context of use” (COU).
FDA strongly encourages sponsors to engage early with FDA through either an appropriate formal meeting or a relevant program, such as the Digital Health Technologies Program, to discuss the first three steps of the framework. Those initial steps direct the sponsor to define and assess the risk of the specific AI model. The remaining steps help the sponsor establish the model’s credibility within the proposed use case.
Step 1: Define the Question of Interest
Sponsors should define the question of interest, which describes the specific question, decision, or concern the AI model is addressing. The guidance provides an example question for an AI visual analysis system used to assess vial fill levels: “Do vials of Drug B meet established fill volume specifications?”
Step 2: Define the Context of Use (COU)
Sponsors should define the specific role and scope of the AI model, referred to as the COU, including what will be modeled, how model outputs will be used, and whether other information will be used in conjunction with the model output.
Step 3: Assess the Model Risk
Sponsors should assess whether the model is low-, medium-, or high-risk. Model risk is determined by two factors: (1) how much of the AI-derived evidence informs the question of interest, as opposed to other evidence, and (2) how significant an adverse outcome is if the AI makes an incorrect decision concerning the question of interest.
Steps 4–7: Establish Model Credibility
The last four steps provide recommendations for developing (Step 4), executing (Step 5), documenting (Step 6), and assessing (Step 7) a sponsor’s internal credibility assessment plan. These plans are for the sponsor’s benefit and may evolve over time, and they will support the sponsor’s assessment of its process if the agency requests additional details.
For example, the guidance specifies that a sponsor may not need to seek early engagement with the agency when using AI for post-marketing pharmacovigilance. Instead, FDA might only require the sponsor to produce the plan upon request, such as during an inspection. FDA contemplates that more flexibility may be needed during early discussions, allowing the credibility assessment plan to be “more high-level,” with the expectation that it will gain detail through the iterative process.
Life Cycle Maintenance
The draft guidance also emphasizes the importance of life cycle maintenance regarding the credibility of AI model outputs. FDA recommends that sponsors apply a risk-based approach to life cycle maintenance and keep plans documenting any maintenance activities. This should include evaluating and documenting any changes to the AI model consistent with Steps 4 through 7 of the credibility assessment.
Comments and suggestions regarding the draft guidance document must be submitted by April 7, 2025. If you would like assistance in making a comment or suggestion, please reach out to our team.