Newly Released FDA Action Plan for AI/ML-Based Software as a Medical Device (SaMD) Includes Predetermined Change Control Plan

Wilson Sonsini Goodrich & Rosati

On January 12, 2021, the U.S. Food and Drug Administration (FDA) published the Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan (action plan) in response to stakeholder feedback on its April 2019 Proposed Regulatory Framework for Modifications to AI/ML-Based SaMD - Discussion Paper and Request for Feedback (discussion paper). The action plan summarizes the discussion paper feedback and proposes responsive FDA actions regarding:

  1. Tailored Regulatory Framework, including a Predetermined Change Control Plan
  2. Good Machine Learning Practice (GMLP)
  3. Patient-Centered Approach Incorporating Transparency to Users
  4. Regulatory Science Methods Related to Algorithm Bias and Robustness
  5. Real-World Performance (RWP)

Of these, the action items related to the Predetermined Change Control Plan and RWP are likely of most interest to readers and stakeholders in the AI/ML-based SaMD space.

1) Tailored Regulatory Framework, including a Predetermined Change Control Plan
In the discussion paper, the FDA introduced a total product lifecycle (TPLC) regulatory approach to AI/ML-based SaMD, designed with consideration for the iterative, adaptive, and autonomous nature of AI/ML technologies. Under the TPLC approach, the FDA anticipates continuous product improvement through AI/ML-informed modifications to SaMD leveraging these technologies and encourages manufacturers to provide a "Predetermined Change Control Plan" in premarket submissions. The plan would include the types of anticipated modifications (called "SaMD Pre-Specifications") and describe how the manufacturer intends to implement those modifications in a controlled manner that manages risks to patients (the "Algorithm Change Protocol").

The idea of a Predetermined Change Control Plan for AI/ML-based SaMDs is noteworthy, as the FDA typically requires manufacturers to submit a new 510(k) Premarket Notification for review and clearance of any significant modification to a medical device, including software. The Predetermined Change Control Plan would allow manufacturers to pre-consider the safety and efficacy of multiple, foreseeable AI/ML-informed modifications to an SaMD, thereby significantly reducing submission and review burdens for both manufacturers and the agency. In addition, this regulatory approach would provide patients with quicker access to validated and improved technologies.

Under this approach, the FDA expects manufacturers to commit to transparency and RWP monitoring for AI/ML-based SaMDs. Manufacturers would also provide the FDA with periodic updates on what changes were actually implemented as part of the approved SaMD Pre-Specifications and the Algorithm Change Protocol.

Feedback on the discussion paper revealed strong stakeholder interest in the TPLC framework and the Predetermined Change Control Plan. Therefore, an FDA action item will be to develop an update to the proposed regulatory framework presented in the AI/ML-based SaMD discussion paper and issue a draft guidance on the Predetermined Change Control Plan.

2) Good Machine Learning Practice (GMLP)
The FDA expects manufacturers of AI/ML-based SaMD to plan for and demonstrate analytical and clinical validation in accordance with Good Machine Learning Practice (GMLP). As defined in the discussion paper, GMLP are "those AI/ML best practices (e.g., data management, feature extraction, training, and evaluation) that are akin to good software engineering practices or quality system practices." Examples of GMLP considerations for SaMD include:

  • Relevance of available data to the clinical problem and current clinical practice;
  • Data acquired in a consistent, clinically relevant, and generalizable manner that aligns with the SaMD's intended use and modification plans;
  • Appropriate separation between training, tuning, and test datasets; and
  • Appropriate level of transparency (clarity) of the output and the algorithm aimed at users.
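The "appropriate separation between training, tuning, and test datasets" consideration above can be sketched in code. The following is a minimal, hypothetical illustration (not an FDA-prescribed method): it partitions a dataset into three disjoint subsets so that model tuning and final evaluation never reuse training records. A real GMLP workflow would also guard against subtler leakage, such as records from the same patient landing in different splits.

```python
import random

def split_dataset(records, train_frac=0.7, tune_frac=0.15, seed=42):
    """Partition records into disjoint training, tuning, and test sets.

    Illustrative only: real GMLP-style separation would also prevent
    leakage across splits (e.g., keeping all records from one patient
    within a single split).
    """
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle for reproducibility
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_tune = int(n * tune_frac)
    train = shuffled[:n_train]
    tune = shuffled[n_train:n_train + n_tune]
    test = shuffled[n_train + n_tune:]
    return train, tune, test

# Hypothetical dataset of anonymized patient record IDs.
records = [f"patient_{i}" for i in range(100)]
train, tune, test = split_dataset(records)
```

The key property is that the three subsets are disjoint and jointly cover the dataset, so performance measured on the test set reflects data the algorithm never saw during training or tuning.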

There is currently no standardized GMLP to guide manufacturers of AI/ML-based SaMDs. However, there are numerous community efforts related to GMLP and consensus standards development. In light of strong stakeholder support for GMLP, an FDA action item will be to encourage the harmonized development of GMLP through participation in these collaborative communities. GMLP development efforts will be pursued in close collaboration with the FDA's Medical Device Cybersecurity Program.

3) Patient-Centered Approach Incorporating Transparency to Users
The FDA recognizes the need for a patient-centered approach to the development and utilization of AI/ML-based devices. To that end, in October 2020, the FDA held a Patient Engagement Advisory Committee (PEAC) meeting focused on patient trust in AI/ML technologies.

Using the data gathered at the PEAC meeting, the FDA proposes to host a public workshop to share learnings and elicit input from the broader community on how device labeling supports transparency to users. The FDA will consider this input alongside the PEAC findings to identify the types of information manufacturers should include in the labeling of AI/ML-based medical devices to promote transparency to users. Moving forward, the FDA plans to participate in community efforts geared at promoting transparency of and trust in AI/ML-based medical devices.

4) Regulatory Science Methods Related to Algorithm Bias and Robustness
The action plan documents strong stakeholder interest in developing methods to evaluate and address algorithmic bias and resilience with respect to diverse patient populations and ever-changing clinical inputs and conditions. The FDA notes that, because AI/ML systems are developed and trained on historical datasets, they may internalize implicit biases present in the data and deploy them at scale—a phenomenon called "algorithmic bias." In addition, healthcare delivery is also known to vary according to demographic factors, adding a second dimension of challenge to fair and effective use of AI/ML in the medical device space.

For example, a study by Obermeyer et al. published in Science in October 2019 found evidence of racial bias in a widely used healthcare algorithm, such that the algorithm prioritized white patients over black patients for extra medical care.1 Although race was not an explicit factor in the algorithm's decision-making, the algorithm used health costs, including historical healthcare expenditures, as a proxy to evaluate a patient's care requirements. The correlation between race and economic inequalities—including with respect to healthcare expenditures—created an implicit bias in the algorithm's dataset that skewed performance of the algorithm towards white patients.
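The proxy-label mechanism described above can be made concrete with a small, entirely synthetic sketch. This is not the algorithm Obermeyer et al. studied; it is a hypothetical illustration of how ranking patients by a cost proxy can under-select a group with equal medical need but lower historical spending.

```python
# Hypothetical illustration of proxy-label bias. Two synthetic groups
# have identical underlying need, but group B has historically incurred
# lower healthcare costs. Ranking by cost then systematically excludes
# group B from the extra-care slots.

def rank_by_cost(patients, top_k):
    """Select the top_k patients with the highest recorded cost."""
    return sorted(patients, key=lambda p: p["cost"], reverse=True)[:top_k]

# Synthetic patients: equal need, unequal historical spending.
patients = (
    [{"group": "A", "need": 5, "cost": 1000 + i} for i in range(50)]
    + [{"group": "B", "need": 5, "cost": 600 + i} for i in range(50)]
)

selected = rank_by_cost(patients, top_k=50)
share_b = sum(p["group"] == "B" for p in selected) / len(selected)
```

Although "group" never appears in the ranking logic, every selected patient comes from group A, because cost encodes the historical disparity. This is the sense in which a facially neutral feature can import bias from the training data.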

Given the compounded risks of bias and inequity present in AI/ML-based SaMD, the FDA is collaborating with its university-based Centers for Excellence in Regulatory Science and Innovation to develop methods for the identification and elimination of bias in machine learning algorithms.

5) Real-World Performance (RWP)
The FDA's proposed TPLC regulatory approach seeks to embrace the iterative improvement power of AI/ML-based SaMD while assuring that patient safety is maintained. To meet these goals, the FDA proposes that modifications to SaMD applications may be supported by collection and monitoring of real-world performance data. The FDA suggests that collecting real-world data may allow manufacturers to better understand how their products are being used, identify risks or opportunities for improvement, and respond proactively to safety or usability concerns. This action is generally welcomed by manufacturers of digital health products and smart wearables that employ sensors to collect real-time physiological or environmental data, as such products are natural fits for this type of data generation.

In addition, the FDA states in the action plan that manufacturers may leverage real-world data collection and monitoring as a risk-mitigation mechanism in support of the benefit-risk profile assessed for a particular AI/ML-based SaMD marketing submission.

As might be expected, the notion of real-world data collection and reporting to the FDA raised numerous stakeholder questions. To develop a better understanding of the implications of real-world data collection for AI/ML-based SaMD development, the FDA plans to support voluntary real-world performance pilots in coordination with stakeholders and other FDA programs.


The FDA recognizes that AI/ML-based SaMD is a rapidly progressing field playing an ever-expanding role in healthcare. Stakeholders should look for and implement forthcoming FDA guidances, including those published by the newly established Digital Health Center of Excellence, a group housed within the Center for Devices and Radiological Health and tasked with innovating regulatory approaches and promoting collaborative initiatives for digital health advancements such as AI/ML-based devices. Stakeholders are encouraged to provide feedback through the public docket (FDA-2019-N-1185) or directly to the Digital Health Center of Excellence.

[1] Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations, 366 Science, Oct. 25, 2019, at 447.
