New AI Guidance: NIST Reveals First Version of AI Risk Management Framework

Akin Gump Strauss Hauer & Feld LLP

[co-author: Joseph Hold]

The National Institute of Standards and Technology (NIST) recently unveiled the first version of its Artificial Intelligence Risk Management Framework (AI RMF 1.0, or “Framework”). This highly anticipated and detailed Framework is intended as a voluntary guide for designing, developing, using and evaluating AI-related products and services with trustworthiness considerations in mind. Organizations can use the Framework to better prepare for the unique and often unpredictable risks associated with AI systems. Although there is no legal requirement to implement the Framework, it will likely be used to assess the reasonableness of AI technology, viewed in parallel with the Blueprint for an AI Bill of Rights in the U.S. (discussed here) and the European Union’s (EU) Artificial Intelligence Act (discussed here).

The Framework is divided into two parts. Part 1 describes the intended audience, explains how organizations can best frame AI risk, and outlines what trustworthy AI systems look like.1 Trustworthy systems, according to the Framework, are valid and reliable, secure, accountable and transparent, explainable, and privacy-enhanced, with harmful biases managed. Part 2 is the core of the guidance, laying out four functions that organizations should adopt to address the risks of AI systems.2 These functions are:

  • Govern – This function is about cultivating a risk management culture, including implementing appropriate structures, policies and processes to identify and manage AI risks. Risk management must be a priority for senior leadership, who set the tone for organizational culture, and for management, who align the technical aspects of AI risk management with organizational policies and operations. Unlike the other functions, which are specific to certain parts of the AI lifecycle, this function applies to all stages of an organization’s AI risk management process.
  • Map – This function is intended to enhance an organization’s ability to identify AI risks and their broader contributing factors. After documenting the intended purposes and expectations of the AI system, the organization should weigh its benefits and risks, relative to the status quo, for individuals, communities and other organizations. Contextual information must be considered, along with the specific methods used to complete the tasks the AI system will support and information on the system’s knowledge limits. The outcome of this function serves as the basis for the two functions that follow.
  • Measure – This function uses the information identified in the Map function, employing quantitative, qualitative or mixed-method risk assessment techniques and the input of independent experts to analyze and benchmark AI risks and their impacts. AI systems should be analyzed for trustworthy characteristics, social impact and human-AI configurations. The outcome of this function serves as the basis for the Manage function.
  • Manage – This function involves allocating resources to the mapped and measured risks, on a basis defined by the Govern function. Identified risks must be managed to increase transparency and accountability, with higher-risk AI systems prioritized. After determining whether the AI system achieves its intended purpose, risk treatment must be prioritized based on impact. Systems producing outcomes inconsistent with their intended purpose should be superseded, disengaged or deactivated. Organizations should continue to apply risk management over time as new and unforeseen methods, needs, risks or expectations emerge.

Comments on the Framework will be accepted until February 27, 2023, with an updated version set to launch in spring 2023.

1 In this Framework, “AI system” is defined as “an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments” designed to operate with varying levels of autonomy.

2 Dep’t of Com., NIST, Artificial Intelligence Risk Management Framework (January 26, 2023), available at https://www.nist.gov/itl/ai-risk-management-framework.
