The National Institute of Standards and Technology (NIST) published its Artificial Intelligence Risk Management Framework (NIST AI 100-1) in January 2023.
The NIST AI Risk Management Framework consists of 19 categories and 72 subcategories organized within the following four core functions:
- Govern
- Map
- Measure
- Manage
In prior articles, we focused on considerations for assessing and implementing the Govern, Map, and Measure functions of the NIST AI Risk Management Framework. In this article, we focus on implementing the Manage function.
The Manage function includes four categories and 13 subcategory controls as listed in Table 1 below.
How can organizations use the NIST AI Risk Management Framework controls to assess activities that involve AI systems under the Manage function?
Along with the NIST AI Risk Management Framework, NIST also published the NIST AI RMF Playbook, which contains supporting actions and considerations for each subcategory control.
Below are example questions to focus on when assessing an organization’s current AI compliance posture relative to the Manage function within the NIST AI Risk Management Framework:
- How do the technical specifications and requirements align with the AI system’s goals and objectives?1
- What assessments has the entity conducted on data security and privacy impacts associated with the AI system?2
- Does your organization have an existing governance structure that can be leveraged to oversee the organization’s use of AI?3
- Has the system been reviewed to ensure the AI system complies with relevant laws, regulations, standards, and guidance?4
- Did your organization implement a risk management system to address risks involved in deploying the identified AI solution?5
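For organizations that want to track responses to these assessment questions in a structured way, a lightweight internal checklist can help. Below is a minimal sketch in Python; the question texts, owners, and status labels are illustrative assumptions rather than anything prescribed by the NIST AI RMF or the Playbook.

```python
from dataclasses import dataclass, field

# Hypothetical assessment checklist for the Manage function.
# Question texts, owners, and status labels are illustrative only.

@dataclass
class AssessmentItem:
    question: str          # assessment question being tracked
    owner: str             # team responsible for answering it
    status: str = "open"   # "open", "answered", or "gap identified"
    evidence: list = field(default_factory=list)  # links to supporting documentation

def open_gaps(items):
    """Return items that currently represent compliance gaps."""
    return [i for i in items if i.status == "gap identified"]

checklist = [
    AssessmentItem(
        question="How do technical specifications align with the AI system's goals and objectives?",
        owner="Engineering",
    ),
    AssessmentItem(
        question="What data security and privacy impact assessments were conducted?",
        owner="Privacy Office",
        status="gap identified",
    ),
]

for item in open_gaps(checklist):
    print(f"Gap identified: {item.question} (owner: {item.owner})")
```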
What should companies consider implementing to support alignment with the NIST AI Risk Management Framework Manage function?
After assessing and documenting activities that involve AI systems against the Manage function, below are examples of AI compliance management activities to assist organizations in remediating gaps or demonstrating compliance readiness and maturity (an illustrative sketch follows the list):
- Regularly track and monitor negative risks and benefits throughout the AI system lifecycle, including through post-deployment monitoring.6
- Assign risk management resources relative to established risk tolerance. AI systems with lower risk tolerances receive greater oversight, mitigation, and management resources.7
- Identify risk response plans, resources, and organizational teams responsible for carrying out response functions.8
- Document residual risks within risk response plans, denoting risks that have been accepted, transferred, or subject to minimal mitigation.9
- Identify resource allocation approaches for managing risks in systems deemed high-risk.10
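To make these activities more concrete, the sketch below shows one hypothetical way to represent an AI risk register in which risks are tracked across the lifecycle, oversight is weighted by risk tolerance, and residual risks are documented as accepted, transferred, or minimally mitigated. The field names, tolerance tiers, and example entries are assumptions for illustration and are not defined by the NIST AI RMF.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical AI risk register entry; field names, tolerance tiers, and
# example data are illustrative assumptions, not NIST-defined structures.

@dataclass
class RiskEntry:
    risk_id: str
    description: str
    lifecycle_stage: str   # e.g., "design", "deployment", "post-deployment"
    risk_tolerance: str    # "low", "medium", or "high"
    response: str          # "mitigate", "accept", "transfer", or "minimal mitigation"
    owner: str
    last_reviewed: date
    notes: list = field(default_factory=list)

def oversight_level(entry: RiskEntry) -> str:
    """Systems with lower risk tolerance receive greater oversight and resources."""
    return {"low": "enhanced", "medium": "standard", "high": "baseline"}[entry.risk_tolerance]

def residual_risks(register):
    """Residual risks: those accepted, transferred, or only minimally mitigated."""
    return [r for r in register if r.response in ("accept", "transfer", "minimal mitigation")]

register = [
    RiskEntry("R-001", "Model drift degrades accuracy after deployment",
              "post-deployment", "low", "mitigate", "ML Ops", date(2023, 6, 1)),
    RiskEntry("R-002", "Third-party training data licensing uncertainty",
              "design", "high", "accept", "Legal", date(2023, 6, 1)),
]

for r in residual_risks(register):
    print(f"{r.risk_id}: residual risk ({r.response}); oversight level: {oversight_level(r)}")
```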
The Manage function focuses on controlling risks associated with AI systems, including notifying the appropriate individuals within the organization of such risks and tracking those risks through remediation. The Manage function also aligns with Article 61 of the EU AI Act, which addresses post-market monitoring. Under this article, providers should establish a post-market monitoring system to actively and systematically collect, document, and analyze relevant data on the performance of high-risk AI systems throughout their lifetime and to ensure continual compliance with the requirements set out in Title III, Chapter 2 of the EU AI Act.
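As a rough illustration of what actively and systematically collecting, documenting, and analyzing performance data could look like in practice, the sketch below records a performance observation for a deployed model and flags values that fall below an agreed threshold. The metric, threshold, model identifier, and log file path are assumptions for illustration; neither the EU AI Act nor the NIST AI RMF prescribes them.

```python
import json
from datetime import datetime, timezone

# Hypothetical post-deployment monitoring check. The metric, threshold,
# model identifier, and log path are illustrative assumptions only.

ACCURACY_THRESHOLD = 0.90   # agreed minimum performance level
LOG_PATH = "post_market_monitoring.jsonl"

def record_observation(model_id: str, accuracy: float) -> dict:
    """Document a single monitoring observation and flag threshold breaches."""
    observation = {
        "model_id": model_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "accuracy": accuracy,
        "within_threshold": accuracy >= ACCURACY_THRESHOLD,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as log_file:
        log_file.write(json.dumps(observation) + "\n")
    return observation

obs = record_observation("example-model-v2", accuracy=0.87)
if not obs["within_threshold"]:
    print("Escalate: performance fell below the agreed threshold; review the risk response plan.")
```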
Notes:
1. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 174.
2. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 176.
3. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 176.
4. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 178.
5. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 179.
6. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 174.
7. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 176.
8. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 178.
9. NIST AI 100-1. NIST AI RMF Playbook. January 2023. Page 180.