Beyond the binary: How NIST is shaping the policies and practices of AI innovation and risk management

Eversheds Sutherland (US) LLP

On September 24 and 25, 2024, the National Institute of Standards and Technology (NIST) convened a symposium[1] to generate new insights about the next steps needed to unleash AI innovations that will enable trust in this technology. The event highlighted the recent body of work that NIST has built on its foundational AI Risk Management Framework (AI RMF). The goal of the symposium was to foster increased awareness, coordination and collaboration across industry, civil society[2] and government on AI standards and risk management approaches to guide and advance responsible innovation throughout the AI ecosystem.

Three key themes emerged from the conversations at the symposium:

  1. We must extend our approach to risk management beyond the binary relationship between developers and deployers to address and mitigate risks at each link along the entire AI value chain.[3]
  2. International standards for AI are coming, just not all at once.
  3. AI defies traditional models of regulation, whether horizontal or vertical, comprehensive or targeted.

Theme 1: Beyond Developers and Deployers – Expanding Our Understanding of Risk in the AI Value Chain

Risk across the AI ecosystem is distributed more broadly than between just the developer and the deployer of an AI system. Especially for generative AI and machine learning (ML) tools, different actors introduce new sources of potential risk through the layered processes of data curation, model training and application development needed to produce marketable AI products. NIST’s AI RMF provides organizations with a function-based approach for aligning and assigning their risk management activities at each stage of a particular AI solution’s lifecycle.

Panelists at the symposium pointed to NIST’s AI RMF Playbook as an effective resource for tailoring the framework’s principles to the processes and functions companies use to deploy their AI systems. They observed that while complexity and variety generally define the AI value chain, AI system development largely tracks a predictable and sequential process, in which responsibility for risk management can often be assigned in advance. This reinforces the importance of systemic governance plans and oversight frameworks as a reliable methodology for anticipating potential risks and mitigating likely harms.

Detailed contracts will be a crucial first step to clarifying oversight, evaluation and correction roles for AI system deployments, but the vast array of interactions among end users, platform and software providers and other infrastructure actors in the AI technology stack will require more complex governance models.

Organizations were urged to embrace sociotechnical risk evaluations to better understand the real-world and human impacts of their AI portfolios. Especially given the lack of consensus surrounding the technological standards and scientific measures for the safety, accuracy and effectiveness of AI systems, focusing on just one aspect, such as model or application capability, creates a social, and potentially legal, blind spot in proactive risk management efforts.

Theme 2: Waves of AI Standards are Coming

With the rapid influx of AI systems and solutions entering the marketplace, there is a clear need for standards to assess reliability and promote interoperability. NIST, in its Plan for Global Engagement on AI Standards published in July 2024, introduced priority topics for international standards development and urged the expedited development of standards where scientific consensus already exists. The core topics NIST has prioritized and recognized as ripe for standardization are:

  1. Terminology and Taxonomy
  2. Measurement, Methods and Metrics
  3. Mechanisms for Enhancing Awareness and Transparency About the Origins of Digital Content
  4. Risk-Based Management of AI Systems
  5. Security and Privacy
  6. Transparency Among AI Actors About System and Data Characteristics
  7. Training Data Practices
  8. Incident Response and Recovery Plans

While issues like explainability and interpretability remain front of mind for researchers and policymakers, NIST and the experts at the symposium recognized that these issues currently lack the robust scientific foundation necessary to build consensus standards. Similarly, measuring the environmental impact of an AI system’s lifecycle was singled out for discussion by NIST in its strategic plan, as well as at the conference, but the dominant view among panelists and experts is that the available data lack the clarity and transparency necessary for reaching reliable scientific conclusions about the climate risks of AI.

Industry and civil society leaders at the symposium coalesced around a tiered, prioritized approach for organizations like ISO (International Organization for Standardization), INCITS (InterNational Committee for Information Technology Standards) and IEEE (Institute of Electrical and Electronics Engineers) that are working on these issues. Participants also recognized the importance of broadening the tent of stakeholders involved in standards development to increase the credibility and durability of the community’s work. While a comprehensive standards framework is unlikely to emerge in the immediate future, piecemeal efforts to harmonize standards for AI development, focused on definitions, testing, evaluation, verification and validation, are on the horizon and will come in waves.

Theme 3: AI Regulation and Standards Might Need to Break the Mold

Efforts to regulate AI in the US are generally fragmented, with regulation narrowly focused on specific economic sectors (e.g., insurance, securities, energy), particular outputs (e.g., synthetic content or deepfakes) or isolated moments in the AI lifecycle (e.g., training). The lack of a comprehensive federal AI law means that businesses increasingly must comply with a variety of state laws and regulations, which can be complex and inconsistent.

Panelists at the symposium grappled with the idea that AI might span too many basic technologies, and its applications may bridge too many industries, for traditional horizontal or vertical regulations to be effective. They discussed how AI stresses the seams between data and cloud, compute and user interface, cyber and privacy, and proprietary software and vendor-managed solutions. The NIST conference also demonstrated the expanse of the AI research agenda, highlighting how tools for data curation, content provenance and bias detection can be just as sophisticated and complicated as, and no less essential than, the rules shaping the scale and limits of new frontier and foundation models.

NIST and other government agencies are intent on operationalizing and implementing the Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110). Shortly after the NIST symposium, the White House Office of Management and Budget (OMB) released M-24-18, a guidance memo detailing federal procurement policies for agency acquisitions of AI.[4] On the international stage, the State Department has published a Global AI Research Agenda and an AI in Global Development Playbook to help shape the future of AI diplomacy, sustainable development and global cooperation on AI’s equitable and responsible deployment around the world.

In addition to US voices, the symposium showcased the global attention to and demand for coherent, interoperable frameworks and collaborative initiatives in developing best practices for AI risk management. An OECD representative discussed its catalog of metrics and tools for trustworthy AI,[5] while officials from Japan and Singapore described ways their governments have interfaced with the NIST AI RMF in developing “crosswalks” and refining their own national frameworks for AI governance.[6] A US diplomat also spotlighted the inaugural convening of the International Network of AI Safety Institutes, to be held this November in San Francisco, which will continue advancing global coordination and collaboration on AI governance.

Conclusion

The NIST symposium demonstrated why risk management models must move beyond the binary relationship between developers and deployers to reflect the complexity and variety of entities and actors in the AI value chain. Companies and organizations should anticipate that laws and regulations for safeguarding AI could take new forms and approaches that break the current sector-based or even risk-based mold. We can also expect the pace of international AI standards development to accelerate, with standards focused on testing, evaluation, verification and validation on the horizon, arriving in waves.

__________

[1] Recordings, transcripts and the agenda from the symposium are available on the NIST event website: https://www.nist.gov/news-events/events/2024/09/unleashing-ai-innovation-enabling-trust
[2] “Civil society” refers to the collective of non-governmental organizations, institutions, and individuals that operate independently from the government to express and represent a wide range of public and societal interests.
[3] The “AI Value Chain” is understood as the range of technological resources, organizational processes and business solutions that together are all necessary to develop, deploy and maintain the software, hardware and user interfaces of an AI system.
[4] OMB had previously issued M-24-10, which required all federal agencies to develop and issue a compliance plan for ensuring responsible AI governance, innovation and risk management across the government, and prioritizing the rights and safety of the public.
[5] OECD, in collaboration with several other governmental and non-governmental organizations, has made a searchable database of resources, tools, metrics and guides available to the public to help facilitate how AI actors are developing and using trustworthy AI that aligns with principles of fairness, transparency, explainability, robustness, safety and security.
[6] Both Japan and Singapore, in coordination with NIST, have published resources that translate and connect the concepts, principles and processes in the AI RMF to their own domestic guidelines and risk management frameworks.
DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Eversheds Sutherland (US) LLP
