OMB Proposes Far-Reaching AI Risk Management Guidance Following AI Executive Order

Wiley Rein LLP

On November 1, 2023, the Office of Management and Budget (OMB) released a Proposed Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (Draft Memorandum), which aims to provide guidance and establish a set of evaluation, monitoring, and risk mitigation practices for federal agencies regarding their use of artificial intelligence (AI) technology. As described by the White House, OMB will “establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI.”

This effort builds on other executive AI initiatives, including the Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework (NIST AI RMF), and is one of the first actions taken in response to the White House’s landmark Executive Order on AI, which was released just two days before OMB’s Draft Memorandum.

Below, we detail three key takeaways from the Draft Memorandum, and discuss how this substantial set of guidance may impact the private sector, including government contractors. Companies interested in weighing in with OMB have until December 5, 2023 to submit comments.

Key Takeaways from OMB’s Draft Policy

The Draft Memorandum covers three main areas: 1) strengthening AI governance; 2) advancing responsible AI innovation; and 3) managing risks from the use of AI.

1. Strengthening AI Governance.

Consistent with the AI Executive Order, OMB proposes to direct agencies to designate a Chief AI Officer to serve as the single point of contact for AI oversight within the agency. The Chief AI Officer would advise agency leadership on using AI to advance the agency’s mission, identify ways to mitigate the unique risks AI may present to the agency, and expand external reporting on both items. Many agencies, particularly larger agencies and those already using AI, may already have officials tasked with overseeing AI use. OMB’s policy would require those agencies to establish internal mechanisms to ensure that agency leadership and the Chief AI Officer are informed of all matters involving AI within the agency. OMB also proposes that agencies submit to OMB, and publish publicly, their plans for complying with OMB’s guidance, along with an inventory of AI use cases.

2. Advancing Responsible AI Innovation.

OMB proposes to mandate that federal agencies develop and publish individual AI strategies that will govern their plans for advancing AI use within the agency and identify areas of potential investment in AI infrastructure and tools. As part of those anticipated strategies, agencies would be held accountable for the responsible use of AI within the agency, including providing for sufficient data sharing, identifying gaps in the agency’s AI workforce, and updating cybersecurity processes to better align with the needs of AI systems. Where applicable, OMB expects agencies to consider whether they would benefit from the use of generative AI specifically and, if so, to introduce appropriate safeguards to ensure its safe and responsible use.

3. Managing Risks from the Use of AI.

OMB recognizes the benefits AI can provide to the government and, by extension, the public, but it also recognizes the potential associated risks. To this end, OMB proposes to require that agencies implement “minimum practices” for what the Draft Memorandum terms “rights-impacting” and “safety-impacting” AI. If an AI system is determined to be rights-impacting or safety-impacting, the agency would be required to apply a “minimum baseline” of practices to manage its risks, including conducting AI impact assessments and independent evaluations, testing AI outside of a lab setting, identifying potential discrimination and bias in the underlying algorithms and mitigating their effects on users, keeping the public informed of the agency’s implementation of AI systems and the policies that will govern those systems, seeking the public’s input on negative interactions with AI, and notifying individuals potentially harmed by the agency’s use of AI so that potential remedies can be identified. Additionally, agencies are encouraged to leverage existing guidance such as the NIST AI RMF and the Blueprint for an AI Bill of Rights.

Potential Impact on the Private Sector

While the Draft Memorandum proposes guidance for federal agencies, it could have both direct and indirect impacts on the private sector. For example, OMB states that it is trying to help shape procurement procedures for AI and that it intends to develop a system for ensuring federal contracts align with the AI Executive Order. More generally, OMB’s guidance will touch nearly every agency across the government and seeks to serve as a model for AI governance, both inside and outside of the government context.

As such, companies and other stakeholders in this area should share their insights now as OMB considers final guidance. Comments are due December 5, 2023.
