OMB Releases Draft Guidance to Agencies on Implementing Biden’s AI Executive Order — AI: The Washington Report

[co-author: Raj Gambhir]

Welcome to this week's issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.

The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have sharply increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of the Washington-focused set of potential legislative, executive, and regulatory activities.

This issue covers key developments coming in the aftermath of President Biden’s October 30, 2023 executive order (“EO”) on AI. Our initial takeaways are:

  1. The Office of Management and Budget has released a draft memorandum providing guidance to agencies on implementing Biden’s executive order. The guidance would direct agencies to designate a Chief AI Officer, invest in the infrastructure needed to leverage AI, and implement certain oversight and testing regimes when using AI in a manner that impacts rights or safety. The comment period on the draft memorandum closes on December 5, 2023.
  2. Representatives of the United States, including Vice President Harris, attended the United Kingdom’s AI Safety Summit and joined leaders from almost 30 nations and international organizations in signing the Bletchley Declaration. The signatories of the Bletchley Declaration resolved “to work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.”
  3. Delivering remarks at the AI Safety Summit, Vice President Kamala Harris emphasized the need for AI regulation to address not only the threats that AI poses to humanity as a whole but also those it poses to individuals, especially people from marginalized backgrounds.

Off to the Races with Biden’s AI EO

“The FTC is firing on all cylinders,” said Federal Trade Commission (“FTC”) Chair Lina Khan during a November 2 speaking engagement at Stanford University. Buoyed by the explicit encouragement in Biden’s October 30 AI EO to consider exercising FTC authority “to ensure fair competition in the AI marketplace,” Khan reiterated that there “is no exemption from the laws on the books” for AI and that the FTC will be “clear-eyed in ensuring that claims of innovation are not used as cover for lawbreaking.”

As discussed in last week’s newsletter, President Biden’s long-expected executive order on AI has set in motion programs and policies across the federal bureaucracy. In a statement to his cabinet in early October, Biden is reported to have asserted that the EO would impact the work of every department and agency. “That’s not hyperbole,” said the President, according to one meeting participant. “The rest of the world is looking to us to lead the way.”

In this week’s newsletter, we discuss two developments that have come in the immediate aftermath of the AI EO’s promulgation: a draft memorandum from the Office of Management and Budget (“OMB”) providing direction to agencies on how to comply with the AI EO, and the signing of the Bletchley Declaration by almost 30 countries and international bodies.

OMB Draft Memorandum on the Implementation of Biden’s AI EO

President Biden’s AI EO assigns agencies across the federal bureaucracy a series of AI-related procedures and policy changes to be implemented over the coming months. On November 1, the White House announced that the OMB is releasing for comment a draft policy providing guidance to agencies on how to implement the EO’s directives. The comment period on the draft memorandum closes on December 5, 2023.

The memo “directs agencies to advance AI governance and innovation while managing risks from the use of AI, particularly those affecting the safety and rights of the public,” and is designed to “address risks specifically arising from the use of AI, as well as governance and innovation issues that are directly tied to agencies’ use of AI.”

Regarding the use of AI by federal agencies, the memo has three primary focuses: strengthening AI governance, advancing responsible AI innovation, and managing risks from the use of AI.

Strengthening Artificial Intelligence Governance

Agencies must designate a Chief AI Officer (“CAIO”) within sixty days of the issuance of the memorandum. The primary responsibilities of each agency’s CAIO include “coordinating their agency’s use of AI, promoting AI innovation, managing risks from the use of AI, and carrying out the agency responsibilities,” as defined in Biden’s AI EO.

Within this same time frame, all 24 agencies identified in the Chief Financial Officer Act (“CFO Act Agencies”) must convene relevant senior officials to coordinate agency AI issues.[1] “CFO Act agencies are required specifically to establish AI Governance Boards to convene relevant senior officials no less than quarterly to govern the agency’s use of AI, including to remove barriers to the use of AI and to manage its associated risks.”

Within 180 days of the issuance of the memorandum and every two years thereafter until 2036, agencies must make publicly available and submit to the OMB either a plan to remain in compliance with the memorandum or a determination that the agency does not plan to use covered AI.

Finally, each agency (excluding the Department of Defense and the Intelligence Community) must annually submit an inventory of its AI use cases to the OMB and post the inventory publicly on the agency’s website.

Advancing Responsible Artificial Intelligence Innovation

Within one year of the issuance of the memorandum, each CFO Act agency must release and make publicly accessible an agency-wide strategy for encouraging the responsible use of AI. This strategy should cover topics including the agency’s primary AI use cases, an assessment of the agency’s AI maturity and workforce capacity, and plans for effectively governing agency use of AI.

Agencies are also directed to invest in certain categories of infrastructure to facilitate the responsible adoption of AI technologies. These include:

  1. Information Technology Infrastructure. Agencies should ensure that they have sufficient computing infrastructure, along with adequate access to software tools, to “rapidly develop, test, and maintain AI applications.”
  2. Data. Agencies should develop the infrastructure to curate agency datasets “for use in training, testing, and operating AI.”
  3. Cybersecurity. As necessary, agencies should update cybersecurity processes to address the needs of AI applications.
  4. Workforce. Agencies should fill gaps in AI talent, addressing the need for both technical and non-technical workers “whose contribution and competence with AI are important for successful and responsible AI outcomes.”
  5. Generative AI. Agencies are directed to assess how generative AI may be leveraged “without posing undue risk.”

Managing Risks from the Use of Artificial Intelligence

The draft memorandum outlines two categories of AI use that will be subject to further scrutiny.

  1. Safety-Impacting. AI purposes that are presumed to be safety-impacting include those that manage critical infrastructure, the movement of vehicles, industrial emissions processes, and access to government facilities.
  2. Rights-Impacting. AI purposes that are presumed to be rights-impacting include those that limit the reach of protected speech, are used by law enforcement for the purpose of surveillance or crime forecasting, make clinical diagnoses, and play a role in the loan allocation process.

The draft memorandum would establish certain steps agencies must take prior to utilizing AI in a safety-impacting or rights-impacting fashion. These requirements, effective August 1, 2024, include completing an AI impact assessment, testing the AI for performance in a real-world context, and independently evaluating the AI. While using AI systems in a safety-impacting or rights-impacting fashion, agencies must put in place infrastructure to conduct ongoing reviews of these systems, mitigate emerging risks to rights and safety, ensure that human review is involved in the functioning of these systems, provide public notice and documentation of the AI use, and more.

According to the text of the draft memorandum, an interagency council convened by the director of OMB would create a list of recommended documentation that agencies should require from a selected vendor in order to fulfill a federal AI contract. Agencies would be directed to conduct AI procurement in a manner that is lawful, protects “human and civil rights, and civil liberties,” promotes competition, and ensures that the government retains “sufficient rights to data and any improvements to that data so as to avoid vendor lock-in…”

The Bletchley Declaration and the United Kingdom’s AI Safety Summit

The AI EO was released just hours before Vice President Harris began her scheduled trip to London to attend the AI Safety Summit. The almost 30 countries and international organizations in attendance signed the Bletchley Declaration (“Declaration”), a joint statement on the need for governmental and non-governmental actors to work towards the development of responsible AI.

The Declaration asserts that “AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible.” The signatories resolved “to work together in an inclusive manner to ensure . . . AI that is safe, and supports the good of all through existing international fora and other relevant initiatives, to promote cooperation to address the broad range of risks posed by AI.”

At the AI Safety Summit, Vice President Harris delivered remarks emphasizing the need for AI regulation to address not only the threats that AI poses to humanity as a whole but also those it poses to individuals. Harris asserted that “all leaders from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure that AI is adopted and advanced in a way that protects the public from potential harm and that ensures that everyone is able to enjoy its benefits.”

Reiterating the familiar theme that AI has the potential to generate profound good and profound harm, Harris argued for a more capacious understanding of the “existential threats” of AI. “When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him? When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her?”

In light of this broader understanding, Harris urged those in attendance to “consider and address the full spectrum of AI risk — threats to humanity as a whole, as well as threats to individuals, communities, to our institutions, and to our most vulnerable populations.” To that end, Harris asserted the Biden administration’s commitment to “working with our partners in Congress to codify future meaningful AI and privacy protections.”

We will continue to monitor, analyze, and issue reports on these developments.

Endnotes


[1] The U.S. Chief Information Officers Council maintains a list of all twenty-four CFO Act agencies.



© Mintz - Antitrust Viewpoints | Attorney Advertising
