Foreshadowing Biden’s AI Executive Order? — AI: The Washington Report

[co-author: Raj Gambhir]

Welcome to this week's issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.

The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have exponentially increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of the potential legislative, executive, and regulatory activities taking shape in Washington.

This issue covers the straws in the wind regarding President Biden’s much-anticipated, forthcoming executive order on AI. Our key takeaways are:

  1. Excerpts from a previously unreleased State Department letter criticizing certain provisions of the European Union’s (“EU”) AI Act suggest discomfort in some quarters of the Biden administration with the regulatory approach to AI taken by the EU.
  2. The EU’s AI Act adopts a “risk-based approach” under which different uses of AI are regulated on the basis of their “risk to the health and safety or fundamental rights of natural persons.” Certain uses are banned, while others are subject to close scrutiny.
  3. The State Department letter, along with comments provided by sources familiar with the drafting of the forthcoming executive order, indicates that the executive order will rely on voluntary guidelines on testing and evaluating AI systems. This method of AI regulation accords with the approach taken by the Biden administration over the last few months. According to these same sources, the executive order will also address national security and workforce development concerns associated with AI.

Biden’s Forthcoming AI Executive Order

On July 21, 2023, the White House announced that President Biden will sign an executive order “to help America lead the way in responsible innovation” of AI. It is expected that this executive order will be promulgated in the coming months. In late September, President Biden commented, “This fall, I’m going to take executive action, and my administration is going to continue to work with bipartisan legislation so America leads the way toward responsible AI innovation.”

The imminent promulgation of this executive order raises an important question: will the Biden administration seek to follow the model set by the European Union’s AI Act, or will it encourage the development of a distinctly American mode of AI regulation? A previously unreleased document from the State Department suggests that the executive order may chart a path toward AI regulation that differs from the EU’s standard.

The European Union’s AI Act

Over the course of this year, Congress has devoted considerable energy to the subject of AI. Dozens of piecemeal measures have been introduced since January. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) have gone a step further, releasing a framework for a comprehensive AI standard. Senator Schumer has convened the inaugural AI Insight Forum to encourage the development of comprehensive AI legislation.

While Congress has turned its attention to AI, the European Union has raced ahead. In April 2021, the European Commission (“EC”) proposed the Artificial Intelligence Act. Earlier this year, members of the European Parliament adopted negotiating positions on the AI Act, paving the way for discussions regarding a final version of the law. A final version of the AI Act is expected by the end of the year, although it may not go into full effect until 2025 or 2026.

As discussed in a previous newsletter, the AI Act adopts a “risk-based approach” under which different uses of AI are regulated on the basis of their “risk to the health and safety or fundamental rights of natural persons.” Certain uses, such as “AI-based social scoring for general purposes done by public authorities,” are banned, while “specific restrictions and safeguards” are applied to “high-risk AI systems.” Other specified AI systems, including those used to categorize individuals on the basis of biometric data, are subject to transparency obligations.

The Biden Administration’s “Qualified Disagreement” with the European Union’s Approach

Analysts have generally interpreted the Biden administration’s position on the EU’s AI Act as one of qualified disagreement.

In October 2022, a European news outlet reported on a document drafted by US government officials and sent to the European Commission, which argued that the AI Act is overly broad in its regulatory prescriptions. The document asserted that the regulation’s definition of AI “still includes systems that are not sophisticated enough to merit special attention under AI-focused legislation, such as hand-crafted rules-based systems.”

A year on, another document critical of the EU AI Act has become public. On October 6, 2023, Bloomberg reported on a State Department letter criticizing the EU’s AI Act.

The document warned that as drafted, the AI Act risks “dampening the expected boost to productivity and potentially leading to a migration of jobs and investment to other markets.” The act’s requirements would curtail “investment in AI R&D and commercialization in the EU, limiting the competitiveness of European firms,” according to the document’s drafters. The document also suggested line-by-line revisions to certain provisions of the AI Act.

Glimpses of Biden’s AI Executive Order

The State Department letter, along with other critiques of the AI Act coming out of the Biden administration, hint that Biden’s forthcoming executive order will encourage the creation of a regulatory framework for AI that differs in key respects from the emerging European model.

This reading is supported by hints coming from the White House itself. Sources have reported that the forthcoming executive order will rely on a framework of voluntary commitments to guidelines on testing and evaluating AI systems. Absent action from Congress, the Biden administration has relied heavily on this model of voluntary AI regulation. As discussed in previous newsletters, the administration developed a Blueprint for an AI Bill of Rights and secured commitments on AI safety from a group of top technology companies.

According to these same sources, the executive order will also focus on two issues of great concern to Congress and the general public: AI’s impact on national security and on the workforce. To protect against the possibility of malign actors gaining access to powerful AI models, the executive order is expected to require cloud computing firms to track the entities developing certain AI models on their systems. Additionally, the executive order is expected to implement measures to facilitate AI education, training, and talent recruitment.

Regardless of the exact form that the executive order ultimately takes, it is likely to differ substantially in its approach from that of the AI Act due to both limitations on executive power and the Biden administration’s apparent skepticism regarding the EU’s approach towards AI regulation.

The executive order is expected to be released within the coming weeks, possibly as soon as late October 2023. We will continue to monitor, analyze, and issue reports on these developments.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Mintz | Attorney Advertising
