Biden Administration Signs Executive Order on Artificial Intelligence

McDonnell Boehnen Hulbert & Berghoff LLP

On October 30, President Biden signed a sprawling executive order governing the development, testing, and use of artificial intelligence (AI). Formally titled the "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," the order sets forth guiding principles and actions to be taken by federal agencies. These actions include research; coordination with industry, academia, and the international community; promulgation of regulations; publication of reference materials; and establishment of a White House Artificial Intelligence Council to assist the administration with all of the above.

The order does not have the staying power of legislation, as it may effectively expire or be revoked after Biden leaves office. Nonetheless, it touches on a significant number of areas relating to AI and may serve to kick-start a regulatory framework for these advancing technologies.

The motivation behind the order is undoubtedly the recent emergence of generative AI, which has been rapidly adopted by businesses and the public. This includes well-known large language models (LLMs) and image generation models, as well as newer and still-evolving video generation and music generation models. Generative AI has proven to be revolutionary compared to traditional machine learning techniques, most of which are focused on accurate classification of information. It has also been developing very quickly, as the LLMs and image generation models of 2023 are notably better than those of 2022.

This has led to a number of concerns regarding the training and use of generative AI, such as deepfakes, discrimination, job displacement, intellectual property infringement, privacy violations, misinformation, and weapons development, just to name a few. Popular media of the last several decades has mostly emphasized AI's potential for harm rather than its beneficial uses. Consequently, a natural response to these risks is to catastrophize. That said, anyone who is not feeling anxiety about the potential misuses of AI has probably not thought about them thoroughly. On the other hand, AI has great potential to cure diseases, simplify artistic processes, democratize education, offload boring and mundane manual tasks from humans, adapt content for disabled individuals, and advance science and technology across the board.

Thus, the order was probably inevitable and should be viewed as a positive first step. Of course, the devil is in the details and the order is lengthy. The purpose of this article is to introduce the content of the order by focusing on its eight partially-overlapping guiding principles and briefly touching on how they are intended to be applied.

Safety and security: The order states that "[m]eeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use." These risks include those related to "biotechnology, cybersecurity, critical infrastructure, and other national security dangers." The order proposes to establish testing, evaluation, and performance monitoring standards and practices, as well as "effective labeling and content provenance mechanisms, so that Americans are able to determine when content is generated using AI and when it is not."

Promoting responsible innovation, competition, and collaboration: The goal of this principle is to facilitate the United States being a leader in all things AI. "This effort requires investments in AI-related education, training, development, research, and capacity, while simultaneously tackling novel [IP] questions and other problems to protect inventors and creators." The order also emphasizes a need for fairness and competition by "stopping unlawful collusion and addressing risks from dominant firms' use of key assets such as semiconductors, computing power, cloud storage, and data to disadvantage competitors, and . . . supporting a marketplace that harnesses the benefits of AI to provide new opportunities for small businesses, workers, and entrepreneurs."

Support of American workers: The order indicates that the administration "will seek to adapt job training and education to support a diverse workforce and help provide access to opportunities that AI creates." However, "AI should not be deployed in ways that undermine rights, worsen job quality, encourage undue worker surveillance, lessen market competition, introduce new health and safety risks, or cause harmful labor-force disruptions."

Equity and civil rights: The order recognizes that "[AI] systems deployed irresponsibly have reproduced and intensified existing inequities, caused new types of harmful discrimination, and exacerbated online and physical harms." Being able to trust that new AI systems are treating individuals fairly is critical to AI's growth and adoption. Thus, Biden seeks to "promote robust technical evaluations, careful oversight, engagement with affected communities, and rigorous regulation."

Consumer protection: Continuing on the goal of trust, the order states that the government will "enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI." It notes that these laws and safeguards are particularly relevant to "critical fields like healthcare, financial services, education, housing, law, and transportation" where unregulated use of AI could cause great harm to individuals and society.

Privacy and civil liberties: A further goal is to combat the risk that sensitive "personal data could be exploited and exposed." Here, the order instructs agencies to "use available policy and technical tools, including privacy-enhancing technologies (PETs) where appropriate, to protect privacy and to combat the broader legal and societal risks — including the chilling of First Amendment rights — that result from the improper collection and use of people's data."

The Federal Government's use of AI: The order seeks to "attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines . . . and ease AI professionals' path into the Federal Government to help harness and govern AI." This would also involve training the federal workforce to "understand the benefits, risks, and limitations of AI for their job functions, and . . . ensure that safe and rights-respecting AI is adopted, deployed, and used."

International leadership: Finally, the order points to the importance of working with the international community on issues of AI development as well as safeguards. Thus, the administration intends to "engage with international allies and partners in developing a framework to manage AI's risks, unlock AI's potential for good, and promote common approaches to shared challenges." Particular emphasis is on promoting "responsible AI safety and security principles and actions with other nations."

The meat of the order focuses on initial steps to be taken in these directions, many of which include specific requirements and timelines. For example, the order instructs the Secretary of Energy to develop a plan for "tools to evaluate AI capabilities to generate outputs that may represent nuclear, nonproliferation, biological, chemical, critical infrastructure, and energy-security threats or hazards" within 270 days.

A type of AI model that the order specifically addresses is a so-called "dual-use foundation model," defined as "an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety." Currently, generative AI models would likely fall into this category. The order requires companies developing or planning to develop dual-use foundation models to periodically report to the government on:

• "any ongoing or planned activities related to training, developing, or producing dual-use foundation models, including the physical and cybersecurity protections taken to assure the integrity of that training process against sophisticated threats;"

• "the ownership and possession of the model weights of any dual-use foundation models, and the physical and cybersecurity measures taken to protect those model weights;" and

• "the results of any developed dual-use foundation model's performance in relevant AI red-team testing." To this end, the order also instructs the National Institute of Standards and Technology (NIST) to develop red-team testing standards that evaluate to what extent an AI model can be used for harmful purposes.

Additional reporting requirements include divulging the existence, locations, and ownership of large-scale computing clusters, arguably those that can be used to train and execute dual-use foundation models.
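As a purely illustrative sketch (not legal guidance or an official compliance test), the order's definition of a dual-use foundation model can be read as a conjunction of conditions. The field names below, and the reading of "at least tens of billions of parameters" as a 10-billion threshold, are assumptions made for demonstration:

```python
# Illustrative sketch only: the executive order's "dual-use foundation
# model" criteria expressed as a simple screening check. Field names and
# the 10-billion-parameter threshold are assumptions, not official tests.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    trained_on_broad_data: bool   # "trained on broad data"
    uses_self_supervision: bool   # "generally uses self-supervision"
    parameter_count: int          # total trainable parameters
    broadly_applicable: bool      # "applicable across a wide range of contexts"
    high_risk_capability: bool    # exhibits (or is easily modified to exhibit)
                                  # high performance on security-relevant tasks

def may_be_dual_use_foundation_model(m: ModelProfile) -> bool:
    """Return True only if every criterion in the order's definition is met."""
    return (
        m.trained_on_broad_data
        and m.uses_self_supervision
        and m.parameter_count >= 10_000_000_000  # "at least tens of billions"
        and m.broadly_applicable
        and m.high_risk_capability
    )

# Example: a large generative model meeting all criteria
llm = ModelProfile(True, True, 70_000_000_000, True, True)
print(may_be_dual_use_foundation_model(llm))  # True
```

Note that in practice the capability-based criteria (broad applicability, security-relevant performance) involve qualitative judgment that no simple check can capture; only the parameter count is a bright-line threshold.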

Thus, the order requires that the government know what entities are doing, where they are doing it, and how they are doing it with regard to such dual-use models. The Biden administration appears to view generative AI as a dangerous munition that cannot be developed under a veil of secrecy by a non-governmental entity.

In the past, Silicon Valley's response to government attempts to rein it in has been along the lines of "Don't regulate us, bro!" So far, reaction to the order has been largely positive. The major players may have realized that an absence of regulation in this area could ultimately slow AI adoption by businesses and the public due to a lack of trust. If the framework set forth in the order is intelligently implemented, these concerns may be largely assuaged, allowing AI to continue its remarkable growth but with many of its inherent risks understood, limited, or mitigated.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© McDonnell Boehnen Hulbert & Berghoff LLP | Attorney Advertising
