President Biden Issues Executive Order on AI Technology

On Monday, October 30, President Biden issued the Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”[1] (the “Order”) in an attempt to seize the promise and manage the risks of artificial intelligence (“AI”) technology. The Order establishes new standards for AI safety and security because this technology has the potential to exacerbate societal harms such as fraud, discrimination, bias, and disinformation. To accomplish the administration’s goal, the Order directs federal agencies to develop principles and best practices around AI, which is evolving at a meteoric pace. The Order acknowledges that it is only a first step and calls on Congress to push forward with comprehensive federal privacy legislation that addresses the risks of AI technology.

What You Need to Know:

  • On October 30, 2023, President Biden issued the Executive Order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
  • The Executive Order establishes new standards for AI safety and security because this technology has the potential to exacerbate societal harms such as fraud, discrimination, bias, and disinformation.
  • The Biden Administration clearly sees AI as a revolutionary tool that needs to be embraced but regulated.
  • The Executive Order was issued amid the widespread public use of Generative AI, follows on the heels of the Biden Administration’s Blueprint for an AI Bill of Rights, and comes as the EU finalizes the language of the EU AI Act.

Below is a summary of key directives included in the Order: 

  • The Order directs the Secretary of Labor, along with other agencies and outside entities, to develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers. These principles and best practices are to address: job displacement; labor standards; workplace equity, health, and safety; and data collection. The Order seeks to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.
  • In an effort to promote competition, the Order encourages the Federal Trade Commission to consider using its existing powers to ensure fair competition in the AI marketplace and to protect consumers and workers from harms that may be enabled by the use of AI. 
  • Invoking the Defense Production Act, the Order requires companies developing any foundation model[2] that poses a serious risk to national security, national economic security, or national public health and safety to notify the federal government when training the model and to share the safety test results with the government.
  • The Order directs the National Institute of Standards and Technology (NIST) to develop standards for extensive red-team testing.[3] The Department of Homeland Security is directed to establish the AI Safety and Security Board and to apply those standards to critical infrastructure sectors. The Departments of Energy and Homeland Security will jointly address the threats posed to critical infrastructure by AI technology.
  • The Department of Commerce is directed to protect consumers from AI-enabled fraud and deception by developing guidance for content authentication and watermarking to clearly label AI-generated content. These tools will be used by federal agencies to “set an example for the private sector and governments around the world.”
  • The Order directs the State Department, in collaboration with the Commerce Department, to establish international frameworks to harness AI’s benefits and manage the risks posed by this technology while ensuring safety.
  • The Order directs the National Security Council and the White House Chief of Staff to develop a National Security Memorandum to ensure the United States military and intelligence community use AI safely, ethically, and effectively in their missions, as well as direct actions to counter adversaries’ military use of AI.
  • The Order directs increased coordination between the Department of Justice’s Civil Rights Division and federal civil rights offices to address algorithmic discrimination and to ensure fairness throughout the criminal justice system by developing best practices and guidelines.
  • The Order creates the White House AI Council to coordinate the activities of agencies across the federal government to ensure the effective formulation and implementation of AI-related policies, including those set forth in the Order.

The Order’s release comes less than a year after Generative AI (e.g., ChatGPT) gained wide public adoption, and follows on the heels of the Biden Administration’s Blueprint for an AI Bill of Rights. The White House’s actions, together with the EU’s efforts to finalize its comprehensive AI Act, underscore the significant implications AI technology poses. The Biden Administration clearly sees AI technology as a revolutionary tool that needs to be embraced but regulated. The Order expressly encourages federal agencies to incorporate AI technology (including within the US military) and calls for accelerating the hiring of AI professionals as part of a government-wide AI talent surge. Although the Order is directed toward federal agencies, the federal government, as one of the largest purchasers of technology, will undoubtedly influence the development of this emerging technology.

Looking Forward

The Order calls on federal agencies to develop guidelines, standards, and best practices; it is therefore imperative for employers to stay alert and up to date on any new guidance and regulation aimed at protecting workers, such as the EEOC’s May 2023 guidance on preventing discrimination when an employer uses automated systems, including those that incorporate AI.


[1]  https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
[2] Foundation models are generalized AI models trained on large quantities of unlabeled data that can perform a wide range of tasks, such as natural language processing or audio generation. For example, GPT-3.5 and GPT-4 are foundation models behind ChatGPT. 
[3] Red-team testing is a structured, adversarial assessment of a system, such as an AI model, designed to identify flaws and vulnerabilities and to evaluate the ability to respond to threats.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Saul Ewing LLP | Attorney Advertising

Written by:

Saul Ewing LLP