President Biden’s Executive Order on Artificial Intelligence Provides Important Initial Guidelines in Governmental Regulation for Safe, Secure, and Trustworthy Development and Use of the Advancing Technology

With growing concerns related to how artificial intelligence is developed, regulated, and implemented in the United States, this Executive Order represents possibly the first of several governmental actions to address this ever-evolving technology.

On October 30th, President Joe Biden issued the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (the “Executive Order”), creating an early yet important set of guardrails to balance the need for cutting-edge technology with national security and people’s rights.

Two days later, the Office of Management and Budget (“OMB”) published a draft memorandum—available for public comment through December 5th—containing additional guidance on managing the risks involved in and mandating accountability for advancing artificial intelligence (“AI”) technology.

The Executive Order and OMB memorandum represent the Executive Branch’s significant acknowledgment of the need to establish accountability and guardrails in how AI is developed without denying the importance of technological innovations and the companies that drive them.

Because of the broad nature of the Executive Order’s and OMB memorandum’s recommendations, any subsequent regulations will likely impact organizations across all sectors of the economy.

For instance, the Executive Order’s definition of AI is not limited to generative AI or systems leveraging neural networks. It follows the broad definition set forth in 15 U.S.C. § 9401(3): “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

According to a White House fact sheet, “[t]he Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more.”

New Standards for AI Safety and Security

The Executive Order provides that in accordance with the Defense Production Act, companies developing any AI posing a serious risk to national security or safety must (1) notify the federal government when training the AI and (2) share the results of all “red-team” safety tests.

Further, the Executive Order instructs the National Institute of Standards and Technology (“NIST”) to establish rigorous standards for extensive red-team testing to help ensure safety before AI systems are publicly released. The Department of Homeland Security will then use NIST’s standards to establish an AI Safety and Security Board.

Additionally, the Department of Commerce is expected to develop guidance for content authentication and watermarking to protect Americans from AI-enabled fraudulent and deceptive practices. The goal is to “make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

Protecting Americans’ Privacy

The Executive Order includes several calls on Congress to pass bipartisan data privacy legislation. It also prioritizes federal support for accelerating the development and use of privacy-preserving techniques, such as methods for training AI models while preserving the privacy of the training data.

The Executive Order encourages evaluation of how agencies are collecting and using commercially available information—including information obtained from data brokers—to strengthen privacy guidance, focusing “in particular on commercially available information containing personally identifiable information.”

Advancing Equity and Civil Rights

President Biden’s Executive Order strengthens the movement to “ensure that AI advances equity and civil rights” by directing the government to (1) provide clear guidance to those using AI on how to keep AI systems from being used in discriminatory practices, (2) coordinate with the Department of Justice and Federal civil rights offices on addressing algorithmic discrimination, and (3) develop best practices for the use of AI in the criminal justice context.

Potential discrimination in the use of AI has been a long-standing concern, especially in employment, healthcare, and housing. For example, on May 18th, the Equal Employment Opportunity Commission explained that its 1978 Uniform Guidelines on Employee Selection Procedures apply to the use of AI in decision-making.

Standing Up for Consumers, Patients, and Students

While AI is expected to bring real benefits to consumers, it also increases the risk of injury, misleading information, or other harm. As a result, the Executive Order encourages the Department of Health and Human Services to establish a safety program to review reports and act to remedy unsafe AI practices. It also encourages the development of guidelines and resources for educators using AI-enabled tools in schools.

Supporting Workers

The White House fact sheet explains that “AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement.”

To address these concerns, President Biden’s Executive Order calls for the development of principles and best practices covering potential job displacement, labor standards, data collection, and more, to prevent employers from using AI in ways that negatively impact workers.

Promoting Innovation and Competition

As part of the push to keep innovation open and prevent competition from being stifled in the marketplace, the Executive Order encourages (1) catalyzing AI research through a pilot of the National AI Research Resource, (2) expanding research grants for AI in vital areas, (3) Federal Trade Commission enforcement and regulatory activity concerning AI use, and (4) using existing authorities to increase AI expertise in the United States.

Conclusion and Takeaways

As of now, the directives in the Executive Order are limited to programs administered by federal agencies, requirements for AI used by the federal government, national security and critical infrastructure concerns, and potential rulemakings covering federally regulated entities.

Implementation of the Executive Order is expected to be a lengthy process, as the various deadlines for the directives contained therein span from the end of November 2023 to early 2025. And since these deadlines are nonbinding, there is no guarantee that federal agencies will stay on schedule.

Nonetheless, as Congress continues to study the policy implications raised by advancing AI and considers whether to enact legislation in this area, this Executive Order, the OMB memorandum, and any actions that follow will act as cornerstones for effective implementation and safe regulation of AI in the United States.

Determining the extent to which the Executive Order and OMB memorandum impact a specific organization will require careful assessment of the organization’s use of AI.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Benesch | Attorney Advertising

Written by: Benesch