White House announces AI companies' voluntary commitments to address AI-related risks

Hogan Lovells

The White House announced new voluntary commitments from seven leading Artificial Intelligence (AI) companies to ensure safe, secure, and transparent AI technology development, advancing the Administration’s promise to manage the risks posed by AI in an effort to protect Americans’ rights and safety. The commitments include (1) ensuring products are safe before introducing them to the public, (2) building systems that put security first, and (3) earning the public’s trust.


On July 21, 2023, the White House announced that it secured voluntary commitments from seven leading Artificial Intelligence (AI) companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – to ensure safe, secure, and transparent AI technology development. The Administration stated in a press release that these commitments to safety, security, and trust mark a critical step toward developing responsible AI.

These seven AI companies have pledged to:

  1. Ensure Products are Safe Before Introducing Them to the Public
    1. The companies commit to internal and external security testing of their AI systems before release.  Independent experts will conduct this testing to guard against significant AI risks, including biosecurity and cybersecurity risks, as well as broader societal effects.
    2. The companies commit to sharing information on managing AI risks across the industry and with governments, civil society, and academia.  This includes best practices for safety, information on attempts to circumvent safeguards, and technical collaboration.
  2. Build Systems that Put Security First
    1. The companies commit to investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights.  Model weights are a core component of an AI system, and the companies agree to release them only when intended and only after security risks have been considered.
    2. The companies commit to facilitating third-party discovery and reporting of vulnerabilities in their AI systems.  Some issues may persist even after an AI system is released, and a robust reporting mechanism allows them to be identified and resolved quickly.
  3. Earn the Public’s Trust
    1. The companies commit to developing robust technical mechanisms, such as watermarking systems, to ensure that users know when content is AI generated.  This will allow creativity with AI to flourish while reducing the dangers of fraud and deception.
    2. The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.  This reporting will cover both security risks and societal risks, such as effects on fairness and bias.
    3. The companies commit to prioritizing research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination and protecting privacy. 
    4. The companies commit to developing and deploying advanced AI systems to help address society’s greatest challenges.  

This announcement comes amid related White House and other federal agency efforts on responsible AI. The White House also noted that it is currently developing an executive order and pursuing bipartisan legislation to further responsible innovation.

With many thanks to summer associate Jordyn Johnson for her valuable contributions to this publication.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Hogan Lovells | Attorney Advertising
