Decoding the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

BakerHostetler

On Oct. 30, the Biden administration took a decisive step into the future by issuing the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This executive order is aimed at addressing the challenges posed by artificial intelligence (AI) technologies. Drawing its authority from the Defense Production Act, legislation historically used to commandeer or regulate private industry to bolster national defense, this directive signals a paradigm shift in the government’s approach to the burgeoning AI landscape.

Embracing an “all of government” strategy emblematic of the Biden administration, the directive leverages agencies and offices spanning the entire administration to grapple with the multifaceted aspects of AI technologies within their respective domains, such as consumer protection and intellectual property (IP). The policy outlines a series of actions to be undertaken in the near and medium term, reflecting a nuanced understanding of the intricate interplay between AI advancements and the varied sectors they impact.

At its core, this initiative mirrors the Biden administration’s overarching vision — positioning the United States as a global leader in AI policy. Fueled by a desire to stay ahead of the curve and maintain a competitive edge, the directive is a proactive response to the evolving international landscape where other major players (such as the European Union, the United Kingdom and China) are also making strides in regulating and shaping the trajectory of AI development.

Central to this executive order are eight foundational principles that underpin the administration’s approach to AI policy: safety and security, innovation, labor and employment, equity and civil rights, consumer protection, privacy, federal government procurement, and leadership. These principles serve as a compass, guiding the government’s initiatives and actions in this arena.

  • Safety and Security

The first principle requires that AI must be safe and secure and seeks to achieve this through policies, institutions and mechanisms, as well as through the robust, reliable, repeatable and standardized testing of AI systems.

Policies, Institutions and Mechanisms. The National Institute of Standards and Technology (NIST) is tasked with establishing guidelines and best practices for safe, secure and trustworthy AI systems. Similarly, the Department of Energy is directed to create evaluation tools for AI models focused on critical domains, including nuclear technology, nonproliferation measures and energy security. Federal agencies supervising critical infrastructure are tasked with assessing AI-related risks and vulnerabilities. Agencies overseeing vital sectors, such as the financial industry, are required to issue reports on best practices to mitigate AI-specific security risks for regulated entities. The possibility of a National AI Risk Management Framework for critical infrastructure owners is also under consideration, in accordance with NIST’s existing AI Risk Management Framework.

Testing. NIST has also been directed to provide guidelines for AI “red-teaming” tests, defined as a “structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI.” Shortly after the executive order was released, the Department of Commerce published a press release noting that the Bureau of Industry and Security will invoke the Defense Production Act to require companies that have developed dual-use foundation models to report the results of their red-teaming to the Department of Commerce. Dual-use foundation models are AI models trained using computing power greater than 10^26 integer or floating-point operations (the basic math operations carried out by a machine to execute any calculation) on a set of machines connected through data center networking of over 100 gigabits per second, a compute load very close to that used by models like OpenAI’s GPT-4.
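To put the 10^26-operation reporting threshold in perspective, the back-of-the-envelope check below estimates a training run’s total compute using the common “roughly 6 × parameters × training tokens” heuristic for transformer training FLOPs. This heuristic and the illustrative model sizes are assumptions for the sketch, not part of the executive order, which specifies only the 10^26-operation figure.

```python
# Rough check against the executive order's 10^26-operation reporting
# threshold. The 6 * parameters * tokens estimate of total training
# compute is a widely used approximation, not language from the order.

REPORTING_THRESHOLD_OPS = 1e26  # threshold named in the executive order


def estimated_training_ops(parameters: float, training_tokens: float) -> float:
    """Approximate total floating-point operations for one training run."""
    return 6 * parameters * training_tokens


def must_report(parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute exceeds the reporting threshold."""
    return estimated_training_ops(parameters, training_tokens) > REPORTING_THRESHOLD_OPS


# Hypothetical figures: a 70B-parameter model trained on 2T tokens is
# about 8.4e23 operations, well under the threshold.
print(must_report(70e9, 2e12))    # False
# A hypothetical 1.8T-parameter model on 15T tokens is about 1.6e26
# operations, which would cross the threshold.
print(must_report(1.8e12, 15e12))  # True
```

Under this heuristic, only the very largest frontier-scale training runs today would approach the threshold, consistent with the article’s observation that it sits near the compute used for models like GPT-4.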

Know Your Customer (KYC). The Department of Commerce is directed to establish requirements for companies engaged in or indicating an intention to develop potential dual-use foundation models to report certain information and activities to the federal government. This will include specific information about large-scale computing clusters, ensuring transparency and accountability in the evolving landscape of AI research and development.

Recognizing the potential risks associated with AI models, the Department of Commerce is also directed to propose regulations requiring infrastructure-as-a-service providers to notify the government when foreign entities engage in training large AI models. Prohibitions on non-U.S. companies reselling services without adequate reporting further underscore the emphasis on cybersecurity safeguards through a KYC-like regime.

  • Innovation

The second principle states that the United States should promote responsible innovation, competition and collaboration via investments in education, training, research and development, and capacity while addressing IP rights questions and stopping unlawful collusion and monopoly over key assets and technologies.

Regulatory Guidance. Recognizing the evolving landscape at the nexus of AI and IP, the United States Patent and Trademark Office (USPTO) is set to publish guidance for patent examiners and applicants, specifically addressing issues related to inventorship and the incorporation of AI. The USPTO and the Copyright Office will also offer recommendations to the president on potential executive actions concerning copyright and AI. This includes the scope of protection for works generated using AI and the treatment of copyrighted materials in AI training processes.

The Departments of Homeland Security and Justice are also tasked with developing comprehensive training, guidance and resources to combat and mitigate AI-related IP risks. This includes addressing challenges such as AI-related IP theft and other risks.

Similarly, the executive order encourages the Federal Trade Commission (FTC) as an independent agency to leverage its authorities, including rulemaking powers under the Federal Trade Commission Act. This aims to regulate unfair and deceptive practices, ensuring robust protection for both consumers and workers in the dynamic and evolving AI-driven marketplace.

Attracting Talent. Recognizing the pivotal role of talent in advancing AI capabilities, the executive order directs the secretaries of state and homeland security to streamline the visa process. This initiative is designed to attract and retain AI talent, facilitating an infusion of expertise into the federal government. The establishment of an AI and technology task force further accelerates the hiring of AI talent across federal agencies, complemented by the implementation of AI training and familiarization programs.

Initiatives. The directive envisions a transformative landscape with the creation of a pilot program for the National AI Research Resource, a National Science Foundation Regional Innovation Engine and four National AI Research Institutes. These initiatives signify a concerted effort to bolster AI research, innovation and collaboration, fostering a dynamic ecosystem that propels the nation to the forefront of AI advancements.

  • Labor and Employment

The third principle states that the responsible development and use of AI requires a commitment to supporting American workers through education and job training and to understanding the impact of AI on the labor force and workers’ rights. The executive order also recognizes that the labor market is being affected by AI, with risks from workplace bias, surveillance and job displacement, and calls for the development of principles and best practices to benefit workers, including the production of a report on AI’s potential labor market effects.

  • Equity and Civil Rights

The fourth principle states that AI policies must be consistent with the advancement of equity and civil rights. The executive order states that policies should expand on steps already taken, such as the Blueprint for an AI Bill of Rights, the AI Risk Management Framework and Executive Order 14091 from February 16, which specifically calls for racial equity and support for underserved communities through the federal government.

  • Consumer Protection

The fifth principle requires that consumers who increasingly use, interact with or purchase AI and AI-enabled products in their daily lives must be protected. The Department of Health and Human Services is tasked with instituting a safety initiative aimed at gathering data and mitigating harms or unsafe practices in healthcare involving AI. Moreover, independent agencies, including the FTC and Consumer Financial Protection Bureau, are encouraged to enforce existing consumer protection laws and principles concerning AI. There is also a call to introduce new safeguards to shield consumers from AI-related risks, such as fraud, bias, discrimination, privacy breaches and safety hazards, with a heightened focus on critical sectors such as healthcare, financial services, education, housing, law and transportation.

  • Privacy

The sixth principle indicates that privacy and civil liberties must be protected by ensuring that the collection, use and retention of data is lawful and secure and promotes privacy. To this effect, the Office of Management and Budget, in consultation with the Federal Privacy Council and the Federal Interagency Council on Statistical Policy, is directed to evaluate agency standards and procedures regarding the collection, processing, storage and dissemination of commercially available information that contains personal information. The executive order also seeks federal support to develop and implement measures to strengthen privacy-preserving research and technologies and create guidelines that agencies can use to evaluate privacy techniques used in AI. Once more, independent regulatory agencies are called to use existing authorities to protect American consumers from threats to privacy – a call that the FTC has repeatedly made clear it is ready to answer, having already issued multiple guidance documents on the use of AI and algorithms. Note that this principle may also extend to companies’ selection and oversight of vendors of AI technology through the executive order’s call for regulators to “clarif[y] the responsibility of regulated entities to conduct due diligence on and monitor any third-party AI services they use.”

  • Federal Government Procurement

The seventh and penultimate principle relates to the importance of risk management regarding federal government procurement of AI and the federal government’s internal capacity to regulate, govern and support the responsible use of AI. First and foremost, the executive order states that each agency under the executive branch is to designate a chief AI officer to coordinate its use of AI. Agencies are also tasked with facilitating the government-wide acquisition of AI services and products, seeking additional AI talent and providing AI training for employees at all levels in relevant fields. Additionally, the executive order directs agencies to remove unnecessary barriers to the responsible use of AI, including barriers related to inadequate IT infrastructure, datasets, cybersecurity authorization practices and AI talent.

In line with these ideas, the Office of Management and Budget has already released draft guidance on strengthening AI governance, advancing responsible AI innovation and managing risks from the use of AI by agencies. The executive order also sets forth the grounds for a model policy statement on generative AI:

“As generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI. Agencies should instead limit access, as necessary, to specific generative AI services based on specific risk assessments; establish guidelines and limitations on the appropriate use of generative AI; and, with appropriate safeguards in place, provide their personnel and programs with access to secure and reliable generative AI capabilities, at least for the purposes of experimentation and routine tasks that carry a low risk of impacting Americans’ rights. To protect Federal Government information, agencies are also encouraged to employ risk-management practices, such as training their staff on proper use, protection, dissemination, and disposition of Federal information; negotiating appropriate terms of service with vendors; implementing measures designed to ensure compliance with record-keeping, cybersecurity, confidentiality, privacy, and data protection requirements; and deploying other measures to prevent misuse of Federal Government information in generative AI.”

  • Leadership

The eighth and final principle states that the federal government should lead the way to global societal, economic and technological progress, including by engaging with international partners to develop a framework to manage AI risks, unlock AI’s potential for good and promote a common approach to shared challenges.

The federal government is set to champion responsible AI safety and security principles in collaboration with other nations, including even those traditionally considered competitors. This approach involves spearheading global conversations and collaborations to ensure that the benefits of AI extend to the entire world. The goal is to prevent AI from exacerbating existing inequities, threatening human rights or causing other forms of harm on a global scale.

In sum, the Biden administration is not only positioning itself as a frontrunner in the technological development of AI through the first seven principles, but it is also taking proactive steps toward shaping international regulations with the eighth principle. This dual commitment underscores a comprehensive approach to AI governance that spans both domestic and global considerations.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© BakerHostetler | Attorney Advertising
