Executive Order 14110: An Artificial Intelligence Odyssey

McCarter & English Blog: Government Contracts & Export Controls

What do you think is going to be scarier—artificial intelligence (AI) or the government’s effort to regulate AI? On October 30, 2023, the White House issued Executive Order (E.O.) 14110, Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As the federal government’s latest foray into harnessing AI, this E.O.—like those before it—recognizes that AI offers extraordinary potential and promise, provided it is harnessed responsibly to prevent the exacerbation of societal harms. Since E.O. 14110, there has been a flurry of activity across the federal government, including guidance and policies indicating how agencies can, should, and will harness AI to support their objectives. While we are far from a situation resembling Skynet from the Terminator franchise or HAL 9000 from 2001: A Space Odyssey, the government’s accelerated push to reap AI’s potential benefits far outpaces its provision of actionable guidance that would let contractors understand, and adapt to, what will be required when offering AI products and services to the government. So let’s open the pod bay doors and explore…

E.O. 14410 sets forth the Biden administration’s policy to advance and govern the development and use of AI in the federal government based on eight guiding principles and priorities. These principles, set forth in Section 2 of the E.O., direct executive departments and agencies to develop and implement AI that is safe and secure while promoting innovation, competition, and collaboration, and to manage risk and protect the privacy and civil liberties of the American people.

Section 4 of the E.O. directs the National Institute of Standards and Technology (NIST), in coordination with the Department of Energy and the Department of Homeland Security (DHS), to develop, by August 2024, guidelines and best practices for the development and deployment of safe, secure, and trustworthy AI systems. The E.O. also directs NIST to develop companion resources to its AI Risk Management Framework, NIST AI 100-1, addressing generative AI (AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content, such as text, images, audio, and video), and to NIST’s Secure Software Development Framework, incorporating secure development practices for generative AI and dual-use foundation models. NIST AI 100-1 provides a framework for the responsible development and use of AI systems. The framework consists of four core functions—govern, map, measure, and manage—with categories and subcategories under each function to guide an organization’s development and deployment of AI applications and to minimize AI’s potential risks to people, organizations, and ecosystems. The need for companion resources addressing dual-use foundation models (AI models that can be easily modified to exhibit high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters) reflects the multifaceted uses—and risks—that AI applications present in their deployment.
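For contractors trying to picture how the framework is organized, the short Python sketch below models the four core functions as a simple checklist. It is purely illustrative: the function names come from NIST AI 100-1, but the example activities and the RiskProfile structure are our own hypothetical shorthand, not the framework’s actual categories and subcategories.

from dataclasses import dataclass, field

# Illustrative sketch only. The four core functions are named in NIST AI 100-1;
# the example activities below are hypothetical shorthand, not the framework's
# actual categories or subcategories.
RMF_CORE_FUNCTIONS = {
    "govern": ["assign accountability for AI risk", "document AI policies"],
    "map": ["catalog intended uses and contexts", "identify affected individuals"],
    "measure": ["test for bias and robustness", "track performance over time"],
    "manage": ["prioritize and mitigate identified risks", "monitor after deployment"],
}

@dataclass
class RiskProfile:
    """Tracks which illustrative activities an AI system has addressed."""
    system_name: str
    completed: dict = field(default_factory=dict)

    def mark_done(self, function: str, activity: str) -> None:
        if function not in RMF_CORE_FUNCTIONS:
            raise ValueError(f"unknown core function: {function}")
        self.completed.setdefault(function, set()).add(activity)

    def gaps(self) -> dict:
        """Activities not yet addressed, grouped by core function."""
        return {
            fn: [a for a in acts if a not in self.completed.get(fn, set())]
            for fn, acts in RMF_CORE_FUNCTIONS.items()
        }

profile = RiskProfile("proposal-summarization-tool")
profile.mark_done("govern", "assign accountability for AI risk")
print(profile.gaps())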

“My program will not allow me to act against an officer of this company.”

—RoboCop

The E.O. also directs multiple agencies to issue guidance and policies on the safe and responsible development of AI in the federal government. Section 4.3 directs the director of the Cybersecurity and Infrastructure Security Agency to assess, within 90 days of the E.O. and annually thereafter, the potential risks related to the use of AI in critical infrastructure sectors and ways to mitigate those risks. Section 10 of the E.O. directs the Office of Management and Budget (OMB) to issue guidance to agencies regarding the use of AI in the federal government, including requiring that each agency designate a chief AI officer.

“I’ve got to warn you, it’s a learning robot. Every moment you spend fighting it only increases its knowledge of how to beat you.”

—Mirage, The Incredibles

To promote competition, Section 5.3 of the E.O. directs the head of each agency developing policies and regulations related to AI to address risks arising from the concentrated control of key inputs, to stop unlawful collusion, and to prevent dominant firms from disadvantaging competitors. Recognizing the important role small businesses play in the country’s economy, Section 5.3 also directs the Administrator of the Small Business Administration to establish Small Business AI and Commercialization Institutes that provide support, technical assistance, and other resources to small businesses seeking to innovate, commercialize, scale, or otherwise advance the development of AI.

Pursuant to Section 10 of the E.O., OMB on November 3, 2023, issued a Draft Policy, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. The Draft Policy establishes new agency requirements in areas of AI governance, innovation, and risk management. Agencies are directed to designate a chief AI officer who would be primarily responsible for the agency’s implementation of AI, promoting AI innovation, and managing risks from the use of AI. The position of chief AI officer must be at a level appropriate to regularly engage with other agency leadership, including the agency’s deputy secretary or equivalent.

“You’re in trouble, program. Make it easy on yourself. Who’s your User?”

—Master Control Program, Tron

To advance responsible AI innovation, the Draft Policy directs agencies to create internal environments that foster innovation, paying special attention to IT infrastructure, data, and cybersecurity. The Draft Policy also advises agencies that OMB will issue forthcoming guidance on managing AI risk, including what documentation will be required from contractors performing federal AI contracts. Further, the Draft Policy directs that agency development and deployment of AI align with the nation’s values and laws, promote competition and avoid contractor entrenchment, and ensure that contracts for generative AI and dual-use foundation models implement adequate testing and safeguards.

Issued prior to E.O. 14110, DHS’s Policy Statement 139-06, Acquisition and Use of Artificial Intelligence and Machine Learning Technologies by DHS Components, provides an informative starting point on what to expect from agency AI procurements. Issued on August 8, 2023, the Policy Statement directs all DHS Components to establish policies and practices governing the acquisition and use of AI and machine learning technology, including taking into consideration the following:

  • Testing and validating AI employed where discriminatory activity or effects are possible (a simple illustration of one such check appears after this list)
  • Updating and developing additional security requirements, as needed, to protect AI technologies against novel cybersecurity threats and risks introduced by the application of these technologies
  • Not collecting, using, or disseminating data used in AI activities, or establishing AI-enabled systems, that make or support decisions based on inappropriate considerations or biases
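
On the first bullet above, one widely used screening heuristic is the “four-fifths rule,” which compares selection rates across demographic groups. The Python sketch below is a minimal, hypothetical illustration of that kind of check; it is not drawn from DHS Policy Statement 139-06, which does not prescribe any particular test or threshold.

# Minimal illustration of a disparate-impact screen (the "four-fifths rule").
# Hypothetical example only; DHS Policy Statement 139-06 does not prescribe
# any particular test or threshold.

def selection_rate(outcomes: list) -> float:
    """Share of positive outcomes (1 = selected/approved, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def four_fifths_check(group_a: list, group_b: list, threshold: float = 0.8) -> bool:
    """Return True if the lower selection rate is at least `threshold`
    times the higher one, a common screen for potential adverse impact."""
    low, high = sorted([selection_rate(group_a), selection_rate(group_b)])
    return high == 0 or (low / high) >= threshold

# Example: model approvals for two applicant groups (hypothetical data).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0]  # 37.5% approved
print(four_fifths_check(group_a, group_b))  # False: warrants further review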

“Under law the Quest for Ultimate Truth is quite clearly the inalienable prerogative of your working thinkers. Any bloody machine goes and actually finds it and we’re straight out of a job, aren’t we?”

—The Hitchhiker’s Guide to the Galaxy

Shortly after the release of E.O. 14110, the Department of Defense (DoD) released its Data, Analytics, and Artificial Intelligence Adoption Strategy (the 2023 AI Adoption Strategy). Building on the DoD’s first AI Strategy in 2018 and the DoD’s revised Data Strategy in 2020, the 2023 AI Adoption Strategy continues the DoD’s digital transformation, emphasizing the need to leverage high-quality data, advanced analytics, and AI to enable rapid, well-informed decisions. The 2023 AI Adoption Strategy continues the DoD’s focus on utilizing commercial solutions that “will ensure the [DoD’s] capability pipelines address evolving requirements.” To accomplish this, the strategy emphasizes six principles to govern the DoD’s approach, focused on improving the quality of its data, analytics, and AI ecosystems, strengthening governance, and removing barriers to AI adoption and implementation.

The federal government’s venture into AI recognizes the cybersecurity concerns AI introduces. Current guidance acknowledges that AI is not a siloed technology but touches on other areas of information technology. AI requires the development of software code, and it is possible the government will require a companion AI Bill of Materials from contractors, complementing the Software Bill of Materials (SBOM) requirement in Section 4 of E.O. 14028, Improving the Nation’s Cybersecurity (in October 2023, the Federal Acquisition Regulatory Council published a Proposed Rule that would require contractors to develop and maintain an SBOM on any software used in the performance of a contract). Cybersecurity is very much a factor in any deployment of AI in the federal government. However, specific actionable guidance is still in the works.
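
Neither the FAR Council’s proposed SBOM rule nor E.O. 14110 defines an “AI Bill of Materials” format, so any example is necessarily speculative. The sketch below shows one way a contractor might record SBOM-style metadata for a model artifact alongside its software dependencies; the field names are our own hypothetical choices, not a government or industry schema.

import json

# Hypothetical sketch of an SBOM-style record that also describes an AI model
# artifact. Field names are illustrative only; no government-mandated
# "AI Bill of Materials" format exists as of this writing.
ai_bom_entry = {
    "component_type": "machine-learning-model",
    "name": "contract-clause-classifier",          # hypothetical model name
    "version": "1.4.0",
    "supplier": "Example Contractor, Inc.",        # hypothetical supplier
    "base_model": "open-source-transformer-base",  # hypothetical upstream model
    "training_data_sources": ["internal-contracts-corpus-2023"],
    "software_dependencies": [
        {"name": "pytorch", "version": "2.1.0"},
        {"name": "numpy", "version": "1.26.0"},
    ],
    "hashes": {"sha256": "<artifact digest goes here>"},
}

print(json.dumps(ai_bom_entry, indent=2))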

Perhaps recognizing that cloud computing’s ability to scale resources complements the need for large data sets to train certain AI models, the Department of Commerce (Commerce), as directed by Section 4.2 of E.O. 14110, issued a Proposed Rule on January 29, 2024, that would require US Infrastructure-as-a-Service (IaaS) providers to submit reports to Commerce when a foreign person transacts with a US IaaS provider to train a large AI model with the potential capability to be used in malicious cyber-enabled activity. With comments open until April 29, 2024, the Proposed Rule would require US IaaS providers to establish a customer identification program (CIP) and to require a similar program of their foreign resellers. The Proposed Rule would require that the CIP collect certain minimum information when an account is opened by or on behalf of a foreign person or entity, that records be retained for two years, and that the CIP be certified annually. The Proposed Rule would also impose new civil and criminal penalties.
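
To make those obligations concrete, the sketch below models a bare-bones customer identification record with a two-year retention calculation. The field names and structure are assumptions made for illustration; the Proposed Rule, not this sketch, governs what a compliant CIP must actually collect, retain, and certify.

from dataclasses import dataclass
from datetime import datetime, timedelta

# Illustrative sketch of a customer identification program (CIP) record.
# Field names are assumptions for illustration; consult the Proposed Rule
# for the data elements actually required of US IaaS providers.
RETENTION_PERIOD = timedelta(days=730)  # approximately two years

@dataclass
class CustomerRecord:
    customer_name: str
    address: str
    email: str
    payment_source: str
    ip_addresses: list
    account_opened: datetime

    def retention_expires(self) -> datetime:
        """Date after which the two-year retention obligation would lapse."""
        return self.account_opened + RETENTION_PERIOD

record = CustomerRecord(
    customer_name="Example Foreign Customer Ltd.",  # hypothetical customer
    address="123 Example Street, Anycity",
    email="ops@example.com",
    payment_source="card ending 0000",
    ip_addresses=["203.0.113.10"],
    account_opened=datetime(2024, 3, 1),
)
print(record.retention_expires().date())  # 2026-03-01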

“I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission.”

—HAL 9000, 2001: A Space Odyssey

In the past six months, a flurry of activity has been directed toward the development and deployment of AI within the federal government. However, as noted in the DoD’s 2023 AI Adoption Strategy, data quality directly impacts the potential an AI application offers. Further, AI deployment presents additional cybersecurity risks and requires additional mitigation measures. While OMB’s Draft Policy directs agencies to develop and deploy AI in a safe and secure manner, guidance and guidelines on safe and secure development are still works in progress. Contractors should be cognizant of the shifting landscape and recognize the increasing convergence of security requirements, which are no longer isolated to a particular field or area of technology but are becoming increasingly interconnected. Although the future has not been written, hopefully the federal government’s deployment of AI will unleash its promised potential rather than “I’m sorry, Dave. I’m afraid I can’t do that.”


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© McCarter & English Blog: Government Contracts & Export Controls | Attorney Advertising
