The U.S. position in the race for global AI leadership received a substantial boost following adoption of broad new directives focused on artificial intelligence (AI) in Divisions A and E of the recently enacted National Defense Authorization Act. Specifically, the Act mandates the creation of several new federal offices to oversee the development and implementation of a national AI strategy and directs those offices to coordinate with stakeholders to achieve the goals set forth therein.
Quickly fulfilling an initial mandate, the White House Office of Science and Technology Policy (OSTP) on January 12, 2021, announced the launch of the National AI Initiative Office pursuant to the Act. The National AI Initiative Office will oversee and implement a national AI strategy and serve as the central hub for coordination and collaboration by federal agencies and outside stakeholders in AI research and policymaking.
Further, the OSTP has re-chartered the Select Committee on Artificial Intelligence and expanded its mission to serve as the senior interagency body to oversee the AI initiative, as referenced in the Act.
National Defense Authorization Act
The Act incorporates several AI-focused proposals and builds on initiatives begun under the Trump Administration, ensuring that the Biden White House continues to support concerted federal efforts to develop and implement AI. Specifically, the statute focuses on supporting AI development in several ways, primarily by:
- Launching the National Artificial Intelligence Initiative Office and establishing several committees and task forces dedicated to advising on and developing the next wave of AI technologies;
- Directing the Department of Defense (DOD) to:
- (1) Assess its ability to ensure that the AI it acquires is "ethically and responsibly developed;" and
- (2) Establish a Steering Committee to develop a strategy for developing and accessing emerging technologies, including AI, in order to maintain "the technological superiority of the United States military;"
- Expanding the role of the National Institute of Standards and Technology (NIST) in advancing AI research; and
- Vesting DOD with the authority to procure AI and develop an assessment for acquisition in line with the "ethically and responsibly developed" threshold.
Collectively and individually, these mandates represent a significant focus on ensuring the federal sector continues to drive and support the development of AI in conjunction with a wide range of actors in the private sector. Notably, the goals and federal actors in question vary, as can be seen in the overviews we provide below, but reflect an expansion of a federal framework for supporting, enabling and (likely) overseeing AI development.
National Artificial Intelligence Initiative Launch and Committee Actions
The Act establishes the National Artificial Intelligence Initiative Office, which will be located within OSTP. This office will oversee and implement the national AI strategy focused on research and policymaking and will serve as the central hub for coordination and collaboration between federal agencies and outside stakeholders.
The Act also establishes the National Artificial Intelligence Advisory Committee and sets out the Interagency Committee coordination process. This committee will include a wide swath of actors from the federal, non-profit, academic, and private sectors.
Further, the National Science Foundation (NSF) is directed to partner with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to oversee an AI impact study on the American workforce across all sectors that could be affected by AI adoption. This study must be presented to Congress by January 1, 2023.
NSF must also partner with OSTP to establish the National AI Research Resource Task Force, which is tasked with assessing how the federal government can sustainably operate a central resource for national AI research. The codification of these various federal offices and committees and their immediate goals emphasizes the federal sector's focus on continuing active development in AI and supporting collaboration between federal agencies and the diverse private actors necessary for technological advances.
NIST Developing Risk Management Framework for Use in Implementing AI
The Act sets out a multi-layered approach for NIST to expand its already significant role of leading a collaborative effort to develop a risk management framework for developing and using AI. In particular, the Act tasks NIST with several responsibilities:
- Advancing collaborative frameworks, standards, guidelines, and associated methods and techniques for AI;
- Supporting, within two years, the development of a risk-mitigation framework for deploying AI systems;
- Supporting the development of technical standards and guidelines that promote trustworthy AI systems; and
- Supporting the development of technical standards and guidelines by which to test for bias.
Here, as in the mandate to launch the National Artificial Intelligence Initiative, the Act emphasizes facilitating active collaboration between the federal and private sectors and ensuring the creation of ethical, reliable AI development models. These new duties will expand upon the work that NIST has already undertaken on key issues such as bias, explainability, and standards.
DOD AI Procurement and Development Assessment
Finally, the Act also vests DOD with the authority to acquire AI technology for national defense and establishes procedures to ensure that DOD acquires AI technologies that are "ethically and responsibly developed." The Act also directs DOD to conduct an assessment of whether it can meet that standard, which is due to Congress a scant 180 days from passage of the Act.
Formation of that standard may be subject to some tension between DOD's statutory mission to acquire "ethically and responsibly developed" AI and the mandate that DOD also develop a strategy for investing in and developing AI that is "needed to maintain the technological superiority of the United States Military."
This potential for conflict may be lessened by the lack of a concrete benchmark for "ethically and responsibly developed" AI. Instead, the Act leaves this determination up to personnel "with sufficient expertise, across multiple disciplines (including ethical, legal, and technical expertise)," which makes the benchmark something of an open question that is largely dependent on the specific people tapped to serve as experts. To that end, the Act grants DOD the authority to identify the in-house and outside experts to whom it will delegate this determination.
Expanding Federal AI Administrative State
Read together, the OSTP's actions and the Act reflect dedicated, continued investment by the federal government. The Act is also striking for the degree to which it repeatedly focuses on a collaborative approach between agencies, as well as among the federal sector, the private sector, and non-profit and academic actors.
In particular, these actions illustrate the government's intent to ensure that the federal sector continued to act throughout the presidential transition, both to sustain AI development and to establish timely oversight of nascent AI technology with a healthy degree of input from industry experts.