President Biden's AI Executive Order: What Private Sector Organizations Need to Consider

Baker Donelson

On October 30, 2023, President Biden signed an Executive Order as a part of the administration's continued efforts to regulate the development and use of artificial intelligence (AI)-based technologies. Although many components of the Order focus on the federal government’s response to AI-related issues, there are also important items for private sector organizations to consider.

The Biden-Harris administration has been actively engaged on AI-related issues, establishing the Blueprint for an AI Bill of Rights, gaining voluntary commitments from technology companies regarding the development of AI-based technologies, and now issuing a wide-ranging Executive Order that will impact both federal agencies and private businesses. The Executive Order outlines numerous measures intended to enhance safety and security, privacy protections, equity and civil rights, protections for consumers and workers, and innovation and competition, among other topics. However, in both the text of the Executive Order and the press statements, the White House acknowledges that more action will be required.

Throughout the Executive Order, it is also clear that the administration is still in an information gathering stage. The administration has tasked numerous agencies with developing reports that will be used to inform future legislative and regulatory efforts. However, these reports will likely also provide valuable insights to private sector businesses as they consider best practices and assess the potential scope of future legal requirements.

In addition to activities that are specific to various federal agencies and their use and development of AI-based technologies, the Executive Order requires the following activities over the course of the next several months, which will likely be relevant to many private sector businesses:

Technology-Specific Initiatives. The Executive Order outlines several initiatives regarding AI-based technologies. The reporting obligations imposed via these efforts and the forthcoming guidance will be important for businesses across industry sectors.

  • Guidelines, Standards, and Best Practices for AI Safety and Security. The National Institute of Standards and Technology (NIST) and other governmental agencies have been charged with creating guidelines and best practices for developing and deploying safe, secure, and trustworthy AI systems. This guidance will include: (1) a companion to the AI Risk Management Framework for generative AI; (2) a companion to the Secure Software Development Framework to incorporate secure development practices for generative AI and for dual-use foundation models; and (3) the launch of an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities. In addition, NIST has also been tasked with establishing guidelines for AI developers to use when conducting AI red-teaming tests (i.e., testing efforts to find flaws and vulnerabilities in an AI system). These will be key resources for businesses across all industries.
  • Dual-Use Foundation Models. Companies developing dual-use foundation models (i.e., those that are trained on broad data, generally use self-supervision, contain at least tens of billions of parameters, are applicable across a wide range of contexts, and pose a serious risk to national security, national economic security, or national public health and safety) must provide the federal government with ongoing reports regarding: (1) activities related to training, developing, or producing the model; (2) the ownership and possession of the model weights; and (3) the results of any red-team testing.
  • Computing Clusters Reporting. Companies that acquire, develop, or possess a large-scale computing cluster must report any such acquisition, development, or possession. Until the Secretary of Commerce finalizes the specific reporting requirements, the following are subject to the interim reporting requirement: (1) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or trained primarily on biological sequence data using a quantity of computing power greater than 10^23 integer or floating-point operations; and (2) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 10^20 integer or floating-point operations per second for training AI. (A simple illustrative sketch of these interim thresholds appears after this list.)
  • Synthetic Content. The Secretary of Commerce has also been instructed to prepare a report identifying the current standards, tools, methods, and practices, as well as potential science-backed standards and techniques for: (1) authenticating content and tracking its provenance; (2) labeling synthetic content (e.g., watermarking); (3) detecting synthetic content; (4) preventing generative AI from producing child sexual abuse material and nonconsensual intimate imagery; (5) testing software used for these purposes; and (6) auditing and maintaining synthetic content. In addition, the Secretary of Commerce will be developing guidance on these points as well as best practices for digital content authentication and synthetic content detection. Based on this guidance, the Office of Management and Budget (OMB) will issue guidance for federal agencies as to how they should label and authenticate synthetic content that they produce or publish. Because of the potential risks for fraud and other harm presented by synthetic content generated by AI-based tools, the forthcoming guidance on these points may also be helpful for private sector companies as they consider these issues.
  • Dual-Use Foundation Models with Widely Available Weights. The Secretary of Commerce will also be soliciting input from various stakeholders and preparing a report regarding the potential risks and benefits, as well as appropriate policy and regulatory approaches, related to dual-use foundation models for which the model weights are widely available.
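
As a purely illustrative aid, the sketch below expresses the interim reporting thresholds described under "Computing Clusters Reporting" above as simple numeric checks in Python. The function names, parameters, and simplified logic are assumptions made for this example only; they are not drawn from the Executive Order, and the thresholds themselves remain subject to revision once the Secretary of Commerce finalizes the reporting requirements.

```python
# Illustrative sketch only; not legal guidance. The interim thresholds below
# reflect the figures quoted above, but the function names, inputs, and
# simplified logic are assumptions for this example.

def model_meets_interim_threshold(training_ops: float,
                                  primarily_bio_sequence_data: bool) -> bool:
    """A model is reportable if trained with more than 1e26 integer or
    floating-point operations, or more than 1e23 operations when trained
    primarily on biological sequence data."""
    threshold = 1e23 if primarily_bio_sequence_data else 1e26
    return training_ops > threshold

def cluster_meets_interim_threshold(co_located_single_datacenter: bool,
                                    network_gbit_per_s: float,
                                    peak_ops_per_s: float) -> bool:
    """A computing cluster is reportable if its machines are physically
    co-located in a single datacenter, transitively connected by data center
    networking of over 100 Gbit/s, with a theoretical maximum computing
    capacity of 1e20 operations per second for training AI."""
    return (co_located_single_datacenter
            and network_gbit_per_s > 100
            and peak_ops_per_s >= 1e20)

# Hypothetical figures for illustration:
print(model_meets_interim_threshold(3e26, primarily_bio_sequence_data=False))  # True
print(cluster_meets_interim_threshold(True, 400.0, 2e20))                      # True
```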

Industry-Specific Efforts. In addition to addressing certain technologies, the Executive Order also includes requirements for specific industry sectors. These efforts will generate important resources for stakeholders in those industries. Several of these initiatives are discussed below:

  • Critical Infrastructure. The Department of Homeland Security has been instructed to incorporate the AI Risk Management Framework and other security guidance into the safety and security guidelines used by critical infrastructure owners and operators. The Secretary of Homeland Security will also establish an Artificial Intelligence Safety and Security Board to serve as an advisory committee addressing how the agency can improve security, resilience, and incident response related to AI usage in critical infrastructure.
  • Financial Institutions. The Secretary of the Treasury will be issuing a public report regarding best practices for financial institutions to manage AI-specific cybersecurity risks.
  • Health Care. The Executive Order contemplates an extensive list of initiatives in health care:
    • The U.S. Department of Health and Human Services (HHS) has been instructed to prioritize grantmaking and other awards to support responsible AI development and use, including: (1) collaborating with private sector companies on the advancement of AI-enabled tools that develop personalized immune-response profiles for patients; (2) prioritizing 2024 Leading Edge Acceleration Project cooperative agreement awards for projects that improve health care data quality in order to support the development of AI-based technologies for clinical care; and (3) accelerating grants awarded through the National Institutes of Health Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) program.
    • The U.S. Department of Veterans Affairs (VA) has been instructed to host two nationwide AI Tech Sprint competitions and to provide participants with access to technical assistance, mentorship, feedback on products under development, potential contract opportunities, and other resources.
    • HHS has been tasked with developing a strategic plan that includes policies regarding the deployment and use of AI-based technologies. This plan will address the development, maintenance, and use of predictive and generative AI; incorporation of equity principles; incorporation of safety, privacy, and security standards; and contemplation of the ways in which AI may be used to improve workplace efficiency.
    • HHS has been tasked with developing a strategy to determine if AI maintains appropriate levels of quality in the health care space. As a part of this effort, HHS will develop an AI assurance policy to evaluate the performance of AI-based health care technologies.
    • HHS has also been instructed to establish an AI safety program that includes a protocol for identifying clinical errors arising from AI usage within health care settings.
    • Finally, HHS is also developing a plan for how it will regulate the use of AI in drug development.
  • Education. The Secretary of Education has been instructed to develop resources, policies, and guidance regarding AI. These resources will address safe, responsible, and nondiscriminatory uses of AI in education and include an "AI toolkit" for education leaders. The "AI toolkit" will incorporate recommendations from the Department of Education's AI and the Future of Teaching and Learning report, including with respect to human review of AI decisions, the design of AI technologies in the educational context, and the development of education-specific guardrails.
  • Communications. The Federal Communications Commission is required to review how AI can improve spectrum management; to coordinate with the National Telecommunications and Information Administration to create opportunities for sharing spectrum between federal and non-federal spectrum operations; to support efforts for improving network security, resiliency, and interoperability using AI-based technologies; and to encourage efforts to combat unwanted robocalls and robotexts that are facilitated by AI-based technologies, including the use of AI-based technologies that block such calls and texts.
  • Transportation. The Nontraditional and Emerging Transportation Technology (NETT) Council has been tasked with reviewing the need for information, technical assistance, and guidance regarding the use of AI in transportation. In addition, the Advanced Research Projects Agency-Infrastructure (ARPA-I) will also explore transportation-related opportunities and challenges with respect to AI.
  • Housing. The Executive Order tasks the Director of the Federal Housing Finance Agency, the Consumer Financial Protection Bureau, and the Secretary of Housing and Urban Development with addressing the use of AI in underwriting, valuation, appraisal, tenant screening, and the advertising of housing, credit, and other real estate-related transactions through digital platforms. The Architectural and Transportation Barriers Compliance Board has also been instructed to issue recommendations regarding the use of biometric data in AI-based technologies, particularly as it relates to people with disabilities.
  • Energy. The Secretary of Energy has been instructed to: (1) issue a report describing the potential for AI to improve electric grid infrastructure; (2) develop tools that facilitate building foundation models, including models that streamline permitting and environmental reviews; (3) collaborate with private sector businesses and academia to support development of AI to mitigate climate change risks; (4) expand partnerships to utilize the Department of Energy's computing capabilities and AI testbeds to build foundation models that support new applications in science and energy and for national security; and (5) establish an office to coordinate development of AI and other critical and emerging technologies.
  • Small Business Administration. The Small Business Administration (SBA) has been tasked with allocating Regional Innovation Cluster funding for clusters that support planning related to one or more Small Business AI Innovation and Commercialization Institutes. This includes allocating up to $2 million in Growth Accelerator Fund Competition bonus prize funds for accelerators that support incorporating or expanding AI-related resources within their programming. The SBA will also conduct outreach and raise awareness about opportunities for small businesses to use SBA capital-access programs for AI-related purposes.
  • Defense. Finally, the administration has also initiated several efforts focused on defense:
    • Cyber Defense. The Secretary of Defense and the Secretary of Homeland Security will begin working on a pilot project to identify, develop, test, evaluate, and deploy AI to discover and remediate vulnerabilities in critical U.S. government software, systems, and networks.
    • CBRN Threats. The Executive Order empowers the Secretary of Homeland Security to evaluate the potential for AI to be used in the development of chemical, biological, radiological, or nuclear (CBRN) threats for the sole purpose of guarding against those threats.
    • Biosecurity Risks. The Secretary of Defense has been instructed to contract with the National Academies of Sciences, Engineering, and Medicine to conduct a study regarding: (1) the ways in which AI can increase biosecurity risks and how to mitigate those risks; (2) the national security implications of using government data for training generative AI models; (3) the ways in which AI can be used to reduce biosecurity risks; and (4) other opportunities and concerns at the intersection of AI and synthetic biology.
    • Synthetic Nucleic Acids. The Office of Science and Technology Policy (OSTP) is charged with creating a framework to encourage providers of synthetic nucleic acid sequences to implement comprehensive synthetic nucleic acid procurement screening mechanisms. OSTP will also establish criteria for identifying biological sequences that pose a national security risk. OSTP will work with industry participants to develop: (1) specifications for nucleic acid synthesis procurement screening; (2) best practices for managing sequence-of-concern databases; (3) technical implementation guidelines for screening; and (4) conformity-assessment best practices. All agencies that fund life-sciences research will require that synthetic nucleic acid procurement is conducted through providers or manufacturers that adhere to the framework.
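
As a purely hypothetical illustration of the kind of procurement screening mechanism the framework contemplates, the sketch below checks an incoming synthesis order against an internal list of sequences of concern and holds matches for manual review. The placeholder sequences, function, and exact-match logic are invented for this example; real screening would rely on curated sequence-of-concern databases and similarity-based matching under the forthcoming OSTP specifications.

```python
# Hypothetical, highly simplified illustration; not an implementation of the
# forthcoming OSTP framework. The entries below are arbitrary placeholders,
# not actual sequences of concern, and exact substring matching is a stand-in
# for the similarity-based screening a real provider would use.

SEQUENCES_OF_CONCERN = {
    "ATATATCGCGCGATAT",  # placeholder entry
    "GGCCGGTTAACCGGCC",  # placeholder entry
}

def screen_order(order_id: str, ordered_sequence: str) -> str:
    """Hold an order for manual biosecurity review if it contains any listed
    sequence of concern; otherwise clear it for fulfillment."""
    seq = ordered_sequence.upper().replace(" ", "")
    for concern in SEQUENCES_OF_CONCERN:
        if concern in seq:
            return f"Order {order_id}: hold for manual biosecurity review"
    return f"Order {order_id}: cleared for fulfillment"

print(screen_order("PO-1001", "ttgacATATATCGCGCGATATaggct"))  # held for review
print(screen_order("PO-1002", "atggcactgtggatgcgc"))          # cleared
```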

Legal Considerations. The Executive Order also acknowledges that, although the use and development of AI-based technologies is already subject to existing legal frameworks, certain areas are ripe for additional guidance. In particular, the Executive Order focuses on key considerations regarding intellectual property, privacy, and consumer protection.

  • Intellectual Property. The United States Patent and Trademark Office has been instructed to issue guidance to patent examiners and applicants regarding inventorship and the use of AI in the inventive process and other considerations at the intersection of AI and IP. In addition, following the publication of a forthcoming study regarding copyright issues raised by AI, the United States Copyright Office will provide a report to the administration regarding recommendations on potential executive actions relating to copyright and AI. These additional efforts will further clarify these issues for businesses across industry sectors.
  • Privacy. The Executive Order also acknowledges the potential privacy risks presented by AI-based technologies, including the ability to exploit personal data. In an effort to address some of those risks, OMB has been charged with mitigating privacy risks by: (1) evaluating and identifying commercially available information (CAI) procured by agencies; and (2) evaluating the federal agency standards and procedures associated with the collection, processing, maintenance, use, sharing, dissemination, and disposition of CAI that contains personally identifiable information. OMB will also issue guidelines for federal agencies to evaluate the effectiveness of privacy-preserving techniques. Although these efforts are focused on federal agencies, they may provide a roadmap for future legislative efforts, as the administration calls on Congress to pass federal data privacy legislation.
  • Consumer Protection. The Executive Order also instructs independent regulatory agencies to protect consumers from fraud, discrimination, and privacy threats and to address other risks that can arise from the use of AI. The Executive Order specifically identifies risks to financial stability and acknowledges that agencies may need to emphasize or clarify where existing regulations and guidance apply to AI.

Employers and Workforce Development. The Executive Order also includes measures intended to both promote talent development initiatives and protect workforce members from the potentially harmful effects of AI-based technologies. Although many of these efforts are focused on federal agencies, they will impact private sector businesses as they attempt to hire AI professionals, participate in federal contracts, and navigate existing nondiscrimination requirements.

  • Workforce Considerations. The Secretary of Labor has been instructed to develop best practices for employers to mitigate AI's potential harm to employees' well-being. This guidance will include specific steps for employers to take with regard to AI, including labor standards and advising workers about AI-related data collection and use. The Secretary of Labor will also issue guidance making clear that employers who use AI-based technologies to monitor or augment employees' work must still comply with laws regarding compensation, including the Fair Labor Standards Act of 1938. This guidance will be critical for all employers who utilize these technologies.
  • Nondiscrimination. The Executive Order also focuses on mitigating the potential discrimination and inequity that can be presented by AI-based technologies. Among other things, the Executive Order requires the following:
    • The Attorney General has been instructed to work with agencies as they implement and enforce existing laws to address civil rights violations and discrimination related to AI and to provide guidance, technical assistance, and training to investigators and prosecutors on best practices regarding civil rights violations and discrimination related to AI.
    • Agencies have been instructed to prevent unlawful discrimination resulting from the use of AI in government programs and benefits administration.
    • HHS is publishing a plan regarding the use of automated and algorithmic systems with respect to public benefits and services administered by HHS.
    • The Secretary of Agriculture is also required to issue guidance regarding the use of automated or algorithmic systems in implementing benefits or in providing customer support for benefit programs administered by the Department of Agriculture.
  • Federal Contractors. The Secretary of Labor will be publishing guidance for federal contractors regarding their nondiscrimination obligations with respect to hiring involving AI and other technology-based hiring systems.
  • Innovation. The Executive Order establishes various pilot programs intended to enhance research and development. The National Science Foundation has been instructed to launch a pilot program to implement the National AI Research Resource. This program will develop the infrastructure, governance, and user interfaces necessary to distribute computational, data, model, and training resources to the research community for AI-related research and development. In addition, the Secretary of Energy will establish a pilot program to enhance existing training programs for scientists, with a goal of training 500 new researchers by 2025.
  • Talent Development. The Executive Order takes several steps to enhance the talent base in the U.S. By way of example:
    • The Secretary of State has been tasked with establishing a program to: (1) identify and attract top AI talent at universities, research institutions, and private sector organizations overseas; (2) establish connections with that talent; and (3) educate them on opportunities and resources for research and employment in the U.S.
    • The Secretary of State and the Secretary of Homeland Security are charged with streamlining processing times for visa petitions and applications for noncitizens who want to travel to the U.S. to work on, study, or research AI, and to facilitate the availability of visa appointments for applicants with expertise in AI.
    • The Secretary of State has been instructed to: (1) consider rulemaking regarding new criteria to designate countries and skills on the Exchange Visitor Skills List as it relates to the two-year foreign residence requirement for certain J-1 nonimmigrants; (2) update the 2009 Revised Exchange Visitor Skills List; (3) consider implementing a domestic visa renewal program to facilitate the ability of qualified applicants, including highly skilled talent in AI and critical and emerging technologies; (4) consider rulemaking to expand the categories of nonimmigrants who qualify for the domestic visa renewal program to include academic J-1 research scholars and F-1 students in science, technology, engineering, and mathematics; (5) clarify and modernize immigration pathways for experts in AI and other critical and emerging technologies (including O-1A and EB-1 noncitizens of extraordinary ability, EB-2 advanced-degree holders and noncitizens of exceptional ability, and startup founders in AI and other critical and emerging technologies using the International Entrepreneur Rule); (6) continue to modernize the H-1B program, including to expand usage by experts in AI and other critical and emerging technologies; and (7) consider rulemaking to enhance the process for noncitizens, including experts in AI and other critical and emerging technologies and their spouses, dependents, and children, to adjust their status to lawful permanent resident.
    • The Secretary of Homeland Security will be developing resources to attract and retain experts in AI, including a guide that will help individuals understand the options for working in the U.S.

Federal Government's Use of AI. As may have been expected, the Executive Order also addresses how the federal government may use AI-based technologies. To help develop these practices, the Director of OMB has been instructed to create an interagency council, which will coordinate the development and use of AI across agencies and issue guidance regarding the use of AI in the federal government. The Executive Order also highlights the federal government's intention to identify, develop, and recruit AI talent, with the stated purpose of accelerating the placement of personnel in high-priority areas and advancing federal agencies' data and technology strategies. The federal government is even establishing an interagency working group of human resources and recruiting professionals to facilitate the hiring of candidates with AI and other technical skills.

International Relationships. Finally, the Executive Order also signals the administration's intent to advance allies' and partners' understanding of existing and planned U.S. AI-related guidance and policies and to enhance international collaboration on these issues. The U.S. intends to lead the efforts to: (1) establish an international framework for managing the risks and benefits of AI, which the Executive Order suggests may be based upon the voluntary commitments made by key technology companies; and (2) develop common regulatory, accountability, and risk management principles. In addition, the Secretary of Commerce has been instructed to lead an effort with international allies and partners, as well as standards development organizations, to develop and implement AI-related standards and information sharing. These standards are to include AI nomenclature and terminology; best practices regarding data capture, processing, protection, privacy, and analysis; trustworthiness, verification, and assurance of AI technologies; and AI risk management. It will be important for all companies to monitor these standards as they develop AI technologies, as the standards may ultimately set industry expectations.

As can be seen from the foregoing, the considerations regarding the development and use of AI-based technologies continue to evolve, and there will be many developments over the next year based on the federal government's required activities. Organizations should be closely monitoring the forthcoming guidance and efforts required by the Executive Order as well as those requirements that are currently in effect. 

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Baker Donelson | Attorney Advertising
