How the Federal Government’s AI Risk Management Practices Will Set the Standard: A Closer Look at Government Action Following President Biden’s Executive Order on AI

Seyfarth Shaw LLP

Seyfarth Synopsis: Following President Biden’s comprehensive Executive Order on AI, the White House announced the formation of the “US AI Safety Institute” within NIST, the Commerce Department’s technology arm. The Institute has been directed to develop technical guidance that regulators, such as the EEOC, will use as they consider rulemaking and enforcement on AI-related discrimination. The White House has also released for public comment draft guidance on the federal government’s own use of AI. That guidance treats an expansive range of employment-related AI applications as presumptively “rights-impacting,” thus requiring the government agencies that use them to conduct an impact assessment and follow other minimum risk-management practices. Critically, these definitions and practices are likely to be held out as practices that private-sector employers should also adopt.

On Monday, October 30, President Biden signed a comprehensive Executive Order addressing AI regulation across a wide range of industries and issues. Our prior management alert discussed how the EO set forth President Biden’s vision for America to continue leading in AI innovation while also addressing risks associated with the use of AI, and highlighted provisions in the EO we identified as particularly relevant to employers using AI.

Now, just a few days later, there are already further developments with significant implications for employers paying attention to enforcement and litigation efforts in this area.

1. The New US AI Safety Institute Within NIST

On November 1, the White House announced the formation of the “US AI Safety Institute” within the National Institute of Standards and Technology (NIST). According to the White House, the new AI Safety Institute will develop “technical guidance that will be used by regulators considering rulemaking and enforcement on issues such as … identifying and mitigating against harmful algorithmic discrimination”.

Put another way, the tech gurus at NIST will be helping enforcement agencies such as the EEOC with their AI enforcement efforts. The EEOC will thus be able to tap the technical expertise of some of the federal government’s leading experts on AI risk as it scales up its AI enforcement.

We anticipate that the new NIST group will work toward guidance and other documents that directly address the use of AI-powered employment screening tools. NIST’s core function, as its name implies, centers on crafting technical standards. President Biden’s Executive Order directs NIST to “establish guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems,” and tasks it with “launching an initiative to create guidance and benchmarks for evaluating and auditing AI capabilities”. While the October 30 Executive Order emphasized that broad mandate, Wednesday’s unveiling of the AI Safety Institute, with its emphasis on technical assistance in support of enforcement, highlights the Biden Administration’s attention to the government’s role as the enforcer of existing civil-rights laws.

We believe that even if the EEOC does not immediately adopt the NIST group’s recommendations formally as enforcement guidance or mandatory requirements for employers, it may still endorse them as practices that AI developers and deployers should aspire to follow.

2. Implications for Employers of the Draft Guidance on the Federal Government’s Use of AI

Also on November 1, the White House’s Office of Management and Budget (OMB) issued draft guidance to the federal government regarding the government’s own use of AI. (The White House’s fact sheet is a good summary.) Public comments are being accepted through December 5. Given the significant momentum behind these efforts, we expect prompt issuance of the final memo.

In our summary of President Biden’s Executive Order from Monday, we predicted that the way the Federal government thinks about AI risk will influence the way private companies think about AI risk. Employers should pay particular attention to how these government-wide AI risk management efforts will influence the EEOC’s thinking on AI risk and risk management, as well as how they may shape the EEOC’s own use of AI.

OMB’s draft guidance purports to speak solely to the federal government’s own use of AI and expressly disclaims application to federal agencies’ regulatory efforts. However, past experience suggests that the federal government will ultimately decide that the AI risk-management “best practices” it applies to itself should also be adopted by private-sector AI deployers. Moreover, federal agencies will be purchasing many types of AI systems from private-sector developers, so the government’s own purchasing requirements will influence the development of systems sold to both the government and private industry.

In its draft guidance to the federal government, the White House is essentially requiring federal agencies to take certain minimum risk-management steps for “safety-impacting AI” and “rights-impacting AI”. Importantly for employers using AI in hiring, the draft OMB guidance has a very broad definition of which employment-related AI applications are presumed to be “rights-impacting” and thus subject to the memo’s minimum risk-management processes. Its definition of “rights-impacting” applications includes those related to:

G. Determining the terms and conditions of employment, including pre-employment screening, pay or promotion, performance management, hiring or termination, time-on-task tracking, virtual or augmented reality workplace training programs, or electronic workplace surveillance and management systems;

Under the draft OMB memo, agencies cannot use “rights-impacting” systems after August 1, 2024, without first taking these steps:

  1. Complete an impact assessment.
  2. Test the AI for performance “in a real world context”.
  3. “Independently evaluate the AI”.

To continue using such systems, agencies must, on an ongoing basis:

  4. Conduct ongoing monitoring and establish thresholds for periodic human review (see the sketch below).
  5. Mitigate emerging risks to rights and safety.
  6. Ensure adequate human training and assessment.
  7. Provide appropriate human consideration as part of decisions that pose a high risk to rights or safety.
  8. Provide public notice and plain-language documentation through the AI use case inventory.
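
Item 4 is perhaps the most operational of these steps. As a purely illustrative matter, the escalation logic it contemplates can reduce to something as simple as the following Python sketch; the monitored metric, the readings, and the 0.80 threshold are our own hypothetical choices, as the memo leaves the selection of metrics and thresholds to each agency.

```python
# A minimal sketch of item 4's "thresholds for periodic human review";
# the metric and threshold are illustrative, not prescribed by the memo.

REVIEW_THRESHOLD = 0.80  # hypothetical floor on a monitored impact ratio

def needs_human_review(impact_ratio: float) -> bool:
    """Escalate to human reviewers when the monitored metric degrades."""
    return impact_ratio < REVIEW_THRESHOLD

# Hypothetical monitoring output from three successive review periods.
for period, ratio in enumerate([0.92, 0.88, 0.79], start=1):
    if needs_human_review(ratio):
        print(f"period {period}: ratio {ratio:.2f} -- escalate for human review")
```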

Depending on the system, the “impact assessment” requirement could range from a simple review to a complex evaluation.

Importantly, the OMB memo’s discussion of impact assessments includes the directive that, as part of the impact assessment, agencies assess the quality and appropriateness of relevant data, including the data the AI was trained on. This data assessment is in addition to the requirement that agencies assess disparate impact. Specifically, the OMB memo requires agencies to ensure that training data “is adequately representative of the communities who will be affected by the AI, and has been reviewed for improper bias based on the historical and societal context of the data”. We note that this process-focused mandate to use adequately diverse data sources goes beyond the results-focused analysis required by the Uniform Guidelines on Employee Selection Procedures of 1978, the standards discussed in the EEOC’s May 2023 technical assistance on the applicability of Title VII to AI.
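
To make the process-focused data review concrete, below is a minimal Python sketch of one way an agency might compare each group’s share of the training data against its share of the affected population. The group labels, counts, and five-point flagging threshold are our own illustrative assumptions; the OMB memo prescribes no particular formula.

```python
from collections import Counter

# A minimal sketch of a process-focused representativeness review;
# the demographic labels and thresholds here are purely hypothetical.

def representation_gaps(training_groups, population_shares):
    """Compare each group's share of the training data against its
    share of the population the AI system will affect."""
    counts = Counter(training_groups)
    total = sum(counts.values())
    return {group: counts.get(group, 0) / total - pop_share
            for group, pop_share in population_shares.items()}

# Illustrative inputs only -- a real review would use actual data.
training_groups = ["A"] * 700 + ["B"] * 200 + ["C"] * 100
population_shares = {"A": 0.5, "B": 0.3, "C": 0.2}

for group, gap in representation_gaps(training_groups, population_shares).items():
    flag = "under-represented" if gap < -0.05 else "ok"
    print(f"group {group}: gap {gap:+.2f} ({flag})")
```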

Employers using AI should note that the minimum requirements set forth in OMB’s draft memo also go beyond the pre-deployment bias audit requirements of New York City’s Local Law 144, whose enforcement began in July 2023. (Additionally, the scope of employment-related AI applications presumed by OMB’s draft guidance to be “rights-impacting” extends well beyond the New York City law’s narrow definition of “automated employment decision tool”.)
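
By contrast, the results-focused analysis contemplated by the Uniform Guidelines and Local Law 144 looks at selection outcomes. Below is a minimal Python sketch of a selection-rate impact ratio in that spirit; the group labels and applicant counts are hypothetical, and under the familiar four-fifths rule of thumb, ratios below 0.8 draw scrutiny.

```python
# A minimal sketch of a results-focused selection-rate analysis, in the
# spirit of Local Law 144's impact ratios and the Uniform Guidelines'
# four-fifths rule of thumb; all inputs are hypothetical.

def impact_ratios(selected: dict, applicants: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    top_rate = max(rates.values())
    return {g: rate / top_rate for g, rate in rates.items()}

applicants = {"group_1": 400, "group_2": 300}
selected = {"group_1": 120, "group_2": 60}  # hypothetical outcomes

for group, ratio in impact_ratios(selected, applicants).items():
    flag = " <-- below the 0.8 rule of thumb" if ratio < 0.8 else ""
    print(f"{group}: impact ratio {ratio:.2f}{flag}")
```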

We’re already seeing movement on one of these points. In remarks in October 2023, EEOC Chair Burrows emphasized the need for AI to be trained on diverse data sources.

3. President Biden’s Executive Order Emphasizes Enforcement Efforts

We expect that EEOC leadership will want to be an active participant in the meeting of the heads of Federal civil rights offices that the Department of Justice will convene by the end of January 2024 “to discuss comprehensive use of their respective authorities” to address potential civil-rights harms arising out of the use of AI. President Biden’s Executive Order specifically mentions potential harm arising out of “issues related to AI and algorithmic discrimination” and directs the civil rights offices and independent agencies, like the EEOC, to increase their coordination on those issues.

President Biden’s Executive Order also directs the agencies to increase their outreach to external stakeholders “to promote public awareness of potential discriminatory uses and effects of AI”. Increased federal outreach in this realm is highly likely to encourage workers who believe they have experienced unlawful discrimination to file a charge with the EEOC, which will investigate it.

With respect to the EEOC, President Biden’s directives align with the agency’s ongoing enforcement priorities. In April 2023, EEOC Chair Charlotte Burrows joined the heads of three other federal agencies in a press release touting the agencies’ “commitment to enforce their respective laws and regulations to promote responsible innovation in automated systems.” Additionally, in May 2023, Chair Burrows asked all EEOC personnel to attend an AI training about how front-line staff could “identify AI-related issues in [their] enforcement work”. And as discussed previously on the Workplace Class Action Blog, in August 2023 the EEOC entered into a settlement with iTutorGroup over a lawsuit that many have called the EEOC’s “first-ever” AI hiring discrimination lawsuit, even though the underlying technology simply asked job applicants for their date of birth and was configured to automatically reject female applicants age 55 or older and male applicants age 60 or older. (To be clear, automatically rejecting older job applicants whose birthdates are already known does not require any sort of artificial intelligence or machine learning.)
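
To underscore that parenthetical, the screening rule described above reduces to a few lines of deterministic code. The sketch below is ours, with the cutoffs taken from the description of the complaint above; no model, training data, or machine learning is involved.

```python
# A minimal sketch of the hard-coded rule described above; the function
# and inputs are illustrative, not the actual iTutorGroup software.

def auto_reject(sex: str, age: int) -> bool:
    """Return True if the applicant would be automatically rejected."""
    return (sex == "female" and age >= 55) or (sex == "male" and age >= 60)

print(auto_reject("female", 55))  # True  -- automatically rejected
print(auto_reject("male", 59))    # False -- passes this screen
```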

And in September 2023, the EEOC’s new Democratic majority approved the EEOC’s Strategic Enforcement Plan in a 3-2 party-line vote. The approved SEP tells EEOC personnel, and clarifies for the public, which issues the Commission will prioritize in its enforcement, outreach, and other efforts. The first listed priority in the SEP is a focus on discriminatory recruitment and hiring practices, with a particular emphasis on the use of technology, AI, and machine learning in job advertisements, recruiting, and hiring decisions.

4. Upcoming Guidance from OFCCP About Federal Contractors’ Use of AI

President Biden’s Executive Order directed that within a year, the Secretary of Labor shall publish guidance to federal contractors regarding “nondiscrimination in hiring involving AI and other technology-based hiring systems.”

Employers not directly subject to OFCCP audits should still pay close attention to what the Department of Labor does here. There is always the potential for the Department’s guidance to be held out as a broader standard that employers everywhere might incorporate into their practices, and enforcement standards the Department articulates for federal contractors are likely to be used by the EEOC in its own enforcement efforts.

Implications for Employers

President Biden’s Executive Order on AI is far-reaching, tasking multiple federal agencies with harnessing the power of AI, for both the government and the American economy as a whole, while also working to manage the risks AI presents. There is a strong enforcement component to the actions the Executive Order sets in motion, and employers should expect the EEOC’s activity in this area to continue to increase. Moreover, employers should be prepared for heightened coordination on AI issues between the EEOC and its peer civil-rights enforcement agencies, as well as with NIST on technical issues and standards. Finally, the ways the federal government assesses AI risk, establishes minimum AI risk-management practices, and implements AI risk-management frameworks are likely to be held out as exemplary practices that the private sector will be encouraged to adopt.

We will continue to monitor these developing issues, and Seyfarth plans to provide additional updates as we dive more deeply into President Biden’s comprehensive EO on AI and the actions it set in motion.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Seyfarth Shaw LLP | Attorney Advertising
