Key Principles From the EEOC's Latest Guidance on Employers' Use of AI Tools

Cooley LLP

Recently, the US Equal Employment Opportunity Commission (EEOC) made clear that it intends to make discrimination caused by artificial intelligence (AI) tools an enforcement priority over the next four years. This enforcement priority follows the EEOC’s 2021 announcement of its Artificial Intelligence and Algorithmic Fairness Initiative, which launched an agencywide effort to ensure AI tools and other emerging technologies used in making employment decisions comply with the federal civil rights laws that the EEOC enforces.

In keeping with the agency's new focus on this issue, on May 18, 2023, the EEOC issued a "technical assistance document" that provides employers using AI in employment decisions with guidance on compliance considerations under Title VII. This recent guidance follows the agency's technical assistance document on AI and the Americans with Disabilities Act, issued in May 2022. While neither guidance document has the force of law, both represent warnings from an agency that has focused on employers' use of automated systems.

Although Title VII applies to all employment practices, the recent guidance is limited in scope to employers' use of "algorithmic decision-making tools" (i.e., software or applications that incorporate a set of instructions intended to accomplish a defined end goal) in selection procedures – including hiring, promotion and firing – and the potential for such use to have an adverse or disparate impact on the basis of race, color, religion, sex or national origin. As the guidance makes clear, employers and software vendors using AI to help develop or implement algorithmic decision-making tools need to analyze these tools to confirm their use does not adversely impact any group protected under Title VII. Below are key principles for employers.

1. A wide variety of AI tools are subject to EEOC scrutiny

The agency identified various examples of algorithmic decision-making tools that may incorporate AI (i.e., AI tools) and result in disparate impacts that trigger Title VII violations. These AI tools – which implement algorithmic decision-making at different stages of the employment process – include:

  • Résumé scanners that prioritize certain keywords.
  • Employee-monitoring software that rates employees on the basis of keystrokes or other factors.
  • Virtual assistants or chatbots.
  • Video interviewing software.
  • Testing software that provides “job fit” scores for applicants regarding their personalities, aptitudes, cognitive skills or perceived cultural fit.

2. Selection procedures using AI tools must be job-related and consistent with business necessity

While the focus of the guidance is on AI tools, the agency recognizes that the algorithmic decision-making tools discussed may not actually rely on AI to accomplish the defined goal. As such, the guidance is directed at all algorithmic decision-making tools – not just those that employ AI. The agency makes clear that employers’ use of algorithmic decision-making tools can constitute selection procedures subject to the EEOC’s 1978 Uniform Guidelines on Employee Selection Procedures (which provides guidance for employers in determining whether their tests and selection procedures are Title VII-compliant) when the tools are used to make or inform decisions about whether to hire, promote, terminate or take similar actions.

Thus, the guidance confirms the agency's expectation that employers assess whether selection procedures incorporating such tools have an adverse impact on a particular group and, if the use is not job-related and consistent with business necessity, take appropriate remedial measures. Further, the guidance indicates that even where an employer can demonstrate that a selection procedure is job-related and consistent with business necessity, it should still assess whether less discriminatory alternatives are available that would be "comparably as effective" but would not disproportionately exclude members of protected classes.

3. The ‘four-fifths rule’ is a general rule of thumb but is not dispositive in all circumstances

The EEOC also noted that the "four-fifths rule" – under which a selection rate for one group is "substantially" different from the selection rate of another group if the ratio between the two rates is less than four-fifths (or 80%) – can continue to be used as a "general rule of thumb" to assess whether a selection process could have a disparate impact. However, the agency clarified that the rule may not be appropriate in all circumstances. For example, relying on it may be inappropriate where an employer's actions have disproportionately discouraged individuals from applying on the basis of a protected characteristic, or where it is not a reasonable substitute for a test of statistical significance. To that end, the agency recommends that employers ask software vendors whether they relied on the four-fifths rule or on a standard such as statistical significance, where applicable.
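For illustration only – this sketch is not part of the EEOC's guidance, and all counts and group labels are hypothetical – the short Python example below shows how an analyst might apply the four-fifths rule of thumb to two groups' selection rates and, because the agency cautions that the rule is not a substitute for a test of statistical significance, pair it with such a test (here, Fisher's exact test from SciPy).

    # Hypothetical example: compare selection rates for two applicant groups.
    from scipy.stats import fisher_exact

    selected_a, total_a = 48, 100   # Group A: 48 of 100 applicants selected
    selected_b, total_b = 30, 100   # Group B: 30 of 100 applicants selected

    rate_a = selected_a / total_a   # 0.48
    rate_b = selected_b / total_b   # 0.30

    # Four-fifths rule of thumb: compare the lower rate to the higher rate.
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    print(f"Selection rates: {rate_a:.0%} vs. {rate_b:.0%}; ratio = {ratio:.2f}")
    if ratio < 0.8:
        print("Ratio below 4/5: rates are 'substantially' different under the rule of thumb.")

    # Statistical significance check on the underlying 2x2 contingency table,
    # since a rate ratio alone ignores sample size.
    table = [[selected_a, total_a - selected_a],
             [selected_b, total_b - selected_b]]
    _, p_value = fisher_exact(table)
    print(f"Fisher's exact test p-value: {p_value:.4f}")

With these invented counts, the ratio is 0.30 / 0.48 = 0.625 – below four-fifths – and the test returns a small p-value, so both checks point the same way; on small applicant pools, the two can disagree, which is the agency's point about the rule of thumb.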

4. Employers are responsible for use of algorithmic decision-making tools

The agency’s guidance makes clear that employers cannot rely on the representations of outside vendors or developers of AI tools regarding any disparate impact assessments, as employers can be held responsible for the actions of vendors who act on their behalf. The guidance notes, “if the vendor is incorrect about its own assessment and the tool does result in either disparate impact discrimination or disparate treatment discrimination, the employer could still be liable.” The agency recommends that employers ask vendors if steps have been taken to evaluate whether the tool’s use causes a substantially lower selection rate for individuals with a protected characteristic.

5. Employers should proactively conduct self-analyses of AI tools for discrimination issues

The guidance concludes with the recommendation that employers conduct ongoing self-analyses of their AI tools for potential discrimination issues. The EEOC recommends that employers who discover that a tool would have an adverse impact take steps to reduce the impact, or select a different tool, to avoid potential Title VII violations.

Next steps

The EEOC's latest guidance follows the agency's recent joint pledge with other federal agencies against discrimination and bias in automated systems, as noted in this May 2023 Cooley alert on New York City and automated employment decision tools. Along with this increasing federal focus, employers using AI in employment processes should be on the lookout for developments regulating the use of these tools at the state and local levels, including in New York City.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Cooley LLP | Attorney Advertising
