EEOC Issues New Guidance on Use of Artificial Intelligence in Employment Selection Procedures

Harris Beach Murtha PLLC

The document, entitled "Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964," defines "artificial intelligence" as a "machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments."

In the employment selection context, increasingly common AI tools include resume-screening software, employee-monitoring software, virtual assistants, and video-interviewing software that evaluates a candidate's facial expressions and speech patterns. Notably, this particular EEOC guidance focuses solely on the potential disparate or adverse impact on Title VII-protected categories resulting from the use of facially neutral AI tools; it does not address intentional discrimination via the use of AI.

To assist employers in deciding whether their AI-based tests and selection procedures have an adverse impact on a protected category, the document relies on the Uniform Guidelines on Employee Selection Procedures (the "Guidelines"), a framework for assessing adverse impact adopted in 1978, and confirms that the Guidelines apply equally to AI-based selection tools. Although the scope of the EEOC's guidance is limited, it includes the following key points for employers:

  • If use of a selection tool produces a selection rate for individuals within a protected category that is less than four-fifths (80%) of the rate for the most selected group (the "Four-Fifths Rule of Thumb"), a preliminary finding of adverse impact is likely, and the employer must examine the AI tool to determine whether it in fact has an adverse impact. If it does, the employer must show either that use of the AI tool is job-related and consistent with business necessity under Title VII, or that the four-fifths assessment was in error. (A worked example of the four-fifths calculation appears after this list.)
  • Where an AI selection tool results in disparate impact, an employer may be liable even if the test was developed or administered by an outside vendor. The EEOC recommends that the employer consider asking the vendor what steps it has taken to evaluate the tool for potential adverse impact.
  • Employers should self-audit AI selection tools on an ongoing basis to determine whether they have an adverse impact on protected categories and, where they do, consider altering the tools to minimize that impact.
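
To make the Four-Fifths Rule concrete, the short Python sketch below computes selection rates from applicant and hire counts and flags any group whose rate falls below 80% of the highest group's rate. The group labels and counts are hypothetical assumptions for illustration only, not figures drawn from the EEOC document.

    # Illustrative sketch of the EEOC's "Four-Fifths Rule of Thumb".
    # The applicant and hire counts below are hypothetical, for demonstration only.

    applicants = {"Group A": 200, "Group B": 150}  # candidates screened by the tool
    hires = {"Group A": 100, "Group B": 45}        # candidates the tool selected

    # Selection rate for each group = number selected / number screened.
    rates = {g: hires[g] / applicants[g] for g in applicants}

    # Compare each group's rate to the rate of the most selected group.
    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        flag = ("below four-fifths: possible adverse impact"
                if ratio < 0.8 else "within four-fifths")
        print(f"{group}: selection rate {rate:.0%} "
              f"({ratio:.0%} of highest rate), {flag}")

In this hypothetical, Group B's 30% selection rate is only 60% of Group A's 50% rate, so it falls below the four-fifths threshold and would warrant the further examination the guidance describes.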

Employers using or considering the use of AI-based tools in selecting candidates and employees are urged to keep a close eye on developments in this ever-changing area. The Labor and Employment attorneys at Murtha Cullina will continue reporting on these developments as well.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Harris Beach Murtha PLLC
