EEOC Issues Guidance on Potential Discriminatory Impact of Artificial Intelligence

Williams Mullen
Federal and state civil rights and anti-discrimination laws prohibit employment discrimination based on race, color, national origin, religion, sex, disability, age, genetic information, and other protected characteristics. The rapid advance of Artificial Intelligence (AI) is changing the landscape of technology that employers use in their employment practices. In this latest chapter of anti-discrimination law, the Equal Employment Opportunity Commission (EEOC) recently released its guidance on AI, titled “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964” (AI Guidance). The AI Guidance should prompt employers to assess the AI tools they use in recruitment, career advancement, and employee retention, and to evaluate whether those tools could create liability under Title VII and other anti-discrimination laws because of unintended discriminatory biases inherent in certain AI technologies.

A variety of analytical tools are available on the market today to enable businesses to work more efficiently, optimize employee performance, and reduce traditional overhead costs. In the employment setting, new AI software includes, but is not limited to, resume-screening software, workflow chatbots, video-interviewing software, analytical software, worker management systems, and employee-monitoring software. For several of these new AI tools, an open question remains whether the data sets underlying them have been screened for “robot bias,” which could give rise to employment discrimination claims. Claims arising from AI use will most likely take the form of “disparate impact” discrimination, which applies to employment policies or practices that are facially neutral but have an unjustified adverse impact on members of a protected class.

The AI Guidance states that employers can assess whether an AI tool has an adverse impact on a particular group by checking whether use of that tool “causes a selection rate for individuals in the group that is ‘substantially’ less than the selection rate for individuals in another group.” Stated another way, employers must conduct a statistical analysis to assess whether the AI tool has a disparate impact on members of a protected group in violation of Title VII (or other anti-discrimination laws, such as the Americans with Disabilities Act or the Age Discrimination in Employment Act). The AI Guidance further states that if use of an AI tool has an adverse impact on individuals of a particular race, color, religion, sex, or national origin, on individuals with a combination of such characteristics, or on any other classification protected by law, then use of the tool would violate Title VII unless the employer can show that such use is “job-related and consistent with business necessity.”

Importantly, the AI Guidance clarifies that employers can be responsible under Title VII for AI tools designed or administered by a vendor, which may include situations where the employer has relied on the results of a selection procedure run by a software vendor’s or staffing agency’s AI. This is no different from the historical position the EEOC has taken with other third-party vendors that perform pre-employment screening for criminal backgrounds, credit histories, and the like. The EEOC recommends that an employer deciding whether to rely on a software vendor to develop or administer an algorithmic decision-making tool ask the vendor, at a minimum, whether steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals protected by Title VII. According to the EEOC, even if the vendor is incorrect in its own assessment of whether the tool results in disparate impact discrimination, the employer could still be liable under Title VII.

In the AI Guidance, the EEOC appears to suggest that employers may use the four-fifths statistical rule to assess potential disparate impact liability. The four-fifths rule, referenced in the AI Guidance, can serve as a litmus test of whether the selection rate for one group is “substantially” different from the selection rate for another group. Under the rule, the selection rates are substantially different if the ratio of the lower rate to the higher rate is less than four-fifths (or 80%). The EEOC cautions, however, that this test is merely a “rule of thumb” and “may be inappropriate in certain circumstances.”
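To make the arithmetic concrete, the short Python sketch below shows how such a four-fifths comparison might be run. The applicant and selection counts are invented for illustration; they are not drawn from the AI Guidance, and the sketch is not a substitute for the fuller statistical analysis the EEOC notes may be appropriate in some circumstances.

```python
# Hypothetical four-fifths ("80%") rule check.
# All counts below are invented for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who were selected."""
    return selected / applicants

# Hypothetical outcomes of an AI resume-screening tool for two groups.
rate_a = selection_rate(selected=48, applicants=80)  # 60%
rate_b = selection_rate(selected=12, applicants=40)  # 30%

# Compare the lower selection rate to the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"Group A selection rate: {rate_a:.0%}")  # 60%
print(f"Group B selection rate: {rate_b:.0%}")  # 30%
print(f"Ratio: {ratio:.0%}")                    # 50%

if ratio < 0.8:
    print("Ratio is below four-fifths: possible adverse impact.")
else:
    print("Ratio satisfies the four-fifths rule of thumb.")
```

In this hypothetical, the 30% selection rate is only half the 60% rate, well below the four-fifths threshold, so the tool’s results would warrant closer scrutiny.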

In summary, the potential for algorithmic bias in AI employment screening and evaluation tools warrants careful consideration by employers. Employers may wish to consult with an attorney who is knowledgeable on these issues before using AI in their employment screening, hiring, promotion, or other policies and practices.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Williams Mullen | Attorney Advertising
