EEOC Guidance Tackles AI and Other Advanced Technologies in Employment Decision Making

Kelley Drye & Warren LLP

Artificial intelligence (AI) promises new efficiencies in making employment decisions: instead of human eyes having to review stacks of resumes, an algorithm-based selection process aids in making a “rough cut” based on objectively desirable characteristics. This ought to reduce the opportunity for human bias—read “discrimination”—to enter the process. For the same reason, an employer’s use of AI to identify candidates based purely on objective standards minimizes a candidate’s ability to allege that the decision considered a protected status such as race, religion, or national origin—in theory, at least.

Regulators have asked a legitimate question, however: what if the AI algorithm looks for characteristics that disproportionately, even if unintentionally, impact members of one legally protected class more than another? Consider this example: during a Zoom interview, AI reads facial expressions to capture information about mood, personality traits, and even honesty. (Yes, this is a thing.) What if an applicant has limited facial movement because of a stroke? Would that potentially impact AI’s assessment of a candidate’s “mood”? (Hint: yes, it would.)

It is exactly these unintended impacts that have motivated the U.S. Equal Employment Opportunity Commission’s (“EEOC”) recent guidance, published on May 18, 2023, clarifying that even if an employer uses AI to make employment decisions, the protections of Title VII still apply. The EEOC previously addressed similar issues in the context of the Americans with Disabilities Act (“ADA”) in a guidance document issued around this time last year.

The AI-related issues under Title VII and the ADA are similar in some ways, but also distinct. Title VII prohibits discrimination based upon race, color, religion, sex or national origin. The ADA prohibits discrimination based upon a person’s disability. The previous ADA guidance focused primarily on intentional and unintentional bias against disabled applicants, such as a decision-making algorithm failing to consider whether an applicant may be able to perform the role with a reasonable accommodation. In this respect, issues under the ADA may be more individualized to specific applicants and employees. For additional commentary on the previous ADA-targeted guidance and initiatives in other jurisdictions, please be sure to review our prior blog on the topic.

By contrast, the AI-related issues under Title VII are broader, and concern whether a specific tool may cause a disparate impact (or, “adverse impact”) on members of a protected class.

What Does this Mean for Employers?

Employers are free to use AI and other algorithmic-based methods to screen applicants or identify problem employees; however, this technology does not insulate the employer from discrimination claims. Employers have an obligation to reasonably understand how this technology works, and ensure that it is being used in a way that does not disparately impact any protected class without a justifiable basis.

Disparate Impact Analysis

Generally, a policy or employment decision is considered to have a disparate impact when it disproportionately excludes or targets members of a protected group. This can be true even when a policy is facially neutral, such as requiring all applicants to have a minimum level of education.

The EEOC’s recent guidance states that if use of AI or a similar tool selects individuals in a protected group at a “substantially” lower rate than individuals in another group, then the tool will violate Title VII unless the employer demonstrates that the methodology is job related and consistent with business necessity.

The EEOC also addressed what it considers to be “substantial” in terms of any disparate impact analysis. Traditionally, the EEOC has relied upon the “four-fifths rule,” which sets a baseline for whether the selection of one group over another may be disproportionate. (For example: if 60% of all White applicants are selected for a position, and 30% of all Black applicants are selected, then the process would violate the four-fifths rule, because 30 divided by 60 is one-half, which is less than 4/5.) In the recent guidance, the EEOC retreated from the four-fifths rule, stating that it is merely a “rule of thumb” and that it may be inappropriate in some circumstances. Although this rule has been applied by courts and the EEOC in the past, the EEOC explicitly warns employers: “the EEOC might not consider compliance with the rule sufficient to show that a particular selection procedure is lawful under Title VII when the procedure is challenged in a charge of discrimination.”
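
For readers who want to see the arithmetic, below is a minimal sketch (not from the EEOC guidance, and using hypothetical applicant counts) of how the four-fifths comparison in the example above might be calculated. The group labels and numbers are illustrative assumptions only.

```python
# Illustrative only: a minimal sketch of the four-fifths ("80%") comparison
# described above. All applicant counts and group labels are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

# Hypothetical pools mirroring the example: 60% of Group A applicants
# selected versus 30% of Group B applicants selected.
rates = {
    "Group A": selection_rate(60, 100),   # 0.60
    "Group B": selection_rate(30, 100),   # 0.30
}

highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    # Under the traditional rule of thumb, a ratio below 4/5 (0.80) may
    # indicate a disparate impact -- though, as noted above, the EEOC
    # cautions that satisfying the rule is not a safe harbor.
    flag = "possible disparate impact" if ratio < 0.8 else "within 4/5 threshold"
    print(f"{group}: selection rate {rate:.0%}, ratio to highest {ratio:.2f} ({flag})")
```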

It will be important for employers to consult with counsel before taking any action that may disparately impact a certain group, and reliance upon the four-fifths rule by itself may not be sufficient to mitigate any enforcement action by the EEOC.

What if a Vendor or Outside Consultant Creates and Implements the Tool?

The EEOC guidance states that an employer who uses AI or a similar tool to make employment decisions may be held responsible even if the tool was created or implemented by a third party, such as a software vendor. The guidance emphasizes that, when deciding to utilize a vendor, employers have an obligation to question the vendor about whether the tool has been evaluated for adverse impact on any protected group.

From a practical perspective, managers and human resources professionals overseeing these types of decisions may not understand the technical aspects of the tool, and cannot be expected to become experts in AI overnight. However, these decision makers should consult with counsel and their own information technology experts as necessary, and be sure to vet any vendors thoroughly. To the extent that management or human resources recognizes that the tool may be yielding an adverse impact on a protected group, alternative approaches should be considered as soon as possible.

Likelihood of Enforcement

The EEOC often seeks to take on novel cases in new areas of the law that are likely to create employee-friendly court rulings and maximize deterrence. Employers should expect the EEOC and similar state and local agencies to target discrimination issues based upon employers’ use of AI and other algorithmic technology for enforcement actions. The EEOC’s emphasis on disparate impact issues also means that any enforcement action could incorporate a large population of employees or applicants, inherently increasing the “dollar value” for any lawsuit or settlement (in turn, maximizing the deterrence value to the government).

This initiative from the EEOC, together with the enactment of state and local laws like New York City’s Local Law 144 (discussed here), should signal to employers that this is a hot button issue likely to gain even more attention as AI becomes a part of everyday life and decision-making. Employers utilizing AI and other advanced tools to make employment decisions can do so confidently, so long as appropriate controls are put into place to ensure compliance with existing employment laws.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Kelley Drye & Warren LLP | Attorney Advertising

