Pre-Employment Testing Principles Apply to AI

Sherman & Howard L.L.C.
The Equal Employment Opportunity Commission (EEOC) recently released limited technical guidance on employer use of artificial intelligence (AI) in hiring, promotion, and firing decisions, extending principles long applied to less cutting-edge pre-employment testing and screening to the more recent AI phenomenon.

The guidance focused specifically on possible employer liability for disparate impact caused by the use of AI. Disparate impact is a form of discrimination, prohibited by Title VII and other employment laws, that can occur when a neutral test, policy, or procedure disproportionately screens out individuals in protected classes. Unlike disparate treatment discrimination, an employer can be liable for disparate impact without having any intent to discriminate. As noted in the guidance, assessing possible disparate impact issues in selection decisions focuses on three questions.

1.  Does the test, policy, or procedure have a disparate impact on one or more protected classes?

2.  If a disparate impact exists, can the employer nevertheless show that the test, policy, or procedure is job related and consistent with business necessity?

3.  If the employer can show that the test, policy, or procedure is job related and consistent with business necessity, does a less-discriminatory alternative exist (in which case the employer might be legally required to use that alternative)?

In presenting its guidance on these questions in the context of AI, the EEOC referred to its Uniform Guidelines on Employee Selection Procedures, originally adopted in 1978 -- practically the Dark Ages as far as technology goes. Specifically, the EEOC reaffirmed that its "four-fifths rule" remains the general rule of thumb for determining whether a selection process results in a disproportionate impact. If the ratio of the selection rate for one class of applicants to another class of applicants is less than four-fifths (80%), the selection rates are "substantially different" and could reflect disparate impact discrimination.
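The four-fifths comparison is simple arithmetic: divide the lower group's selection rate by the higher group's and compare the result to 0.8. The sketch below illustrates the calculation with hypothetical applicant numbers (the counts and the helper function names are illustrative, not drawn from the EEOC guidance).

```python
# Minimal sketch of the EEOC's four-fifths (80%) rule of thumb.
# All applicant counts below are hypothetical, for illustration only.

def selection_rate(selected, applicants):
    """Fraction of applicants who were selected."""
    return selected / applicants

def four_fifths_check(rate_a, rate_b):
    """Compare the lower selection rate to the higher one.

    A ratio below 0.8 suggests the rates are 'substantially different'
    under the EEOC's general rule of thumb (not a definitive legal test).
    """
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= 0.8

# Hypothetical example: 48 of 80 applicants selected in one group (60%),
# 12 of 40 in another group (30%).
rate_a = selection_rate(48, 80)
rate_b = selection_rate(12, 40)
ratio, satisfied = four_fifths_check(rate_a, rate_b)
print(f"ratio = {ratio:.2f}, satisfies four-fifths rule: {satisfied}")
# 0.30 / 0.60 = 0.50, which is below 0.80 -- "substantially different"
```

As the guidance notes, this is only a rule of thumb: a selection process that satisfies the four-fifths comparison can still produce an unlawful disparate impact.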

The guidance emphasized some key points for employers to consider in assessing possible disparate impact issues in the use of AI for selection.

  • The four-fifths rule is a general rule; disparate impact can still exist even when the four-fifths rule is satisfied.
  • Employers can be liable for disparate impact resulting from the use of a third-party vendor's AI tool. The employer using the tool has the obligation to ensure that its use does not result in unlawful discrimination. If the vendor gets it wrong, the employer is on the hook.
  • More than one algorithm may be effective for purposes of a selection tool. An employer developing its own AI tools may be obligated to evaluate all effective algorithms and use the least discriminatory alternative.

Selection is just one employment area in which AI can be useful -- and problematic. Click here to read Carissa Davis and Melissa Reagan's overview of AI in employment.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Sherman & Howard L.L.C. | Attorney Advertising
