ADA Ex Machina: The EEOC Issues Guidance on the ADA and the Use of Artificial Intelligence in Making Employment Decisions

Dechert LLP
[co-author: Julia Canzoneri]*

Background

The United States Equal Employment Opportunity Commission (EEOC) on May 12, 2022, issued a series of questions and answers that address how employer use of artificial intelligence (AI), algorithms, and software in hiring and employment practices may risk violating the Americans with Disabilities Act (ADA). See EEOC Technical Assistance Document, “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees,” EEOC-NVTA-2022-2 (May 12, 2022) (available here).

This recently issued federal guidance appears to be the next step in the EEOC's endeavor to regulate employer use of AI and algorithm-based technology more heavily. In October 2021, the EEOC began an initiative to investigate and monitor the use of AI and information technology in hiring and employment practices. The EEOC's initiative was intended to examine and prevent bias in hiring and employment perpetuated by AI and algorithmic decision-making tools. When the initiative was announced, EEOC Chair Charlotte A. Burrows stated that "bias in employment arising from the use of algorithms and AI falls squarely within the Commission's priority to address systemic discrimination."

The EEOC’s action is consistent with current trends in many jurisdictions. In recent years, state and local legislatures have increasingly moved to monitor, and to prevent, discriminatory use of technology in hiring and employment practices. In December 2021, New York City passed a law requiring employers to subject AI and algorithm-based technology to "bias audits" if they wish to use such technology to assess job applicants or evaluate employees for promotions. That law takes effect on January 1, 2023. In addition, multiple states have passed legislation regulating employer use of biometric data. Maryland and Illinois, in particular, have passed legislation prohibiting employers from collecting facial recognition data from applicants without first obtaining the applicants' consent.

The Americans with Disabilities Act (ADA)

The ADA generally prohibits discrimination against employees and job applicants on the basis of disability. Under the ADA, a physical or mental impairment is considered a "disability" where the impairment, if untreated, would "substantially limit" one or more "major life activities." The ADA requires that employers provide a qualified employee or applicant with a disability with a "reasonable accommodation" that would allow the individual to perform the essential functions of the position. In addition, under the ADA, employers may not pose "disability-related inquiries" or seek information that qualifies as a "medical examination" before giving applicants conditional employment offers, nor may employers make such inquiries or require such examinations during employment unless doing so is “job-related and consistent with business necessity.” According to the EEOC, disability-related inquiries include questions that directly inquire as to whether an individual has a disability, as well as questions that are likely to elicit information about an individual's disability. Questions may be considered a "medical examination" if they inquire into information regarding an individual's physical or mental health and impairments.

Use of AI and Algorithms in Hiring and Employment

According to the EEOC, employers use AI and algorithmic decision-making technology in a variety of ways in the hiring process and in making employment-related decisions. While such tools and software have not replaced employers’ personal evaluations of applicants and employees, AI and algorithmic decision-making tools are playing an increasing role in how individuals are assessed for employment positions. As an example, where previously employers may have reviewed resumes individually during each step of the application process, resume scanners can now be used to prioritize resumes that contain certain keywords. Similarly, video interviewing software may replace in-person interviewing to evaluate applicants’ speech patterns and expressions, allowing for greater efficiency in the hiring process. Additionally, while promotional opportunities may in the past have been granted based primarily on subjective qualities, employers may now use employee monitoring software that evaluates employees based on more “objective” factors, such as keystrokes. Although the use of technology-aided processes is often in furtherance of laudable goals, such as eliminating the implicit and explicit biases that can accompany subjective decision-making, the EEOC has cautioned that employers must be cognizant of the risks of disability-based discrimination that can arise in connection with the use of such processes.

The EEOC guidance describes what constitutes software, algorithms, and AI, and provides examples of each in the hiring and employment context. In the employment context, such software includes resume-screening software and chatbot software for hiring and workflow. Algorithms are sets of instructions that a computer follows to accomplish a particular task or goal, and in the employment context they often take the form of algorithmic decision-making tools used throughout hiring and employment assessment. In hiring and employment, AI may include machine learning and natural language processing used in procedures such as video interviews.
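
To illustrate what an algorithmic decision-making tool can look like at its simplest, the sketch below shows a hypothetical keyword-based resume screener of the kind described above. The keywords, threshold, scoring function, and sample resumes are all assumptions made for illustration; they are not drawn from the EEOC guidance or from any actual vendor's product.

```python
# Hypothetical sketch of a keyword-based resume screener, for illustration only.
# The keywords, threshold, and sample resumes below are invented; no real
# product, employer, or dataset is implied.

REQUIRED_KEYWORDS = {"project management", "sql", "budgeting"}  # assumed criteria
THRESHOLD = 2  # assumed minimum number of keyword matches to advance

def keyword_score(resume_text: str) -> int:
    """Count how many of the required keywords appear in the resume text."""
    text = resume_text.lower()
    return sum(1 for keyword in REQUIRED_KEYWORDS if keyword in text)

def prioritize(resumes: dict[str, str]) -> list[str]:
    """Return applicants who meet the threshold, ranked by keyword score.
    Applicants below the threshold are 'screened out' without human review."""
    scores = {name: keyword_score(text) for name, text in resumes.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [name for name in ranked if scores[name] >= THRESHOLD]

if __name__ == "__main__":
    sample = {
        "Applicant A": "Led project management and budgeting; built SQL reports.",
        "Applicant B": "Managed budgets and data analysis for a small team.",
    }
    print(prioritize(sample))  # ['Applicant A'] -- Applicant B never reaches a recruiter
```

The point of the sketch is simply that the scoring rule, not a human reviewer, decides who advances; that is why the EEOC focuses on whether such rules inadvertently disadvantage applicants with disabilities.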

In its guidance, the EEOC identifies three broad ways in which the use of AI and algorithmic decision-making tools may lead to discrimination in hiring and employment practices. First, an employer may discriminate against an applicant or employee by failing to provide a reasonable accommodation for that individual's disability. Second, using algorithmic decision-making tools may "screen out" qualified applicants on the basis of disability during the hiring process. Third, employers may discriminate against applicants by posing disability-related inquiries or seeking information that qualifies as a medical examination before giving the applicant a conditional offer of employment.

The EEOC guidance discusses each potential area of discrimination in greater detail and provides examples of how each form of discrimination under the ADA may occur in the context of hiring and employment. These examples include:

  • Failure to provide a reasonable accommodation, such as extended time or an alternative version of a knowledge test, to an applicant whose disability limits manual dexterity and who must take a knowledge test that requires the use of a keyboard or trackpad;
  • Use of video interviewing software that is intended to analyze an applicant's problem-solving ability through speech patterns, which may screen out qualified applicants with a speech impediment;
  • Use of a personality test that delves into an applicant’s “optimism,” which may screen out an applicant with a mental illness such as Major Depressive Disorder who responds to such questions negatively;
  • Use of a chatbot programmed to reject applicants who indicate they have a significant gap in employment history, where the gap is the result of a disability, causing the chatbot to screen out a qualified applicant because of their disability (see the illustrative sketch following this list); and
  • Use of an algorithmic decision-making tool that inadvertently identifies applicants with medical conditions, and what those conditions are, which would violate the ADA if the tool were used in this way before the employer gives the applicant a conditional offer of employment.
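
The chatbot example above reduces to a single, facially neutral screening rule. The following is a minimal, hypothetical sketch of such a rule; the threshold, field names, and dates are assumptions for illustration and are not taken from the EEOC guidance or any actual product. It shows how a rule about employment gaps can reject a qualified applicant whose gap resulted from a disability without ever asking why the gap exists.

```python
from datetime import date

# Hypothetical chatbot screening rule, for illustration only: reject any
# applicant whose employment history contains a gap longer than six months.
# The rule never asks why a gap exists, so an applicant whose gap resulted
# from treatment for a disability is rejected like anyone else.

MAX_GAP_MONTHS = 6  # assumed threshold

def months_between(earlier: date, later: date) -> int:
    """Approximate the number of whole months between two dates."""
    return (later.year - earlier.year) * 12 + (later.month - earlier.month)

def passes_gap_rule(employment_periods: list[tuple[date, date]]) -> bool:
    """Return False if any gap between consecutive jobs exceeds the threshold."""
    periods = sorted(employment_periods)
    for (_, previous_end), (next_start, _) in zip(periods, periods[1:]):
        if months_between(previous_end, next_start) > MAX_GAP_MONTHS:
            return False
    return True

# A qualified applicant who spent roughly a year in treatment for a disability:
history = [
    (date(2015, 1, 1), date(2019, 6, 30)),
    (date(2020, 8, 1), date(2022, 4, 30)),  # gap of just over a year
]
print(passes_gap_rule(history))  # False -- rejected before any human review
```

One reason the guidance stresses informing applicants that reasonable accommodations are available (discussed below) is that a rule like this cannot distinguish a disability-related gap from any other.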

The EEOC has emphasized that such actions need not be purposeful on the part of the employer to qualify as discrimination under the ADA. According to the EEOC, employers are also responsible for how other entities, such as third-party software vendors, use their algorithmic decision-making tools to assess applicants and employees on the employer's behalf. For example, an employer may be liable for the inaction of a third-party software vendor where the employer has contracted with that vendor to administer a pre-employment knowledge test, an applicant has informed the third-party vendor that a disability makes it difficult to take the pre-employment test, and the vendor does not provide a reasonable accommodation to the applicant.

Practical Guidance for Employers

The EEOC has provided a summary of “promising” steps that employers can take, when using AI and algorithmic decision-making tools, to avoid discriminatory hiring and employment practices such as failing to provide reasonable accommodations to applicants and employees with disabilities, screening out qualified applicants and employees during assessments and evaluations, and incurring liability for the discriminatory actions of third-party entities. These steps include:

  • Implementing training for staff to recognize and process requests for reasonable accommodation as quickly as possible and to develop alternative methods of rating applicants and employees where evaluation methods may be inaccessible or may unfairly disadvantage an individual who has requested a reasonable accommodation due to a disability;
  • Instructing third-party entities such as software vendors to forward all reasonable accommodation requests to the employer, or entering into agreements with such third parties requiring them to provide reasonable accommodations on the employer’s behalf;
  • Using algorithmic decision-making tools that have been designed to be accessible to individuals with disabilities;
  • Attempting to proactively limit chances that the technology will disadvantage individuals with disabilities by informing applicants and employees that reasonable accommodations are available for those with disabilities;
  • Using algorithmic decision-making tools that only measure abilities or qualifications that are truly necessary for the job; and
  • Measuring necessary abilities or qualifications directly, rather than measuring characteristics or scores that are correlated with those abilities or qualifications.

Any employer that is considering implementing algorithmic or other technology-aided processes in its hiring and promotion processes must be familiar with the EEOC guidance. Employers should assess any AI, algorithmic decision-making tools, or other software that they use to evaluate applicants and employees, as well as any such technology that third parties, such as software vendors, have designed and administer for hiring and employment purposes on the employer’s behalf. Employers should also review their use of technology such as chatbots, resume-screening software, and video interviewing software to measure or assess applicants and employees, to ensure that these tools do not operate in ways that inadvertently disadvantage applicants or employees with disabilities or otherwise violate the ADA.
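
As a purely illustrative starting point for such a review, the sketch below compares the rate at which a tool advances applicants who requested a disability-related accommodation with the rate for other applicants. The records, field names, and the idea of using a simple selection-rate ratio are assumptions made for illustration; they are not an audit methodology prescribed by the EEOC guidance or by the New York City law discussed above.

```python
# Hypothetical spot-check of a screening tool's outcomes, for illustration only.
# A low ratio is a signal to investigate further, not a legal conclusion.

def selection_rate(records: list[dict]) -> float:
    """Fraction of applicants in `records` whom the tool advanced."""
    return sum(r["advanced"] for r in records) / len(records) if records else 0.0

def accommodation_rate_ratio(applicants: list[dict]) -> float:
    """Ratio of the selection rate for applicants who requested an accommodation
    to the selection rate for applicants who did not."""
    requested = [a for a in applicants if a["requested_accommodation"]]
    others = [a for a in applicants if not a["requested_accommodation"]]
    return selection_rate(requested) / selection_rate(others)

if __name__ == "__main__":
    applicants = [
        {"requested_accommodation": True, "advanced": False},
        {"requested_accommodation": True, "advanced": True},
        {"requested_accommodation": False, "advanced": True},
        {"requested_accommodation": False, "advanced": True},
        {"requested_accommodation": False, "advanced": True},
        {"requested_accommodation": False, "advanced": False},
    ]
    print(round(accommodation_rate_ratio(applicants), 2))  # 0.67 -- worth a closer look
```

Such a comparison is only a first-pass signal; the guidance's broader recommendations, such as using tools designed to be accessible and measuring only abilities that are truly necessary for the job, still apply.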

 

*Law Clerk

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Dechert LLP | Attorney Advertising
