Artificial Intelligence and the Americans with Disabilities Act

DarrowEverett LLP

In recent years, the use of artificial intelligence (“AI”) software in recruiting and hiring has grown rapidly. Employers have embraced AI as a tool for streamlining the hiring process and informing a range of employee-related decisions, while regulators have been slower to accept the practice. With AI’s increased prevalence, federal agencies such as the Equal Employment Opportunity Commission (“EEOC”) have stepped in to help employers use AI while complying with Title I of the Americans with Disabilities Act (“ADA”). Although AI software can be a helpful tool in making crucial employment decisions, employers need to understand its inherent risks and how to comply with the ADA while avoiding potential liability.

Title I of the ADA, a federal civil rights law, prohibits covered employers from discriminating on the basis of disability. While most employers use AI software developed by a third party, they assume liability for any discrimination the software causes and are responsible for ensuring that its use does not discriminate against applicants with disabilities. AI software may violate the ADA in several ways, including:

  1. The employer does not provide a “reasonable accommodation” needed for an individual to be rated fairly and accurately by the algorithm.
  2. The algorithm “screens out” an applicant with a disability by lowering their performance on a selection criterion or preventing them from meeting it.
  3. An assessment includes disability-related inquiries before a conditional offer of employment.

Ensuring a “Reasonable Accommodation”

When using AI software in the hiring process, employers must ensure that applicants with disabilities have a way to meet the standards required by any AI-driven testing or evaluation. Employers are not required to lower production or quality standards for a specific position or eliminate an essential job function, but they must provide reasonable accommodations that enable applicants with disabilities to meet qualification standards. For example, an employer can supply testing materials in an alternative format or offer an alternative method of evaluation.

Avoiding “Screening Out”

Before implementing AI tools, and periodically thereafter, employers should review them for possible biases. While many developers advertise their AI software as “bias-free,” employers should confirm that this covers not only race, sex, and religious bias but also disability discrimination. In particular, employers should verify that the software does not measure personality, cognitive, or neurocognitive traits in a way that may screen out people with certain cognitive, intellectual, or mental health-related disabilities. One way to do so is to conduct an independent bias audit of any AI tools in use, testing for disparate treatment of individuals based on protected characteristics. Although AI tools are not yet subject to specific federal regulations, employers would be well served to begin conducting bias audits now, as individual states and localities have already started regulating employer use of AI tools, including a recently passed New York City law (effective January 1, 2023) that mandates bias audits for all automated employment decision tools.
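To make the audit concept concrete, one widely used screen for adverse impact is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures: if the selection rate for a protected group is less than 80% of the rate for the comparison group, the tool may warrant closer scrutiny. The sketch below is purely illustrative (the function names, groupings, and sample numbers are hypothetical, and a real audit would involve counsel and a qualified auditor):

```python
def selection_rate(selected, total):
    """Fraction of applicants in a group who advanced past the screening tool."""
    return selected / total if total else 0.0

def adverse_impact_ratio(protected_rate, comparison_rate):
    """Ratio of the protected group's selection rate to the comparison group's."""
    return protected_rate / comparison_rate if comparison_rate else 0.0

def flags_adverse_impact(protected_selected, protected_total,
                         comparison_selected, comparison_total,
                         threshold=0.8):
    """Return True if the ratio falls below the four-fifths (80%) benchmark."""
    ratio = adverse_impact_ratio(
        selection_rate(protected_selected, protected_total),
        selection_rate(comparison_selected, comparison_total),
    )
    return ratio < threshold

# Hypothetical numbers: 12 of 40 applicants who disclosed a disability
# advanced, versus 30 of 60 who did not (rates 0.30 vs 0.50, ratio 0.60).
print(flags_adverse_impact(12, 40, 30, 60))  # True -> flagged for review
```

A ratio below the threshold does not itself establish an ADA violation; it is a statistical signal that the tool should be examined further.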

Preventing Disability-Related Inquiries

While employers are permitted, and often required, to make health-related inquiries, they must avoid using AI software to do so before a candidate receives a conditional offer of employment. Such inquiries may constitute an ADA violation even when the individual does not have a disability. Questions that would violate the ADA include those that directly or indirectly elicit information about a disability or seek information about an applicant’s physical or mental health. To ensure compliance, employers should avoid any inquiry that could be perceived as health-related, or that would qualify as a medical exam, until after an offer of employment has been made.

Conclusion

When used correctly, AI software is an exciting and valuable tool for employers looking to increase hiring consistency and efficiency while methodically narrowing large applicant pools. However, when taking advantage of this technology, employers must ensure that the AI software does not violate the ADA. They can do so by providing reasonable accommodations so that applicants with disabilities are rated fairly, preventing intentional or unintentional screening out, and avoiding disability-related inquiries before a conditional offer of employment.

DarrowEverett LLP remains dedicated to supporting our clients’ use of emerging technologies and will continue to monitor this evolving topic in order to assist our clients in navigating relevant hurdles and compliance with applicable laws.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© DarrowEverett LLP | Attorney Advertising
