Benefits and Legal Risks of Using Generative AI in Hiring Efforts

Foster Swift attorneys Tony Dalimonte and Michael Cassar recently teamed up for a presentation on the potential legal risks related to the use of artificial intelligence (AI) tools in the recruiting and hiring process.

The information session, which is summarized below, is part of the firm’s Second Wednesday series of monthly presentations on relevant business and legal topics.

Use of AI-Powered Tools for Recruiting and Hiring Is Widespread

It seems like AI is in the news every day and being discussed everywhere. We regularly see and read about the many ways it will change and transform the nature of our lives and occupations.

One area in business that has moved rapidly in adopting and adapting AI is human resources. In a recent survey, over 60 percent of HR professionals at U.S. firms reported that they extensively use AI-assisted tools for recruiting and hiring.

Human resources departments report that AI-assisted tools can greatly reduce the time and cost of hiring by parsing resumes, matching keywords, scheduling interviews, conducting screening procedures and performing related tasks.

The EEOC and Legal Risks Related to Utilizing AI-Assisted Tools in Recruiting and Hiring

There is little doubt that AI-driven tools can deliver benefits and cost savings in the human resources environment. But employers also need to be mindful of the numerous and potentially expensive legal risks associated with the use of AI in screening and evaluating job candidates.

The Equal Employment Opportunity Commission (EEOC) is the agency responsible for enforcing federal laws that make it illegal to discriminate against a job applicant or employee because of the person’s protected characteristics (race, color, religion, sex, gender identity, etc.). The EEOC has made its position clear: the use of AI in recruiting and hiring could violate Title VII of the Civil Rights Act of 1964 and the Americans with Disabilities Act (ADA).

Accordingly, the EEOC has issued two guidance documents which specifically address how AI-assisted tools and processes can violate federal law:

I. AI Hiring Tools and Potential Violations of Title VII

The EEOC’s Title VII guidance emphasizes that employers should carefully consider whether the AI-powered tools they use could deliver results that discriminate against certain job candidates based on protected characteristics. The guidance also makes clear that employers are responsible for ensuring their use of AI tools does not violate Title VII, and that employers who fail to take steps to mitigate the potential for discrimination could be held liable for violating the law.

The EEOC outlined several instances where the use of AI tools during the hiring process could possibly trigger Title VII violations:

  • Resume scanners that prioritize applications based on periods of employment, which could incorrectly penalize women for periods of non-employment due to pregnancy and/or child rearing.
  • Virtual assistants or chatbots that ask job candidates about their qualifications and reject those not meeting pre-defined qualifications.
  • Video interviewing software that evaluates candidates based on their facial expressions or speech patterns.
  • Testing software that provides “job fit” scores for applicants regarding their personalities, aptitudes, cognitive skill and “cultural fit” based on their performance on a game or a more traditional test.

The guidance also reminds employers to be especially vigilant with respect to facially neutral AI tools that can have a disparate impact (a disproportionately negative effect) on a protected group, such as a racial or ethnic minority.

Even if an AI tool is not intentionally discriminatory, it can still have a disparate impact if not intelligently designed or applied.

The agency also suggests that employers be proactive and apply the “four-fifths” rule to determine whether a questionable AI-assisted employment selection procedure is having a disparate impact on a protected group. The four-fifths rule states that a selection rate for a protected group that is less than 80 percent (four-fifths) of the rate for the group with the highest selection rate is considered evidence that disparate impact is likely occurring.
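
For illustration only, the four-fifths comparison is simple arithmetic. The short Python sketch below uses hypothetical applicant and selection counts (not figures drawn from the EEOC guidance) to show how the impact ratio is computed and compared against the 80 percent threshold:

    # Hypothetical illustration of the four-fifths (80 percent) rule.
    # The applicant and selection counts below are invented for this example.

    def selection_rate(selected: int, applicants: int) -> float:
        """Fraction of applicants in a group who were selected."""
        return selected / applicants

    # Suppose an AI screening tool advances 48 of 80 applicants from Group A
    # and 12 of 40 applicants from Group B (made-up numbers).
    rate_a = selection_rate(48, 80)  # 0.60
    rate_b = selection_rate(12, 40)  # 0.30

    # Compare the lower selection rate to the highest selection rate.
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.30 / 0.60 = 0.50

    # A ratio below 0.80 is treated as evidence that disparate impact may be occurring.
    if impact_ratio < 0.80:
        print(f"Impact ratio {impact_ratio:.2f} is below 0.80: possible disparate impact")
    else:
        print(f"Impact ratio {impact_ratio:.2f} meets the four-fifths threshold")

In this hypothetical, Group B’s selection rate is only 50 percent of Group A’s, well under the four-fifths benchmark, so the tool’s results would warrant further review.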

II. AI-Assisted Hiring Tools and Potential Violations of the ADA

The EEOC’s guidance on possible ADA violations resulting from the use of AI-assisted tools in the hiring process states that employers must ensure any technology or process they use does not discriminate against individuals with disabilities. This means that employers would be well advised to:

  • Inform all job candidates upfront that reasonable accommodations related to any aspect of the hiring process can be provided, including technologies such as screen-reading software, Braille displays, text-to-speech systems, etc.
  • Avoid asking any medical- or disability-related questions of job candidates.
  • Ask the vendors of your AI-assisted hiring tools whether they can verify that their products and services do not violate Title VII or the ADA.
  • Document and keep records of all agreements and understandings you have with the vendors of AI tools regarding the legal and regulatory compliance of any software or algorithmic processes being used by your firm.
  • Take the lead in assessing the fairness and legality of the technology-assisted tools used in your hiring process. At the end of the day, employers are liable for any violations of federal law or damages resulting from illegal or improper application of AI tools used for recruitment and hiring.

Conclusion: AI Potential Is Limitless, but Caution Is Advised

The capability of AI to streamline and greatly increase the efficiency and effectiveness of the hiring process has been clearly demonstrated. But employers should still proceed cautiously in utilizing these powerful technologies.

Although the potential of AI appears boundless, its tendency to discriminate against protected classes and cause harm is well documented. Competition and progress dictate that AI tools will become common throughout the workplace, but employers must take proactive steps to ensure that individual rights are protected.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Foster Swift Collins & Smith | Attorney Advertising

Written by:

Foster Swift Collins & Smith