Why Using AI in Employment Practices Can Bring Unwanted Risk

Artificial intelligence (AI) has become ubiquitous, and its impact in the employment arena is notable. Using AI tools in employment can bring efficiency, particularly in the hiring process, but these time-saving tools are not without risk. Used appropriately and thoughtfully, they will assist employers; how to incorporate them safely, however, requires exploration.

AI is being used for recruitment and candidate screening. This includes sourcing potential candidates by scanning social media pages to identify potential fits, and screening existing candidate pools by examining resumes for educational and experiential backgrounds. AI is also used to screen candidates during interviews, with tools that measure a candidate's strengths based on facial expressions, speech patterns, body language, and vocal tone. AI tools can likewise monitor job performance, for example by measuring keystrokes and other factors.

Beyond saving time, AI can bring additional benefits to employers and hiring departments, including enhanced objectivity, reduced bias, and better decision-making. However, the numerous risks of using AI should cause employers to tread cautiously as they incorporate it into their workflow.

For one thing, using AI can raise privacy concerns. If an employee’s personal information is stored in an AI system, employers should have policies in place to obtain employee consent and ensure the information is appropriately secure. Further, the use of AI in tracking performance may raise issues addressed by state privacy and electronic surveillance laws.

Using AI can also raise concerns about transparency and interpretability. If AI makes it difficult to articulate why a particular candidate was not chosen, employers may face added challenges in defending failure-to-hire discrimination claims.

In fact, the potential for bias and discrimination is one of the biggest risks associated with using AI in the hiring process. In one recent case, the EEOC settled age discrimination claims against a company for $365,000 through a consent decree filed in New York, after the company's application software automatically rejected female applicants over the age of 55 and male applicants over the age of 60.

The EEOC has addressed these matters, publishing guidance on how the Americans with Disabilities Act (ADA) relates to AI in the workplace, as well as on adverse impact discrimination through the use of AI. Importantly, the EEOC has said that employers facing Title VII or ADA discrimination claims arising from their use of AI cannot point to a third-party AI vendor as a defense. No matter who designed the software, if an employer uses it and it results in discrimination, the employer can be liable. For this reason, employers should ensure their contracts with AI vendors include an indemnification provision and a requirement that the vendor cooperate if the AI tool becomes the object of litigation.

The EEOC case discussed above involved “disparate treatment” (intentional) discrimination. Recent EEOC guidance, however, focuses more on “disparate impact” discrimination, which results when an employer applies a facially neutral standard for employment decisions that nevertheless has a disproportionate adverse effect on individuals in protected classes. The guidance encourages employers to assess the disparate impact of any AI tool they use. If an employer determines that its AI tool has an adverse impact, it should take steps to reduce that impact or use a different tool; failure to adopt an available, less discriminatory algorithm can itself be a basis for liability.
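
For a concrete sense of what such an assessment can look like, the sketch below applies the “four-fifths rule” from the EEOC's Uniform Guidelines on Employee Selection Procedures, which the agency's recent Title VII guidance cites as a rule of thumb: a group whose selection rate falls below 80% of the highest group's rate may indicate adverse impact. This is a minimal illustration only; the group labels and applicant counts are hypothetical.

    # A minimal four-fifths-rule check. All names and counts here are
    # hypothetical; a real audit would use the tool's actual outcomes.

    def selection_rate(selected: int, applicants: int) -> float:
        """Share of a group's applicants that the tool advanced."""
        return selected / applicants

    # Hypothetical outcomes from an AI resume-screening tool.
    outcomes = {
        "group_a": {"applicants": 200, "selected": 60},  # 30% advanced
        "group_b": {"applicants": 150, "selected": 30},  # 20% advanced
    }

    rates = {group: selection_rate(o["selected"], o["applicants"])
             for group, o in outcomes.items()}
    highest = max(rates.values())

    for group, rate in sorted(rates.items()):
        impact_ratio = rate / highest
        # Under the four-fifths rule, a ratio below 0.80 is commonly
        # treated as preliminary evidence of adverse impact.
        status = "review" if impact_ratio < 0.80 else "ok"
        print(f"{group}: selection rate {rate:.1%}, "
              f"impact ratio {impact_ratio:.2f} ({status})")

Here group_b's impact ratio (0.20 / 0.30, or about 0.67) falls below the 0.80 threshold and would warrant review. Note that the four-fifths rule is a rule of thumb, not a safe harbor; the EEOC's guidance cautions that a tool may still cause unlawful adverse impact even where the 80% threshold is satisfied.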

Some states, such as New York, have proposed legislation that would require independent auditors to review the impact of AI software and require employers to share the audit results with their employees before implementing the software. Given the wide geographic net that more and more employers are casting in our hybrid work world, knowing the employment AI laws of each state is increasingly critical.

AI software can also result in disparate impact discrimination by penalizing gaps in employment, which may disproportionately affect women who have stayed home to raise children. More generally, to the extent an AI tool attempts to mirror the attributes of past successful employees and select candidates matching those attributes, it risks reproducing historical imbalances, and thus adverse impact discrimination, even as workplaces strive to become more representative.

EEOC guidance on the ADA and AI highlights even more AI-related pitfalls for employers. Does the software give an applicant or employee an opportunity to request a reasonable accommodation if needed? Does the algorithm, through its predictive processes, unintentionally violate restrictions on disability-related inquiries and medical examinations? Does the tool, intentionally or unintentionally, screen out individuals with disabilities who may be able to do the job with a reasonable accommodation?

These and other issues are essential for employers to consider before adopting any AI software. While AI brings great promise, it can also be a litigation minefield if poorly implemented. Employers should proceed with caution and consult with counsel as needed.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Tonkon Torp LLP | Attorney Advertising

Written by:

Tonkon Torp LLP
