Navigating the Intersection of AI and Human Resources: Strategies for Mitigating Legal Risks*


According to a recent study by the Society for Human Resource Management, 25% of organizations use artificial intelligence (“AI”) to assist with human resources functions, and another 25% anticipate doing so by the end of 2024.

There is no doubt that AI can save time and increase efficiency. The real questions are whether those rewards outweigh the risks and what can be done to mitigate the risks inherent in the use of AI.

AI generates its responses by drawing on large amounts of data gathered from across the globe – meaning that some of what AI “learns” reflects practices that do not comply with U.S. law, let alone Michigan law.

For instance, in August 2023, the Equal Employment Opportunity Commission (“EEOC”) settled an age discrimination claim against iTutorGroup, Inc. for $365,000. iTutorGroup is a China-based company that hired tutors throughout the United States to provide online English language tutoring. The EEOC alleged that iTutorGroup utilized AI to screen applicants, and that the software automatically rejected women who were 55 or older and men who were 60 or older.

Similarly, an HR technology company, Textio, recently analyzed ChatGPT’s ability to effectively perform HR-related tasks. Textio first asked ChatGPT to write job posts. While the initial posts were very generic (and not very engaging), the more specific the prompts became, the more biased the posts grew – many leaning slightly masculine, and many evidencing age bias. Textio then asked ChatGPT to write basic performance feedback for a range of professions. This test demonstrated significant gender bias – with ChatGPT using female pronouns in the reviews of the kindergarten teacher, receptionist, and nurse, and male pronouns in the reviews of the mechanic and construction worker.

Textio also found another interesting trend: the reviews for female employees were 15% longer than those for male employees, and much of the additional feedback was critical. When ChatGPT was asked to “write feedback for an engineer who stops at nothing to hit his/her goals,” the male engineer’s feedback consisted entirely of two paragraphs of positive commentary. The female engineer’s feedback contained the exact same two paragraphs, followed by two additional critical paragraphs with comments such as, “However, it’s important for [Employee Name] to recognize the importance of balancing her drive for success with the needs of the team and organization.” It also suggested that she “be mindful of the importance of work-life balance” and “be willing to listen and consider other ideas and feedback.”

Clearly, the biases present in our world also surface in AI-generated work. There is no doubt that human resources professionals can use AI tools to increase efficiency, but there is similarly little doubt that liability and compliance risks abound.

What can be done to control these risks?

  1. Recognize the risk. There is no way to mitigate risk if you are unaware of it. Employers and HR professionals must recognize, and many already do, that using AI in seemingly harmless ways (like to draft a generic job posting) may still result in unlawful bias or discrimination.
  2. Balance AI usage with a human element. Train employees to review and revise AI-generated work product. AI should not be the final decision maker.
  3. Conduct annual audits. The audits should be designed to confirm that AI usage continues to support business needs and remains non-discriminatory and unbiased.
  4. Incorporate all of the above into a written policy. The policy will drive compliance and serve as an important defensive tool. (While HR professionals need policies, so do employees.)


*The title of this Workplace Law Alert was generated using ChatGPT.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Bodman | Attorney Advertising
