AI Bias in the Workplace: Top 4 Takeaways From EEOC Commissioner’s Conversation at FP Conference

Most HR professionals are no strangers to technology, particularly when it comes to using applicant tracking systems and human resource information systems to hire workers and track key employment data. However, recent innovations — such as ChatGPT and other generative AI tools — are changing HR processes and have the potential to both create and eliminate workplace biases. So, what are the major impacts on employers when these tools are used to make hiring, workforce development, and other employment decisions? EEOC Commissioner Keith Sonderling joined Fisher Phillips’ Chairman and Managing Partner John Polson at the recent Fisher Phillips AI Strategies @ Work Conference to discuss the most critical issues facing employers – and we’ve summarized their discussion for those who missed out. 

1. AI Will Change the Workplace – And Employers Need to Adapt As Well

One of the biggest issues HR teams will face from a workforce development perspective is how GenAI will change jobs. AI is already affecting workers in many fields, from the manufacturing front lines to corporate headquarters. Historically, knowledge workers who hold advanced degrees might not have been concerned about how AI would impact their jobs because they saw AI use as a function of automation, but ChatGPT is affecting everyone.

Regardless of how technology evolves, however, employers still have some basic HR and legal decisions to make. For example:

  • Is AI going to change the way your organization gets work done? If so, are you going to invest in upskilling and reskilling your workforce in response to these changes?
  • Are you going to conduct layoffs? If so, who will be impacted?
  • How will these changes affect older workers, workers with disabilities, women, and those from underrepresented groups?

As more businesses utilize generative AI, women and diverse groups are expected to lose their jobs at disproportionate rates. The key, said Sonderling, is to train your workforce on new developments and ensure you’re taking appropriate steps to account for the employees who may be most affected by evolving technology.

2. Employers Deploying AI Tools Need to Monitor for Bias

Many of the workplace issues we’re talking about from a technology perspective are still core HR issues. We’re just applying the existing framework to new tools that we’ve never dealt with before. Consider the following:

AI Can Help Curb Bias

Technology is commonly used to streamline the hiring process. Sonderling noted that AI tools can actually help employers with diversity, equity, and inclusion if carefully designed and properly used. But if not, they can cause problems.

Bias can exist at the earliest stages of the hiring process. Even just from reading a job candidate’s name on a resume, the hiring manager can make assumptions based on gender, national origin, race, or religion. If AI is used in the right way, however, it can remove that kind of bias by looking only at the candidate’s skills and experience, rather than their name and other personally identifying information.
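
To make the idea concrete, here is a minimal Python sketch of that kind of “blind” screening step. The candidate record and field names are hypothetical, and a real system would also have to account for indirect identifiers (schools, addresses, and the like) that can stand in for protected traits.

    # Hypothetical sketch: strip identifying fields so a screening step
    # sees only job-relevant data. Field names are invented for illustration.
    IDENTIFYING_FIELDS = {"name", "email", "photo_url", "address"}

    def redact_candidate(record: dict) -> dict:
        """Return a copy of the record with identifying fields removed."""
        return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

    candidate = {
        "name": "Jane Doe",                # removed before screening
        "email": "jane@example.com",       # removed before screening
        "skills": ["inventory management", "scheduling"],
        "years_experience": 7,
    }

    print(redact_candidate(candidate))
    # {'skills': ['inventory management', 'scheduling'], 'years_experience': 7}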

For example, some employers are using apps to conduct the first round of interviews, which can prevent the biased decisions hiring managers sometimes make based on visual cues. When a job applicant walks into an interview, what do you see? “You see the person,” Sonderling said. That means you inevitably see many things the EEOC says you’re not allowed to base an employment decision on, such as race, gender, disability, or pregnancy. By initially using an app and removing those visual cues, you can make initial decisions based on who you think the best candidate for the job will be.

But You Should Proceed with Caution

Although AI tools may help eliminate bias, they might also create it. What if the app-based interview system isn’t good at picking up accents? Imagine a job candidate with a German accent applying for a grocery store manager position. If the app records their response to a question about how they would handle a screaming customer but only picks up 50% of what they said because of their accent, the German candidate might score lower than other applicants even if they gave a better response. This could lead to a claim of national origin discrimination.

Keep in mind that AI-powered systems are built by humans and generally reflect human judgment. For instance, if a company wants to hire individuals who resemble its already-successful employees and trains an AI tool on those employees’ data, the company’s existing demographics may skew any results the tool provides.
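
A deliberately oversimplified sketch illustrates the point: if a tool scores candidates by similarity to past “successful” hires, and that history is skewed, the scores inherit the skew. The data below is invented.

    # Toy illustration: a tool that mimics past "successful" hires inherits
    # whatever demographics that history happens to contain (data invented).
    past_hires = ["male"] * 9 + ["female"] * 1

    def similarity_to_history(candidate_gender: str) -> float:
        # Naive "looks like our past hires" score.
        return past_hires.count(candidate_gender) / len(past_hires)

    print(similarity_to_history("male"))    # 0.9 -> favored
    print(similarity_to_history("female"))  # 0.1 -> disfavored, purely from history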

Beyond Hiring

Although HR teams commonly incorporate AI tools into their hiring process, don’t forget that technology bias can also impact current employees. As an example, the EEOC has said that bias can occur when using employee monitoring software that rates employees based on their keystrokes or other factors.

Notably, the anti-discrimination laws that the EEOC enforces cover all terms and conditions of employment. For example, Title VII of the Civil Rights Act prohibits employers from discriminating against job candidates or employees “with regard to any term, condition, or privilege of employment” in areas such as “recruiting, hiring, promoting, transferring, training, disciplining, discharging, assigning work, measuring performance, or providing benefits.”

New Technology, Same Rules

The EEOC has made clear that existing agency regulations can apply to situations where employers use AI-fueled selection procedures in employment settings. The agency said this is especially true in “disparate impact” situations – where employers may not intend to discriminate against anyone but deploy any sort of facially neutral process that ends up having a statistically significant negative impact on a certain protected class of workers.   
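
For a rough sense of how such an impact is measured: the EEOC’s Uniform Guidelines on Employee Selection Procedures use the “four-fifths rule” as a rule of thumb, under which a selection rate for any group that is less than 80% of the highest group’s rate is generally treated as evidence of adverse impact. Below is a minimal Python sketch of that comparison; the counts are invented, and a real audit would involve formal statistical testing and legal review.

    # Hypothetical four-fifths (80%) rule check; counts invented for illustration.
    applicants = {"group_a": 100, "group_b": 80}   # applicants per group
    selected   = {"group_a": 50,  "group_b": 20}   # hires per group

    rates = {g: selected[g] / applicants[g] for g in applicants}
    highest = max(rates.values())

    for group, rate in rates.items():
        ratio = rate / highest
        flag = "potential adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
    # group_a: rate 50%, impact ratio 1.00 -> ok
    # group_b: rate 25%, impact ratio 0.50 -> potential adverse impact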

3. EEOC’s Plan for Addressing Potential AI Bias in the Workplace

Sonderling reminded employers that the EEOC isn’t focused on regulating technology. Rather, the agency is going to look at whether the use of technology in the workplace resulted in unlawful employment discrimination. “That’s our profession, and that’s what we know best,” he said.

Whether a manager or a computer made a biased employment decision, the employer is ultimately liable. That’s why employers should consider working with their vendors from the start to ensure they are using the products correctly and reducing the potential for biased decision-making. “We’re going to look at the results,” Sonderling said.

Was technology used to intentionally discriminate based on a protected characteristic? Did the AI tool have a disparate impact on certain groups? Federal anti-bias laws will apply to employment decisions made with AI in the same way they do to decisions made without the use of technology.

4. How to Stay Compliant as You Explore New Technology

Consider creating a strong corporate governance framework related to AI use in the workplace and taking the following actions:

  • Conduct Audits. The EEOC recommends that employers test all employment-related AI tools early and often to make sure they aren’t causing legal harm. In New York City, a new local law requires employers to conduct a bias audit if they use AI tools to hire and promote employees in the city. Audits must be done before using a new automated employment decision tool and annually thereafter to assess disparate impact based on race, ethnicity, and sex. Although the new law applies only to New York City, employers might consider performing a broader audit for all locations and assessing for potential bias based on all protected characteristics. “Proactively doing audits helps with liability,” Sonderling said, “because if you can find the issues, you can fix them before there’s continued discrimination.”
  • Create Robust Policies. Have you developed a corporate statement and employee handbook policies related to AI? You may want to authorize only certain people to use HR-related AI tools after they are trained by the vendor and certified at a certain level. A strong policy will cover legal compliance, state that you will not use AI tools to discriminate, and explain that you will take swift disciplinary action if the tools are used inappropriately. Let employees know who to contact if they have concerns and assure them they will not be retaliated against for reporting actions they perceive as biased. Having these policies and practices in place helps you show a culture of compliance if something goes wrong.
  • Provide AI Training to Employees. A best practice is to train decision-makers and ensure all employees know whether and how they may use AI tools in the workplace. Make sure they are familiar with your policies and practices, and consider having your vendors work with key staff to explain how to properly use your AI programs.
  • Be Proactive. If a federal investigator shows up to your worksite, it’s helpful to show that you carefully selected a product; the vendor trained your relevant employees on how to use the technology; and you developed robust policies on its use, conducted audits, and took swift action to address any misuse. Having that governance in place can put you in a better position with the EEOC.

Conclusion

Don’t forget that you should approach any self-audit with the help of legal counsel. Experienced counsel can guide you on the best methodologies to use and assist in interpreting the results of any audit. Additionally, using counsel can potentially shield certain results from discovery under the attorney-client privilege. This can be especially beneficial if you identify changes that need to be made to your process to minimize any unintentional impacts.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Fisher Phillips | Attorney Advertising

Written by:

Fisher Phillips