10-Step Plan for Employers Using Artificial Intelligence in Employment Processes

Cooley LLP

Artificial intelligence has transformed the way we live, work and even think. While AI offers seemingly endless potential benefits in the workplace – including improvements in efficiency, cost cutting and innovation – employers must balance those benefits against the legal risks of using AI tools in employment processes. Employers using these tools must also keep pace with regulatory and technological developments in a rapidly evolving space.

Below, we’ve outlined a 10-step plan to help employers using AI-based tools in employment decision-making and related processes mitigate the risks of those tools. We’ve focused on the broad umbrella of tools powered by AI – including those using machine learning and/or natural language processing – that are used in employment processes. These tools have drawn scrutiny from agencies such as the US Equal Employment Opportunity Commission (EEOC) because some incorporate algorithmic decision-making at different stages of the employment process.

1. Identify the technology

As a preliminary matter, employers need to identify the AI technologies already used in their employment decision-making processes, how those technologies are being used, and what technologies they may want to implement in the future. According to EEOC Chair Charlotte Burrows, more than 80% of employers use AI in some of their employment decision-making processes, but many employers may not realize the ubiquity and broad scope of tools using such technologies. For example, AI technology may be used in sourcing and screening candidates, interviewing, onboarding, performance management, succession planning, talent management, and even diversity, equity and inclusion (DEI) activities. Some examples include résumé scanners, employee engagement/monitoring software, virtual training programs, “virtual assistants” or “chatbots,” video interviewing software, “job fit” or “cultural fit” testing software, and trait- or ability-revealing applicant gaming systems.

2. Understand the role of human oversight

It is critical for employers to understand the role of human oversight in the use of AI tools. Employers should ensure that a tool does not replace human judgment and that any final decisions continue to be made by HR or management. Human oversight is not only advisable from a legal perspective; it also may mitigate the distrust and employee morale issues that can arise from concerns about overreliance on AI technologies in employment decision-making.

3. Vet vendors, tools and data

Vetting the vendor

The explosion of AI tools designed for use in employment processes means that vendors need to be carefully vetted as a threshold matter. Employers may want to consider whether the vendor’s developers receive any training on detecting and preventing bias in the design and implementation of such tools. Employers also may wish to consider whether vendors regularly consult with diversity, labor or other outside experts to address and prevent bias issues, and they should be wary of claims that a tool is “bias-free” or “EEOC-compliant,” as these representations have no legal effect. In addition, employers should take a close look at any purchasing contracts with vendors, with particular focus on how potential liability in connection with the tool’s use will be allocated.

Vetting the tool

Employers should thoroughly vet any tool they wish to implement, including understanding how the tool works and how it reaches any recommendations or conclusions. As a preliminary matter, employers should consider the tool’s track record, including whether and for how long it has been used by other employers and the purposes for which it has been used. Employers also should understand whether and how the tool was developed with individuals with physical and mental disabilities in mind – and ask whether any interface is accessible to individuals with disabilities, whether materials presented are available in alternative formats, and whether the vendor attempted to determine whether the algorithm disadvantages individuals with disabilities, such as where characteristics measured by the tool are correlated with certain disabilities. Some tools may improperly “screen out” individuals with disabilities (including visual disabilities), and employers should ask vendors how the tool mitigates or provides accommodations for that issue. For example, screen outs can occur if a tool, such as a chatbot, is programmed to reject all applicants who have gaps in their employment history, when such gaps may have resulted from disability-related reasons.
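To make the screen-out risk concrete, the sketch below contrasts a blanket auto-rejection rule with a design that routes borderline cases to a human reviewer. It is a hypothetical illustration only – the class, field names and six-month threshold are invented, and the screening logic inside any real vendor tool will be far more complex.

```python
# Hypothetical illustration of the "screen out" problem described above.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    employment_gap_months: int

def naive_screen(applicant: Applicant) -> str:
    # Problematic: auto-rejects on an employment gap alone, whatever its cause
    return "reject" if applicant.employment_gap_months > 6 else "advance"

def safer_screen(applicant: Applicant) -> str:
    # Preferable: routes the gap to a human reviewer instead of deciding
    return "human review" if applicant.employment_gap_months > 6 else "advance"

candidate = Applicant(name="A. Candidate", employment_gap_months=14)
print(naive_screen(candidate))  # reject - a potential disability-related screen out
print(safer_screen(candidate))  # human review
```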

Vetting the data

Understanding the data an AI tool has been trained on is a critical part of vetting the tool. Prior to using a tool, employers should mitigate any risk that the tool acts as a “proxy” for impermissible discrimination. Such discrimination can occur where the data a tool is trained on is itself biased (e.g., an employer’s existing nondiverse employee population), leading to potentially biased results (i.e., the “garbage in, garbage out” problem). Employers also may want to ask what statistical analyses have been run on the tool and how those analyses were selected.
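As a minimal sketch of one such check, the snippet below summarizes the demographic composition and historical hire rates in a training set. The records and group labels are invented; a skewed historical record of the kind shown is exactly how the “garbage in, garbage out” problem arises, and any real review should be designed with legal counsel and qualified statisticians.

```python
# Hypothetical check of the composition of data used to train a hiring tool.
from collections import Counter

# Invented training records standing in for a vendor's training set
training_records = [
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "A", "hired": True},
    {"group": "B", "hired": False},
    {"group": "B", "hired": True},
]

counts = Counter(record["group"] for record in training_records)
hires = Counter(record["group"] for record in training_records if record["hired"])

for group in counts:
    share = counts[group] / len(training_records)
    hire_rate = hires[group] / counts[group]
    print(f"Group {group}: {share:.0%} of training data, historical hire rate {hire_rate:.0%}")
```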

4. Assemble the right team

Several federal agencies recently asserted that many automated systems operate as “black boxes” whose “internal workings are not clear to most people and, in some cases, even the developer of the tool,” and that this “lack of transparency often makes it all the more difficult for developers, businesses, and individuals to know whether an automated system is fair.” To mitigate this asserted black box problem in the workplace, employers should assemble a multidisciplinary team tasked with implementing and monitoring any AI tool. This team should comprise not only members from the human resources, legal, communications, marketing and DEI functions, but also members of IT, including those with backgrounds in software or data engineering. Assembling the right team with the appropriate experience will help ensure all players understand, are aligned on, and are able to explain the business goals tied to using AI tools and how a tool reaches particular decisions or predictions. Employers also may want to designate a team member to monitor trends and technology developments in this evolving space.

5. Know the applicable laws

While federal government regulation, for the most part, is still playing catch-up to the rapid advancement of AI technologies, employers using such technologies already are subject to numerous federal and state anti-discrimination, intellectual property, cybersecurity and data privacy laws. In the employment space in particular, federal anti-discrimination laws – including Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, and the Age Discrimination in Employment Act – collectively prohibit disparate treatment discrimination (intentional discrimination against members of a protected class) and disparate impact discrimination (facially neutral policies or practices that discriminate in practice against members of a protected class). In addition, US states and local jurisdictions may impose more protective anti-discrimination laws. The use of AI tools in certain instances can trigger compliance risks under other federal employment laws, such as the National Labor Relations Act and the Fair Credit Reporting Act. Federal contractors should note that the Office of Federal Contract Compliance Programs recently revised its scheduling letter and itemized listing to require employers to provide information and documents relating to the use of “artificial intelligence, algorithms, automated systems or other technology-based selection procedures.”

Some jurisdictions specifically regulate certain AI technologies used in the workplace. For example, as we reported in a May 2023 client alert, New York City recently began enforcing the City’s Automated Employment Decision Tools (AEDT) Law, which imposes several requirements for employers using a qualifying AEDT, including conducting an independent bias audit of the AEDT and making available certain information about data collected by the tool. The Illinois Artificial Intelligence Video Interview Act also imposes notification, consent, deletion and reporting requirements for jobs based in Illinois. Maryland House Bill 1202 similarly requires applicant consent for employers to use facial recognition technology during pre-employment job interviews. Employers navigating this complex area will need to ensure that their use of AI tools complies with all applicable laws.

6. Have appropriate policies in place

Employers should consider whether to implement policies identifying and addressing appropriate use of AI technologies in employment processes. In a policy, employers should be transparent about how the tool operates, what data is being used, and how – if at all – the tool assists with decision-making processes. With clear language identifying how such tools are used, employees and applicants can be better informed, and employment decisions such as hiring and promotion can be perceived as more fair. Any applicable policies should be communicated and updated regularly.

7. Implement training and education

Any AI use policies should be communicated to employees, preferably through training and education programs. Management-level employees also should receive education and training on AI tools, including applicable legal requirements regulating the use of such tools, the potential for tools to perpetuate bias or discrimination if used improperly, the importance of human oversight, and concerns regarding incorrect or misleading outputs.

8. Ensure accommodations are available

Employers using AI tools should prepare their managers and HR teams to recognize and evaluate accommodation requests from applicants and employees. Even though some laws, such as New York City’s AEDT Law, do not explicitly require that employers provide an accommodation (only that individuals be notified that they may request an accommodation or alternative selection process), accommodations are required under the federal ADA and under New York City and state human rights laws. The EEOC and the White House have cautioned against screening out candidates who cannot access an AI tool and are not offered a human alternative.

9. Conduct regular testing and audits

Once deployed, AI tools should be evaluated and regularly monitored to ensure that business objectives for using the tool continue to be met, that the tool is implemented in a fair and unbiased manner, and that any needed adjustments are made. Even if a tool was audited before being put into use, employers should continue to conduct such audits at least annually, as the implementation of the tool in any particular workforce can result in unforeseeable issues. For instance, the White House’s Blueprint for an AI Bill of Rights suggests that automated systems be regularly monitored for “algorithmic discrimination that might arise from unforeseen interactions of the system with inequities not accounted for during the pre-deployment testing, changes to the system after deployment, or changes to the context of use or associated data.” Audits also should be conducted in coordination with legal counsel.
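As one illustration of what such testing can involve, the sketch below computes selection rates and impact ratios of the sort reported in bias audits (for example, under New York City’s AEDT Law), flagging ratios below the EEOC’s four-fifths rule of thumb. The records and group labels are hypothetical, a low ratio is a trigger for closer scrutiny rather than proof of discrimination, and a real audit should be designed with legal counsel and qualified statisticians.

```python
# Hypothetical impact-ratio calculation of the kind reported in bias audits.
from collections import defaultdict

# Invented records: (group label, whether the tool selected the candidate)
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

selected = defaultdict(int)
totals = defaultdict(int)
for group, was_selected in outcomes:
    totals[group] += 1
    selected[group] += int(was_selected)

rates = {group: selected[group] / totals[group] for group in totals}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    # Under the EEOC's four-fifths guideline, a ratio below 0.8 commonly
    # triggers closer scrutiny; it is not, by itself, proof of bias.
    status = "review" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({status})")
```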

10. Pay attention to the rapidly evolving AI landscape

Employers, especially those operating in multiple jurisdictions, need to stay up to date on potential laws and regulations regarding AI in employment processes. For example, new laws similar to – or even more stringent than – NYC’s AEDT Law have been proposed in New York state, New Jersey, California, Massachusetts and Washington, DC, while other states have created task forces to advise on and propose regulations governing such tools in the employment context. In California, a new state agency – the California Privacy Protection Agency – is tasked with addressing automated decision-making technology. Although there is no comprehensive federal legislation yet, several bills and frameworks are under consideration, such as Senate Majority Leader Chuck Schumer’s SAFE Innovation framework and the No Robot Bosses Act.

Conclusion

AI tools will continue to revolutionize the workplace. Employers should keep on top of these rapid developments and implement best practices for mitigating legal risk in using such tools. 

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Cooley LLP | Attorney Advertising
