The Good, the Bad and the Ugly of AI in the Workplace

[co-authors: Mark A. Fahleson, Tara L. Paulson, Anne R. Jenkins]*

Artificial intelligence “is one of the most profound things that humanity is working on. It’s more profound than... electricity or fire.”

-Sundar Pichai, Chief Executive Officer, Google

Charged with the mission of operating beyond the boundaries of civilization with minimal support and no communication from higher authority, they lived and often died by the motto, “Order first, then law will follow.”

-Thomas W. Knowles, They Rode for the Lone Star

Successful businesses strive to stay ahead of the curve and competition by employing innovative strategies. Law firms and lawyers are often tasked with advising those businesses so they can avoid a legal ambush. The hottest innovation—artificial intelligence (AI)—is being employed by organizations across the globe to increase productivity, create efficiencies, and stimulate creativity. As lawyers and business advisers, we can’t be tenderfoots or yellow bellies. In the words of John Wayne, “we’re burnin’ daylight!” Now is the time to strap on your boots, hop into the saddle, and giddy-up to traverse the AI frontier.

Here is what we know: AI is rapidly impacting the workplace in unprecedented ways. Currently, the U.S. seems to have chosen a Wild West approach, with AI being used with little regulation and no oversight. Where does this leave employers, whose employees are using AI with increasing frequency, whether employers want to admit it or not? Can employers navigate and establish some semblance of order to mitigate legal risks while lawmakers and regulators play “catch-up”? This article attempts to map out the AI terrain with respect to workplace law and highlights developing legal issues in front of employers or just over the horizon.

What Is AI?

Before navigating how employers can play a role in establishing order in the AI “corral,” we must first survey the artificial intelligence frontier. Broadly speaking, AI refers to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention.” Shukla Shubhendu & Jaiswal Vijay, Applicability of Artificial Intelligence in Different Fields of Life, INT’L J. OF SCI. ENG’G AND RSCH., Sept. 2013, at 28. This technology makes “decisions which normally require a human level of expertise” and helps people anticipate problems or deal with them as they arise. Id. In simpler terms, AI consists of tools that perform functions we usually associate with human minds. Generally, three characteristics constitute the essence of AI: (1) intentionality; (2) intelligence; and (3) adaptability. Darrell M. West & John R. Allen, How artificial intelligence is transforming the world, Brookings (Apr. 24, 2018), https://www.brookings.edu/articles/how-artificial-intelligence-is-transforming-the-world/.

AI algorithms are designed by humans with intentionality and often use real-time data to make instant decisions. Id. Unlike passive machines, which are capable only of mechanical or predetermined responses, AI uses inputs to combine information from a variety of sources, analyze the material instantly, and act on the insights derived from that data. Id. Additionally, AI produces intelligent decision-making by using machine learning and data analytics to analyze data and identify underlying trends. Id. The third quality that marks AI systems is the ability to learn and adapt as they compile information and make decisions. Id.

While AI has been evolving since the 1950s, it is recent advancements in generative AI, such as ChatGPT, that are creating all of the hype. Generative AI can generally be described as artificial intelligence that is capable of creating something new: it takes a data set, learns its underlying patterns, and uses them to generate new data. B. Marr, The Difference Between Generative AI and Traditional AI: An Easy Explanation for Anyone, Forbes (July 24, 2023).

What is ChatGPT?

ChatGPT (https://chat.openai.com/) is a generative AI chatbot developed by OpenAI that was publicly released in November 2022. The name "ChatGPT" combines "Chat," referring to its chatbot functionality, and "GPT," which stands for Generative Pre-Trained Transformer. ChatGPT interacts with users in a conversational way, and can follow complex instructions in natural language, solve difficult problems with accuracy, answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. Currently, ChatGPT 3.5 is offered free of charge with a more robust version available via monthly subscription.

Most employers don’t know it, but many of their employees are already using ChatGPT to perform work duties. According to a March 2023 survey of U.S. employees conducted by TalentLMS, 70 percent of those surveyed have used ChatGPT for work purposes. Eri Panselina, Survey: AI is the future, but only 14% of employees are being trained on the tools (Aug. 27, 2023), https://www.talentlms.com/blog/ai-at-work-chatgpt-survey/. Survey participants who used ChatGPT at work found it most useful for writing content (36%), analyzing data and information (33%), customer support (30%), brainstorming and developing new ideas (27%), scheduling and task prioritization (23%), navigating tough conversations (22%), writing code (20%), and making strategic decisions (19%).

ChatGPT and other generative AI models aren’t going away and will only become more robust. If our frontier now includes these AI models, we must turn our attention to how these models impact businesses.

Current AI Use in the Workplace

Many employers already use algorithmic decision-making tools to assist them in making employment decisions, including recruitment, selection, retention, and performance monitoring. These tools are sometimes developed internally but are more often supplied by third-party vendors. Employers utilize these tools with the goal of saving time and effort, increasing objectivity, and optimizing employee performance.

Examples of algorithmic decision-making tools currently in use include:

  • Job applicant tracking systems (ATS) that screen job applications by scanning resumes and cover letters for specific keywords and qualifications, matching candidates with job requirements, and providing insights for hiring decisions or making “cut” decisions outright (a simplified sketch of this kind of keyword screen follows this list).
  • “Virtual assistants” or “chatbots” that ask applicants about their qualifications and automatically reject those who do not meet predefined requirements.
  • Video-interviewing software that analyzes job applicants’ responses, facial expressions, and speech patterns to evaluate skills and predict job performance.
  • Employee monitoring tools that use algorithms to monitor employee activities, such as internet usage, email communications, or keystrokes. These systems have become prevalent in the COVID-19/work-from-home (WFH) environment.
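To make the mechanics concrete, here is a minimal sketch of a keyword-based resume screen of the kind described in the first bullet above. The keywords, weights, and cutoff are hypothetical, and real vendor tools are considerably more sophisticated; this is an illustration, not any particular product’s implementation.

```python
# Minimal sketch of a keyword-based ATS resume screen.
# Keywords, weights, and the cutoff below are hypothetical.

REQUIRED_KEYWORDS = {"python": 3, "sql": 2, "project management": 2}
CUTOFF = 4  # hypothetical minimum score needed to advance

def score_resume(resume_text: str) -> int:
    """Sum the weights of the keywords found in the resume text."""
    text = resume_text.lower()
    return sum(weight for keyword, weight in REQUIRED_KEYWORDS.items()
               if keyword in text)

def screen(resume_text: str) -> bool:
    """Return True if the applicant advances; False means an automatic 'cut'."""
    return score_resume(resume_text) >= CUTOFF
```

Note that an applicant who describes the same experience in different words is cut automatically; this rigidity is one way a facially neutral screen can produce the disparate impact discussed later in this article.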

Recent Attempts to Rope In the Wild, Wild West of AI

With more and more employers using AI-based tools to assist in human resource functions, the Equal Employment Opportunity Commission (EEOC) and National Labor Relations Board (NLRB) have stepped in as “sheriffs” to regulate the use of AI in the workplace.

Employment Discrimination

In 2021, the EEOC launched the Artificial Intelligence and Algorithmic Fairness Initiative to ensure that the use of software, including AI, machine learning, and other technologies, in employment decisions complies with the federal civil rights laws the EEOC enforces. Artificial Intelligence and Algorithmic Fairness Initiative, U.S. Equal Emp. Opportunity Comm’n, https://www.eeoc.gov/ai (last visited Aug. 3, 2023). As part of the initiative, the EEOC has released technical assistance documents to provide guidance on these technologies and their legal implications.

It is the EEOC’s position that employers are generally liable for the outcomes of using algorithmic decision-making tools in selection procedures to make employment decisions. The EEOC uses the term “algorithmic decision-making tool” to refer broadly to all types of systems, such as those described above, that might include AI used in the employment process. Selection procedures are the methods employers use to make employment decisions such as hiring, promoting, and firing applicants or current employees.

AI and the Americans with Disabilities Act

On May 12, 2022, the EEOC issued its first technical guidance, titled “The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees.” U.S. Equal Emp. Opportunity Comm’n (May 12, 2022), https://www.eeoc.gov/laws/guidance/americans-disabilities-act-and-use-software-algorithms-and-artificial-intelligence. On July 26, 2023, the EEOC issued a narrower technical guidance addressing visual disabilities, entitled “Visual Disabilities in the Workplace and the Americans with Disabilities Act.” U.S. Equal Emp. Opportunity Comm’n (July 26, 2023), https://www.eeoc.gov/laws/guidance/visual-disabilities-workplace-and-americans-disabilities-act#q16.

In its guidance, the EEOC identifies three common ways that an employer’s use of algorithmic decision-making tools could violate the ADA:

  1. The employer does not provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm;
  2. The tool intentionally or unintentionally “screens out” an individual with a disability; and
  3. The tool violates the ADA’s restrictions on disability-related inquiries and medical examinations. 42 U.S.C. § 12112(b).

The EEOC has also identified “promising practices” that, although they do not carry the force of law, employers may use to comply with the ADA when using algorithmic decision-making tools. They include:

  1. Training staff to recognize and process requests for reasonable accommodation as quickly as possible.
  2. Training staff to develop or obtain alternative means of rating job applicants and employees when the current process is inaccessible or otherwise unfairly disadvantages someone who has requested a reasonable accommodation because of a disability.
  3. If the employer is using a third-party vendor, such as a testing company, asking the vendor to forward all requests for accommodation promptly to be processed by the employer in accordance with ADA requirements. Alternatively, the employer could enter an agreement with the vendor requiring it to provide reasonable accommodations on the employer’s behalf in accordance with the ADA.
  4. Using tools designed to be accessible to individuals with as many different kinds of disabilities as possible.
  5. Informing all job applicants and employees who are being rated that reasonable accommodations are available for individuals with disabilities and providing clear and accessible instructions for requesting such accommodations.
  6. Describing, in plain language and in accessible formats, the traits that the algorithm is designed to assess, how those traits are assessed, and the variables or factors that may affect the rating (thus alerting those with disabilities that the technology might not accurately assess their qualifications).
  7. Asking the vendor, before purchasing, to confirm that the tool does not ask job applicants or employees questions that are likely to elicit information about a disability or seek information about an individual’s physical or mental impairments or health, unless such inquiries are related to a request for reasonable accommodation.

AI and Title VII Disparate Impact Discrimination

On May 18, 2023, the EEOC released technical guidance entitled, “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” U.S. Equal Emp. Opportunity Comm’n, https://www.eeoc.gov/select-issues-assessing-adverse-impact-software-algorithms-and-artificial-intelligence-used (last visited Aug. 3, 2023). According to the guidance, selection procedures, including the use of these tools, should be evaluated under Title VII and, specifically, the disparate impact theory. Title VII prohibits employment discrimination based on race, color, religion, sex, and national origin. 42 U.S.C. § 2000e-2(a). Disparate impact occurs when a neutral test or selection procedure disproportionately excludes individuals with a protected characteristic, and the test or selection procedure is not “job related for the position in question and consistent with business necessity.” Id. at § 2000e-2(a)(2), (k).

Under Title VII, disparate impact liability arises when an employer: (1) uses a selection procedure that has a disparate impact on a protected characteristic; (2) cannot establish that the selection procedure is job-related for the position in question and consistent with business necessity; and (3) has a less discriminatory alternative procedure available. Id. at § 2000e-2(k). The EEOC relies upon the Uniform Guidelines on Employee Selection Procedures (Guidelines) under Title VII. 29 C.F.R. § 1607. These Guidelines provide a framework for employers to determine whether their tests and selection procedures are lawful under Title VII. Id. Under the Guidelines, employers can be held responsible under Title VII for the use of such tools, even if the tools are designed or administered by a third party.

Employers can mitigate these risks by monitoring for disparate impact when using algorithmic decision-making tools. According to the EEOC, employers should:

  1. Regularly monitor the use of algorithmic decision-making tools for adverse impact by checking whether the selection rate for one group is “substantially” different than the selection rate of another group. Selection rates are substantially different if the ratio of the lower rate to the higher rate is less than four-fifths (80%) (a worked example follows this list). 29 C.F.R. §§ 1607.4(D), 1607.16(B).
  2. If monitoring the use of a tool demonstrates an adverse impact, the employer should determine whether the use of the tool is job-related and consistent with business necessity. 42 U.S.C. § 2000e-2(k)(1); 29 C.F.R. § 1607.3(A).
  3. If the use of the tool is job-related and consistent with business necessity, the employer should still explore less discriminatory alternatives and adopt alternatives if available. See 42 U.S.C. § 2000e-2(k)(1)(A)(ii).
  4. If the employer is relying on a vendor or third party to develop or administer a tool, the employer should ask the vendor how they determine whether use of the tool causes a “substantially” different selection rate for individuals with a protected characteristic under Title VII. U.S. Equal Emp. Opportunity Comm’n, Compliance Manual Section 2 Threshold Issues § 2-III.B.2 (May 12, 2000), https://www.eeoc.gov/laws/guidance/section-2-threshold-issues#2-III-B-2.
  5. If, while developing the tool, the employer discovers that using it would have an adverse impact, the employer should reduce the impact or adopt an alternative tool. The new EEOC guidance explains that employers may be liable for failure to adopt a less discriminatory algorithm that was considered during the development process. See 42 U.S.C. § 2000e-2(k)(1)(A)(ii).
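The following is a worked example of the four-fifths comparison referenced in item 1 above, using hypothetical applicant counts:

```python
# Hypothetical worked example of the EEOC's four-fifths (80%) rule.
# Selection rate = number selected / number of applicants in the group.

applicants = {"group_a": 200, "group_b": 100}  # hypothetical counts
selected = {"group_a": 120, "group_b": 36}

rate_a = selected["group_a"] / applicants["group_a"]  # 0.60
rate_b = selected["group_b"] / applicants["group_b"]  # 0.36

# Compare the lower selection rate to the higher one.
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.60

# A ratio below 0.80 indicates a "substantially" different selection
# rate, i.e., possible adverse impact. See 29 C.F.R. § 1607.4(D).
print(f"ratio = {ratio:.2f}; adverse impact flagged: {ratio < 0.8}")
```

Here the ratio is 0.60, well below the four-fifths threshold, so the tool’s results would trigger the job-relatedness and less-discriminatory-alternative analysis described in items 2 and 3.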

AI and ADEA Disparate Impact Discrimination

The EEOC’s concerns regarding the discriminatory impact of AI tools are not limited to the ADA and Title VII. Similar claims can arise under the other laws within the EEOC’s jurisdiction, including the Age Discrimination in Employment Act of 1967 (ADEA), 29 U.S.C. § 621 et seq.

On May 5, 2022, the EEOC filed suit against three integrated companies that provide English-language tutoring services to students in China, alleging the defendant employer used application software that automatically rejected female applicants over the age of 55 and male applicants over the age of 60. EEOC v. iTutorGroup, Inc., et al., No. 1:22-cv-2565 (E.D.N.Y.). On August 9, 2023, the parties filed a joint notice of settlement and a request for approval and execution of a consent decree. This appears to be the first settlement of an AI discrimination lawsuit brought by the EEOC, and there are likely more to come.

National Labor Relations Board

The EEOC is not the only “sheriff” flashing its badge and targeting employer use of AI. The National Labor Relations Board (NLRB or Board), which enforces the National Labor Relations Act of 1935 (NLRA), 29 U.S.C. §§ 151-169, has identified certain emerging technologies employing AI as potentially running afoul of Section 7 of the NLRA.

Section 7 of the NLRA covers most private-sector employers, whether unionized or not. It guarantees employees “the right to... engage in... concerted activities for the purpose of... mutual aid or protection,” as well as the right to refrain from such activities. 29 U.S.C. § 157. Depending upon the partisan makeup of the five-member NLRB, the Board has read Section 7 expansively to protect such activities as employees discussing their wage rates with coworkers, see, e.g., Lowes Home Centers, L.L.C. v. NLRB, 850 F. App'x 886, 887 (5th Cir. 2021), and employee comments on social media regarding workplace complaints. Hispanics United of Buffalo, Inc., 194 L.R.R.M. (BNA) 1303, 2012–13 NLRB Dec. (CCH) P 15656, 2012 WL 6800769 (N.L.R.B. 2012).

On October 31, 2022, the General Counsel of the NLRB released Memorandum GC 23-02. Nat’l Lab. Rel. Bd., https://www.nlrb.gov/guidance/memos-research/general-counsel-memos (last visited Aug. 3, 2023). The memo, entitled “Electronic Monitoring and Algorithmic Management of Employees Interfering with the Exercise of Section 7 Rights,” urges the Board to find electronic monitoring and automated or algorithmic management practices illegal under settled Board law “if these practices interfere with protected activities under Section 7 of the [NLRA].” Id.

The memo opines that an employer violates the NLRA “where the employer’s electronic surveillance and management practices, viewed as a whole, would tend to interfere with or prevent a reasonable employee from engaging in activity protected by the [NLRA].” Id. Under the memo’s proposed framework, an employer can avoid a violation of the NLRA if it can establish that the practices at issue are narrowly tailored to address a legitimate business need that “outweighs” employees’ Section 7 rights. Id. The employer must also disclose to employees the technology used, the reason for its use, and how it uses the information obtained. Id. An employer is relieved of this obligation only if it can show “special circumstances” requiring “covert use” of the technology. Id.

Technologies subject to this memo include those used to monitor and manage employees, including wearable devices, security cameras, radio-frequency identification badges, GPS tracking devices, cameras, and computer software that takes screenshots, webcam photos, or audio recordings. Id. Like the EEOC guidance, the memo noted that technologies such as resume readers and other automated selection tools used during hiring and promotion may also be subject to GC 23-02. Id.

State and Local Regulation

In addition to federal agencies, state and local legislators and regulators have saddled up to corral employer use of AI. In 2023, several jurisdictions considered legislation to rein in the use of AI, including New York, Illinois, California, Connecticut, New Jersey, Rhode Island, Vermont, and Washington, D.C. Generally, these laws seek to regulate the use of automation and AI in hiring and employment decisions, such as by requiring bias audits and the posting of certain notices.

For example, in December 2021, New York City enacted the first law in the nation to expressly regulate the use of automation and AI in hiring decisions, although enforcement was delayed until July 2023. The law, known as NYC Local Law 144, requires employers that use AI software in employment decisions to audit those tools annually for potential race and gender bias and then publish the results on their websites. The published results are expressed as “impact ratios,” which parallel the selection-rate comparisons in the recent EEOC guidance. Employers in violation are subject to civil monetary penalties of $375 for the first violation and up to $1,500 per violation per day for subsequent violations.
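As a rough illustration, the rules implementing Local Law 144 generally define an impact ratio for selection-style tools as the selection rate for a category divided by the selection rate of the most selected category. The sketch below assumes that formulation and uses hypothetical numbers; consult the law and its implementing rules for the operative definitions.

```python
# Sketch of the impact-ratio calculation published in a Local Law 144
# bias audit, assuming the selection-rate formulation (each category's
# rate divided by the rate of the most selected category).
# All counts are hypothetical.

data = {                 # category: (applicants, selected)
    "male": (500, 150),
    "female": (480, 96),
}

rates = {cat: sel / n for cat, (n, sel) in data.items()}
top_rate = max(rates.values())  # rate of the most selected category

impact_ratios = {cat: round(rate / top_rate, 2) for cat, rate in rates.items()}
print(impact_ratios)  # {'male': 1.0, 'female': 0.67}
```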

Cuttin’ Off Employer Risk

With AI quickly becoming ubiquitous on an evolving legal range, at a minimum, lawyers advising businesses should consider the following:

  • Saddle up and take ChatGPT for a ride. In order to advise clients on AI’s legal implications, it is important that attorneys generally understand how it works, what it does, and what it doesn’t do. Start with ChatGPT. For example, ask the chatbot to prepare a performance improvement plan for an employee, identifying characteristics about the employee, the performance deficiencies, and expectations going forward. If that is too boring for you, ask ChatGPT to provide a response mirroring a specific voice, such as Matthew McConaughey. A sample prompt follows below.
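    By way of illustration only, a prompt along the following lines (all details hypothetical) captures the exercise:

    “Act as a human resources manager and draft a 60-day performance improvement plan for a customer service representative who has missed response-time targets for two consecutive quarters. Identify measurable goals, the support the company will provide, and a check-in schedule. Now rewrite the plan in the voice of Matthew McConaughey.”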

  • Stand lookout and keep watch. The law is playing catch-up with AI. Federal, state, and local governmental bodies and agencies will likely move expeditiously to regulate AI use by employers. In recent years, state and local governments have been more active than the federal government with respect to new employment laws and regulations. For multi-state employers, this poses additional challenges, making it even more critical for management-side attorneys to keep abreast of AI-related legal developments.
  • Counsel clients to consider implementing their own “law.” Employers can mitigate risks by adopting their own AI-specific employee policies outlining allowed and prohibited uses. For example, companies ranging from Amazon and Apple to Wells Fargo and Verizon have adopted policies prohibiting or restricting employee use of ChatGPT and other generative AI platforms. Aaron Mok, Amazon, Apple, and 12 other major companies that have restricted employees from using ChatGPT, Bus. Insider (July 11, 2023), https://www.businessinsider.com/chatgpt-companies-issued-bans-restrictions-openai-ai-amazon-apple-2023-7. When creating a policy, employers should consider: (1) Are there any permitted uses and, if so, what are they? (2) What uses are prohibited? (3) Who within the organization is the decisionmaker to whom employees can direct questions regarding permitted uses? In addition to adopting and publishing an employee policy, employers should consider regular training on the company’s policy and the risks of using ChatGPT and other external generative AI tools. An illustrative policy outline follows this list.
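As a starting point only, an intentionally generic policy outline addressing the questions above might cover the following; actual policy language should be tailored to the client and reviewed by counsel:

  • Permitted uses: e.g., brainstorming and first drafts of non-confidential content, subject to human review before use.
  • Prohibited uses: e.g., entering confidential, proprietary, or personal information into external AI tools; relying on AI output without verification.
  • Point of contact: the designated decisionmaker for questions about permitted uses.
  • Accountability: consequences for violations and periodic training on the policy.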

Conclusion

AI use by employers and employees ain’t goin’ away, making it imperative that business counsel understand the basics of AI—including generative AI chatbots like ChatGPT. While the current legal landscape looks a bit like the lawless Wild West, employment and labor law are already evolving to adapt to AI use in the workplace, with more regulation and litigation to come. As one generative AI chatbot put it when asked for cowboy slang describing our next steps: “Get a wiggle on, gird your loins, saddle up and ride!”

* Mark A. Fahleson is a partner with Rembolt Ludtke LLP in Lincoln, Nebraska, where he practices employment and labor law. Fahleson presently serves as a member of DRI’s Law Institute and recently served as Program Chair for the 2023 National Foundation for Judicial Excellence Judicial Symposium. He is a past Chair of the DRI Employment & Labor Law Committee.

Tara L. Paulson is a partner with Rembolt Ludtke LLP in Lincoln, Nebraska, where she practices employment and labor law. Tara is an active member of DRI’s Employment & Labor Law Committee and frequent presenter on workplace law issues, including workplace investigations. She currently serves as her firm’s Chief Executive Officer.

They are grateful for the assistance of Anne R. Jenkins, a 2L at the University of Nebraska-Lincoln College of Law, for her research and assistance with this article.
