Emerging Themes Relating to the Use of Artificial Intelligence in the Workplace

King & Spalding

Series 4, 10 in 10: Issue 1

Companies are increasingly exploring the use of artificial intelligence (AI) and automated decision-making technologies to manage human capital, including for recruitment, hiring and performance purposes. Vendors offer automated tools that assist employers with hiring (locating talent, sifting through job applications, interviewing candidates, communicating with applicants, providing analytics on recruiting efforts and offering personalized benefits packages) and with managing and engaging their workforces (training, communicating company policies, identifying employees for promotions and monitoring employee engagement or performance). Although such AI tools make determinations based on seemingly neutral factors, there is a risk that their outputs may unintentionally discriminate against protected groups, resulting in unlawful disparate impacts. Additionally, the use of AI may exacerbate privacy concerns relating to data collection, maintenance and unauthorized use.

In reaction to these developments, legislative efforts have accelerated as federal, state and local governments express increasing interest in regulating this new frontier. For example, the Equal Employment Opportunity Commission (EEOC) recently launched its Artificial Intelligence and Algorithmic Fairness Initiative “to ensure that the use of software, including [AI], machine learning, and other emerging technologies used in hiring and other employment decisions comply with the federal civil rights laws that the EEOC enforces.” The agency has also identified the discriminatory use of AI and automated systems as a priority in its Draft Strategic Enforcement Plan for 2023-2027.

Additionally, in October 2022, the White House Office of Science and Technology Policy published the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, which sets forth “a set of five principles and associated practices to help guide the design, use, and deployment of automated systems to protect the rights of the American public in the age of artificial intelligence.” The Blueprint focuses on AI’s safety and effectiveness, protecting against discrimination, data privacy, notice issues, and individuals’ right to access human alternatives.

EMERGING THEMES GOVERNING THE USE OF AI IN THE WORKPLACE

A review of recently proposed and enacted legislation reveals three key themes that will impact how companies use AI in the workplace moving forward: (1) transparency, (2) anti-discrimination and (3) system and data integrity.

(1) Transparency

A common aspect of AI legislation is a requirement that employers provide notice to employees or candidates when AI is used to make employment decisions. Notice is a key requirement of New York City's law regulating AI screening and Illinois' law targeting AI used to analyze video interviews.1

In addition to notice, employers may be required to obtain consent. For example, consent is a significant feature of Maryland’s recent law, which requires employers using facial recognition during the interview process to obtain signed written consent from candidates.

Finally, employers may be required to provide additional disclosures when taking an adverse employment action (e.g., denial of employment eligibility) based in part on the use of AI. Earlier this year, Washington, D.C. introduced a bill that would require employers to disclose the factors used in employment determinations involving AI while also establishing a candidate’s right to submit corrections and request a re-evaluation by a human.

(2) Anti-Discrimination

Given well-publicized concerns regarding AI’s potential for bias, it is no surprise that proposed and enacted legislation generally prohibits the use of AI in a manner that results in unlawful discrimination.2 At the federal level, the EEOC recently provided updated technical assistance on employee selection procedures under Title VII of the Civil Rights Act of 1964, which advises employers of their responsibility to evaluate their algorithmic decision-making tools for disparate impact, including by considering the “four-fifths” rule. The four-fifths rule is a “general rule of thumb” whereby the selection rate for one group is considered “substantially different than the selection rate of another group” if the ratio of the selection rates “is less than four-fifths (or 80%).” The EEOC has also cautioned employers against using algorithmic decision-making tools in a way that violates the Americans with Disabilities Act by advising employers to avoid “screen[ing] out an individual with a disability” who could otherwise do the job with a reasonable accommodation, or “violat[ing] restrictions on disability-related inquiries and medical examinations.”3 At the state level, the Illinois General Assembly has similarly introduced a bill prohibiting employers from considering an applicant’s zip code to avoid potential discrimination in the hiring process, while California’s Civil Rights Division is revising its rules to explicitly state that the use of AI or automated systems can violate existing anti-discrimination law.4
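The four-fifths rule described above reduces to simple arithmetic: divide the lower group's selection rate by the higher group's and compare the ratio to 0.80. The sketch below illustrates the calculation with hypothetical applicant counts; it is an illustration of the rule of thumb only, not legal guidance, and a ratio above 0.80 does not by itself establish compliance (nor does one below it establish a violation).

```python
# Illustrative sketch of the EEOC's "four-fifths" rule of thumb.
# Group labels and counts are hypothetical.

def selection_rate(selected, applicants):
    """Fraction of applicants who were selected."""
    return selected / applicants

def four_fifths_check(rate_a, rate_b):
    """Return the impact ratio (lower rate / higher rate) and whether it
    falls below the 80% threshold that may indicate a "substantially
    different" selection rate under the rule of thumb."""
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio < 0.8

# Hypothetical: a screening tool selects 48 of 80 applicants in
# group A (60%) and 12 of 40 applicants in group B (30%).
rate_a = selection_rate(48, 80)   # 0.60
rate_b = selection_rate(12, 40)   # 0.30
ratio, flagged = four_fifths_check(rate_a, rate_b)
print(f"Impact ratio: {ratio:.2f}, flagged: {flagged}")
# Impact ratio: 0.50, flagged: True
```

Here the ratio is 0.30 / 0.60 = 0.50, well below four-fifths, so the tool's results would warrant closer scrutiny under the EEOC's guidance.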

(3) System and Data Integrity

To protect against bias and privacy concerns, legislative and administrative bodies have enacted or proposed measures requiring employers to regularly assess the effectiveness and integrity of AI systems, including through audits, testing, reporting and recordkeeping. In some jurisdictions, employers’ regular audits and testing of AI systems must assess whether the system results in bias towards certain individuals, privacy risks (e.g., with respect to identifying information) or other known harm or material negative impact.5 Employers may be required to publicly disclose the results of these audits, report the results to specified governmental agencies and/or retain records of completed audits for a specific period.6

Additionally, California’s Civil Rights Council is proposing new rules that would permit employers to use evidence of anti-bias testing as a defense to allegations of a discriminatory adverse impact of AI or automated systems.7

COMPANIES SHOULD ANTICIPATE FURTHER REGULATIONS ON THE USE OF AI IN THE WORKPLACE

As jurisdictions continue to enact and introduce new AI-related legislation, employers using AI must take steps to provide transparent notice and to avoid discriminatory impacts. Employers should also stay abreast of new developments in this rapidly evolving area of the law at the federal, state and local levels. In the absence of comprehensive federal regulation, employers will be required to navigate a patchwork of state and local laws with varying compliance requirements.

1See Automated Employment Decision Tools Law, N.Y.C. Admin. Code § 20-871(b) (“N.Y.C. AEDT Law”) (requiring notice to employees or candidates who have been screened using an automated employment decision tool); Artificial Intelligence Video Interview Act (“IL AI Video Interview Act”), 820 Ill. Comp. Stat. 42/5 (requiring disclosure to applicants that AI may be used to analyze their video interviews).

2U.S. Equal Emp. Opportunity Comm’n, The Americans with Disabilities Act and the Use of Software, Algorithms, and Artificial Intelligence to Assess Job Applicants and Employees, EEOC.gov (“EEOC, ADA, Algorithms, and AI”); Cal. Civ. Rights Council, Proposed Modifications to Emp. Regs. Regarding Automated-Decision Systems (“CA ADS Proposal”) (proposed); Stop Discrimination by Algorithms Act of 2023 (“SDAA”), D.C. Council, B25-114 (D.C. 2023).

3EEOC, ADA, Algorithms, and AI.

4Limit Predictive Analytics Use Bill, H.B. 3773, 103rd Gen. Assemb. (Ill. 2023); CA ADS Proposal, Cal. Code Regs. tit. 2, § 11009(f) (proposed).

5N.Y.C. AEDT Law § 20-871 (requiring automated employment decision tools to undergo a bias audit within one year of use); 6 RCNY § 5-301 (eff. July 5, 2023) (same); H.R. 6580, 117th Cong. § 4(a) (introduced).

66 RCNY § 5-303 (eff. July 5, 2023) (requiring the audit summary to be posted on the employment section of the company’s website for at least six months after the latest use of an automated employment decision tool); IL AI Video Interview Act, 820 ILCS 42/20 (requiring employers who rely solely upon AI analysis of video interviews to determine selection for in-person interviews to report applicant demographic data annually); SDAA, D.C. Council, B25-114, Sec. 7 (D.C. 2023) (would require covered entities to submit bias audits annually and to maintain records for at least five years); CA ADS Proposal, Cal. Code Regs. tit. 2, § 11013(c) (would require data from automated decision systems to be maintained for four years).

7CA ADS Proposal, Cal. Code Regs. tit. 2, § 11009(f).

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© King & Spalding
