3 AI Bills in Congress for Employers to Track: Proposed Laws Target Automated Systems, Workplace Surveillance, And More

Fisher Phillips

Employers that use artificial intelligence – and developers that create AI systems – could be subject to extensive new laws under several bills introduced by federal legislators. While much of the existing legal landscape on AI centers on broad, overarching principles, Congress is now considering bills that home in on more specific issues like the workplace. We’ll outline the three bills that employers should care about most, covering issues ranging from overreliance on automated decision systems – or “robot bosses” – to workplace surveillance – or “spying bosses.”

Existing Federal AI Rules and Initiatives

Over the past several years, the federal government has ramped up its efforts to govern the development, design, and usage of AI. Here’s a sample of the laws, guidance, and standards already in place:

  • The AI in Government Act (enacted in 2020) requires the U.S. Office of Personnel Management to identify the skills and competencies needed for AI-related federal positions. The National AI Initiative Act (enacted in 2021) establishes an overarching framework for a national AI strategy, along with the federal offices and task forces to implement it. The AI Training Act (enacted in 2022) requires the Director of the U.S. Office of Management and Budget to establish or provide an AI training program for the federal acquisition workforce.
  • The EEOC’s AI and Algorithmic Fairness Initiative (launched in 2021) aims to ensure that AI tools used for hiring and other employment decisions comply with federal equal employment opportunity laws. EEOC guidance issued in 2022 makes it clear that employers’ use of software, algorithms, and AI for assessing job applicants and employees may violate the Americans with Disabilities Act. And another EEOC technical assistance document released last year warns employers that the agency will apply long-standing legal principles in an effort to find possible Title VII violations related to the use of AI in employment-related actions.
  • The Executive Order on Safe, Secure, and Trustworthy AI (issued in 2023) contains new AI standards covering nearly every aspect of our daily lives, including many employment-related items such as initiatives to prevent AI-based discrimination. The executive order built on the White House’s Blueprint for an AI Bill of Rights (released in 2022). We previously covered the key employer takeaways from both the executive order and the blueprint.

Proposed New Rules: Top 3 Bills Employers Should Know About

1. No Robot Bosses Act (S. 2419), introduced by Sen. Bob Casey (D-PA)

The aptly named “No Robot Bosses Act” would ban employers from relying exclusively on automated decision systems (ADS) to make “employment-related decisions” – a term broadly defined to include decisions from the recruiting stage through termination and everything in between (such as pay, scheduling, and benefits). The bill would protect not only employees and applicants but also independent contractors.

Employers would even be barred from using ADS output to make employment-related decisions unless certain conditions are met (such as the employer independently supporting that output through meaningful human oversight). The bill would also impose additional requirements on employers (for example, training employees on how to use ADS) and establish a Technology and Worker Protection Division within the Department of Labor.

2. Stop Spying Bosses Act (S. 262), introduced by Sen. Bob Casey (D-PA)

The “Stop Spying Bosses Act” targets (as its title suggests) invasive workplace surveillance. Technology that tracks employees – from their activity to even their location – is growing more common. This bill would require employers that engage in surveillance (such as employee tracking or monitoring) to disclose that fact to employees and applicants. The disclosure would have to be made publicly and in a timely manner, detailing the data being collected and how the surveillance affects the employer’s employment-related decisions.

The bill also would:

  • ban employers from collecting sensitive data, such as data collected while an individual is off duty or data that interferes with union organizing; and
  • establish a new Privacy and Technology division at the Department of Labor to enforce and regulate workplace surveillance.

3. Algorithmic Accountability Act (S. 2892), introduced by Sen. Ron Wyden (D-OR)

A proposed “Algorithmic Accountability Act” seeks to regulate how companies use AI to make “critical decisions,” including those that significantly affect an individual’s employment. For example, companies would be required to:

  • assess the impacts of automated decision systems when making critical decisions – which would include identifying (among many other factors) any biases or discrimination; and
  • provide related ongoing training and education for all relevant employees, contractors, or other agents.

The Federal Trade Commission (FTC) would be required to create regulations to carry out the purpose of the bill.

What Other Bills Are Under Consideration?

Here’s a sample of other types of bills that have been introduced:

Federal AI Framework

Proposed bipartisan legislation would provide a national framework for bolstering AI innovation while strengthening transparency and accountability standards for high-impact AI systems. Another comprehensive bill would establish guardrails for AI, create an independent oversight body, and hold AI companies liable – through entity enforcement and private rights of action – when their AI systems cause certain harms, such as privacy breaches or civil rights violations.

AI Labeling and Deepfake Transparency

One bill aims to protect consumers by requiring developers of AI systems to include clear labels and disclosures on AI-generated content and interactions with AI chatbots. Another bill would require similar disclosures from developers and require online platforms to label AI-generated content.

Labeling deepfakes is “especially urgent” this year, according to a press release from one of the bill’s cosponsors, because at least 63 countries (representing nearly half the world’s population) are holding elections in 2024 in which “AI-generated content could be used to undermine the democratic process.”

AI Cybersecurity and Data Privacy Risks

Several bills target cybersecurity and data privacy issues, including a bill that would make it an unfair or deceptive practice (subject to FTC enforcement) for online platforms to fail to obtain consumer consent before using their personal data to train AI models.

What’s Next

We’ll have a much better view later this summer of the chances of any of these proposals becoming law.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Fisher Phillips | Attorney Advertising
