AI Trends For 2026 - Navigating the Patchwork of Laws and Risks When Using AI Tools for Employment

MoFo Tech

Employers are increasingly deploying AI tools to streamline key HR functions, from resume screening to performance management. In 2026, these tools are no longer experimental; some companies are embedding them in core HR processes. At the same time, lawmakers, regulators, and plaintiffs’ attorneys are rapidly escalating scrutiny of employers’ use of AI, looking for ways to regulate and pursue claims related to these tools. Given the growing patchwork of AI laws and risks, employers need to understand how best to navigate the emerging compliance challenges and legal risks of using AI tools in their workforce.

The following are a few key issues that employers should watch in 2026.

Complying with Expanding and Conflicting AI Laws: Navigating the rapidly expanding and evolving patchwork of AI laws in the U.S. and abroad is a significant challenge for domestic and global companies alike. In the U.S., many states and cities have passed, or are considering, their own AI laws. Many of these laws take effect in 2026, including Illinois HB 3773 amending the Illinois Human Rights Act (effective January 1, 2026), Colorado’s Artificial Intelligence Act (effective June 30, 2026), and the amendments to the California Consumer Privacy Act (effective January 1, 2026). These laws impose numerous (and sometimes differing) requirements, including bias audits, risk assessments and policies, disclosure/notice obligations, opt-out rights, data retention, and government reporting.

Complicating matters, the Trump administration recently issued an Executive Order attempting to significantly restrict states from regulating AI in ways that conflict with federal priorities to make the U.S. an AI innovation leader. Whether, or how much, this Executive Order will slow the growing patchwork of state AI regulations remains to be seen.

In the absence of clear federal preemption, employers must determine how to create cohesive strategies and policies for compliance when using AI tools in their workforce.

Rising Risk of Discrimination Claims: The trend of regulators and plaintiffs’ attorneys pursuing discrimination claims against employers over their AI tools will likely continue in 2026. Employers must ensure their AI tools do not discriminate against employees or applicants, for example, when the tools are used to screen candidate resumes. “The algorithm did it” is not a defense. Employers remain fully responsible for the outcomes of the AI tools they deploy, regardless of whether those tools are developed in-house or by third-party vendors.

Ensuring Compliance with Existing Employment Laws: Employers should also confirm that any AI tool they use complies with existing employment laws. For example, employers must consider appropriate disability accommodations related to AI tools and hiring assessments. Similarly, AI tools can present various wage and hour risks if not managed carefully, such as when an AI system takes over duties performed by an exempt employee, a shift that could jeopardize the employee’s exempt status under the Fair Labor Standards Act (FLSA). Unionized employers must also consider whether they need to bargain with union representatives before implementing AI tools that may affect the terms and conditions of employment for bargaining unit employees. Employers that treat AI risk as a purely “new” legal issue often overlook these legacy compliance traps.

Employers can navigate these (and other) issues by having good policies and controls when using AI for HR purposes. A few items employers should consider doing include:

  • Developing AI Governance Policies: Employers should implement clear policies and guidelines governing when and how AI can be used in recruiting, hiring, tracking worker time and productivity, and other HR functions. These policies should also include discussions about proper and meaningful human oversight, the requirements for approving AI tools for use, acceptable use cases, legal requirements that need to be considered (e.g., notice obligations), and requirements for vendor contracts for AI tools.
  • Vetting Vendors: Vendors should be required to be transparent about how their AI systems work, including how their tools have been bias tested in accordance with legal requirements and best practices and whether the tools are accessible to individuals with disabilities. Rote assurances are not enough. Instead, employers should ensure that their vendors contractually commit to ongoing bias testing and, where possible, take contractual responsibility if their tools produce discriminatory outcomes.
  • Auditing AI Tools for Bias: Bias audits (preferably conducted under attorney-client privilege) are essential and should be performed in accordance with legal requirements to mitigate discrimination risks. The audits should examine not just outcomes by protected group, but also training data, “cutoff” scores, and how managers actually rely on the tools in practice.
  • Providing Proper Oversight and Training: Proper training for anyone using or relying on AI tools for HR functions is critical. HR professionals and managers need to understand the limitations of the tools, recognize potential red flags, understand when to provide reasonable accommodations, and know when and how human judgment must intervene rather than defer to automated outputs.
