AI Trends For 2026 - How States Will Shape AI Enforcement

MoFo Tech

As federal momentum toward a comprehensive U.S. AI law remains stalled, state regulators are stepping decisively into the gap. Heading into 2026, state attorneys general are likely to play an increasingly central role in shaping AI governance, not by waiting for new statutes, but by actively enforcing existing consumer privacy and AI-related laws. Two trends stand out: the use of profiling restrictions as a de facto AI enforcement mechanism and the continued expansion of a state-by-state AI regulatory patchwork.

Modern state privacy laws already provide regulators with a powerful hook. Many include limits on “profiling,” often defined as automated decision making (and in some states limited to solely automated processing), particularly where those activities produce legal or similarly significant effects on individuals. In practice, these provisions give state attorneys general a ready-made framework for scrutinizing high-risk AI systems. Enforcement actions are likely to focus first on familiar compliance failures: inadequate or unclear notices, missing or inoperative opt-out mechanisms, discriminatory or biased outcomes, and ineffective or illusory appeals processes.

Rather than regulating AI specifically, state regulators can frame these cases as failures of consumer protection and privacy compliance. That approach allows state attorneys general to challenge algorithmic decision making without needing to litigate the technical design or performance of the AI models.

At the same time, the broader legislative landscape remains fragmented. There is still no realistic prospect of an omnibus federal AI or privacy statute in the near term. In response, states will continue proposing and enacting their own privacy and AI laws, but with a noticeable shift in emphasis. Following the December executive order signaling potential federal resistance to certain state AI regulatory approaches, lawmakers are likely to focus on areas viewed as less vulnerable to preemption or legal challenge, such as child safety protections.

For organizations operating across multiple states, the fragmented legislative landscape creates a familiar challenge. The patchwork will persist, and compliance will require careful mapping of AI use cases against overlapping privacy, consumer protection, and AI-specific requirements. Enforcement risk will increasingly turn on whether companies can demonstrate that they identified high-risk uses, assessed potential impacts, implemented meaningful safeguards, and provided consumers with clear disclosures and workable remedies.

Looking ahead to 2026, companies should expect state attorneys general to be among the most active AI regulators in the United States. The absence of federal legislation has not produced regulatory silence. Instead, states will continue to adapt existing tools and enact targeted measures to shape AI deployment. Organizations that treat profiling restrictions, transparency obligations, and appeals mechanisms as core components of AI governance will be better positioned to manage enforcement risk in an increasingly state-driven regulatory environment.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© MoFo Tech
