State Enforcement in the Wake of Trump Executive Order Targeting State Regulation of AI

Troutman Pepper Locke

On December 11, President Donald Trump signed an executive order (EO) that establishes a national artificial intelligence (AI) regulatory framework and attempts to preempt enforcement of state AI laws. Titled “Ensuring a National Policy Framework for Artificial Intelligence,” the EO states that “[i]t is the policy of the United States to sustain and enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI.” This latest effort follows bipartisan opposition in Congress and among state attorneys general (AGs) to previous legislative attempts this year to supersede state AI laws. While the order seeks to minimize a burdensome AI regulatory patchwork, compliance will remain complex given the variety of enforcement tools still available to the states.

In accordance with the EO, the White House will prepare a recommendation to Congress to establish a “uniform Federal policy framework for AI that preempts State AI laws that conflict with the policy set forth in this order.” However, state AI laws relating to child safety protections, data center infrastructure (excluding generally applicable permitting reforms), state government AI procurement and use, and other topics “to be determined” will not be preempted by the legislative recommendation.

Further, in January, the U.S. attorney general will establish an AI Litigation Task Force whose sole purpose is to challenge, under the Commerce Clause and the Supremacy Clause, state AI laws deemed inconsistent with the policy of the U.S. “to sustain and enhance the United States’ global AI dominance.” The secretary of commerce will publish an evaluation of existing state AI legislation by March, identifying onerous laws that conflict with that policy, and will refer such laws to the AI Litigation Task Force. The secretary will also deem states with burdensome AI laws ineligible for Broadband Equity, Access, and Deployment (BEAD) funds.

The Federal Communications Commission (FCC) chair is also tasked with initiating a proceeding to determine whether to adopt a federal reporting and disclosure standard for AI models that would preempt existing state laws. Separately, the Federal Trade Commission (FTC) must issue a policy statement applying the FTC Act’s prohibition on unfair or deceptive acts or practices to AI models, specifically clarifying that state laws requiring the alteration of outputs to comport with predetermined values or ranges are preempted by the act’s prohibition on deceptive practices.

This development follows two earlier congressional attempts to curtail state AI enforcement this year. In early 2025, congressional Republicans introduced the administration-backed “One Big Beautiful Bill,” which, among other things, would have imposed a 10-year moratorium on state laws that limit, restrict, or regulate AI systems. In response, a bipartisan coalition of 40 AGs sent a letter to Congress expressing strong opposition and arguing that the provision violated state sovereignty and impeded their consumer protection duties. Congressional leaders ultimately removed the provision in the face of additional bipartisan opposition. Congress later attempted to insert a similar provision into the National Defense Authorization Act for Fiscal Year 2026, which was likewise stricken amid bipartisan opposition.

State AGs are similarly expected to oppose and challenge the EO. While only four states (California, Colorado, Texas, and Utah) have passed AI-specific governance legislation, state AGs are using myriad tools and tactics to address perceived deficiencies in the AI regulatory scheme, advance public safety, and impose guardrails. Over the past two years, several state AGs, including those of Oregon, Massachusetts, New Jersey, and New York, have warned that they will enforce consumer protection, privacy, anti-discrimination, and housing laws in connection with the development and deployment of AI solutions. By way of example, in July the Massachusetts AG announced a $2.5 million settlement with a Delaware-based student loan company to resolve allegations that the company’s lending practices, including its use of AI models, violated consumer protection and fair lending laws. In another example, last May the Pennsylvania arm of a Las Vegas-based rental management company paid the state of Pennsylvania to settle allegations that its AI platform contributed to delays in repairs and to rentals of unsafe housing.

In November, North Carolina AG Jeff Jackson and Utah AG Derek Brown, along with the Attorney General Alliance, announced a task force with generative‑AI developers — including OpenAI and Microsoft — to identify and develop consumer safeguards within AI systems as these technologies proliferate. The task force creates a mechanism for state AGs to work with technology companies, law enforcement, and AI experts to better insulate the public from AI risks as new systems come online, with a particular focus on child safety.

On December 9, the National Association of Attorneys General sent a bipartisan letter, co-signed by 42 state AGs, to several AI industry leaders expressing concern about “sycophantic and delusional” AI outputs. AI is “sycophantic” when it single-mindedly pursues human approval, and “delusional” when it provides an output that is either false or likely to mislead the user. The letter also highlighted reports of disturbing and dangerous AI interactions with children, including the suicides of two teenagers this year after interactions with AI chatbots. The AGs listed additional safeguards developers should implement to mitigate such risks.

Why It Matters

The Trump administration’s efforts to minimize the regulatory burden associated with the development, adoption, and deployment of AI solutions will likely accelerate AI innovation and growth. But minimal federal regulatory oversight does not mean that developers and deployers can ignore their obligations under other existing laws. In other words, it does not matter whether a company uses AI when engaging in conduct that is otherwise illegal under existing consumer protection laws. AI developers and deployers ignore AI-agnostic laws in every state at their own peril.

Organizations utilizing AI should therefore ensure that they have implemented and documented important consumer safeguards, such as risk impact assessments and risk management systems, and that they can effectively accommodate data-subject requests, among many other considerations. Engaging relevant internal stakeholders and consulting experienced outside counsel will help organizations navigate these opportunities and minimize regulatory exposure in an ever-changing AI landscape.
