White House Launches National Framework Seeking To Preempt State AI Regulation

Skadden Publication / AI Insights

David A. Simon, Stuart D. Levi, William E. Ridgway, Ken D. Kumayama, Don L. Vieira, Brian J. Egan, Shay Dvoretzky, Joshua Silverstein, Pramode Chiruvolu, Lisa V. Zivkovic, Dana E. Holmstrand, Matthew Urfirer

Executive Summary

  • What’s new: On December 11, 2025, President Trump issued an executive order (EO) announcing a national policy to establish a “minimally burdensome” national standard for AI and directing the Department of Justice to challenge state laws deemed inconsistent with that goal.
  • Why it matters: The EO advances the Trump administration’s long-stated goal to curb state-level initiatives regulating AI. The directives in the EO raise complex constitutional issues that will likely result in prolonged litigation and uncertainty for market participants.
  • What to do next: Businesses should closely monitor the implementation of this new federal AI framework, including the activities of the AI Litigation Task Force, forthcoming evaluations of state AI laws, and the development of federal reporting and disclosure standards.

__________

On December 11, 2025, President Donald Trump signed an executive order (EO), “Establishing a National Framework for Artificial Intelligence,” meant to create a “minimally burdensome” unified regulatory approach to sustain and enhance U.S. global AI dominance. The EO cites concerns that a “patchwork” of state regulations will stifle innovation, increase compliance costs, and require entities to “embed ideological bias within [AI] models.” The EO directs certain U.S. government departments and agencies to limit the impact of state AI laws by exploring options to (i) challenge such state AI laws through litigation, (ii) condition certain federal funding to states based on whether they have “onerous” AI laws and (iii) create a federal reporting standard for AI models that would preempt conflicting state laws.

However, the EO’s reliance on federal agencies to set standards that seek to preempt state law raises questions about whether there is sufficient statutory authority for such preemption, and whether the executive branch is overstepping its constitutional role.

The EO follows, in part, from the January 2025 EO “Removing Barriers to American Leadership in Artificial Intelligence” and the July 2025 policy blueprint “Winning the AI Race: America’s AI Action Plan” (AI Policy Blueprint), which called on the federal government not to direct AI-related funding to states with “burdensome” AI regulations. The EO was issued after attempts to include a similar framework in recent legislative packages, including the December 10, 2025, National Defense Authorization Act of 2026, failed to gain congressional support. The EO was released two days after a bipartisan coalition of 42 state attorneys general sent a letter to major AI companies urging improved safeguards for children and mitigation of harmful model outputs, highlighting state regulators’ continued interest in overseeing and regulating AI.

Key Points in the EO

The EO is a statement of policy that directs federal departments and agencies to challenge state laws that present “onerous” regulatory burdens and to develop federal standards to create a more uniform national regulatory regime governing AI. The EO directs the following actions:

AI Litigation Task Force. Within 30 days, the U.S. attorney general is directed to establish an AI Litigation Task Force to challenge state AI laws that are inconsistent with the EO’s stated federal AI policy. The task force’s mandate is to focus on state laws that are deemed to unconstitutionally regulate interstate commerce, are preempted by existing federal regulations or are otherwise unlawful in the attorney general’s judgment, including those identified as “burdensome” through the state law “evaluation” process described below. For example, pursuant to the EO, the AI Litigation Task Force might challenge state laws that prohibit biased results on First Amendment grounds, arguing that the outputs of AI models are protected speech. The task force will coordinate with senior White House advisers, including the special adviser for AI and crypto, to identify existing and emerging state laws that could be challenged in court.

Evaluation of state AI laws. The secretary of commerce, in consultation with White House advisers, including the special adviser for AI and crypto, must publish an evaluation of existing state AI laws within 90 days of the EO. This evaluation must identify “burdensome” laws that conflict with the national AI policy established by the EO. In particular, the evaluation should focus on laws that require AI models to alter truthful outputs or mandate disclosures that may violate First Amendment or other constitutional protections. In addition, the secretary of commerce is directed to identify state laws that could be referred to the AI Litigation Task Force for further review and potential litigation.

Restrictions on state funding. The EO directs federal departments and agencies to take steps to condition states’ eligibility for certain federal funds on whether they have enacted laws that conflict with the EO’s stated policies. Specifically, the Department of Commerce is directed to issue a policy notice making states identified as having “onerous” AI laws in the evaluation process noted above ineligible for “nondeployment” funds associated with the Broadband Equity, Access, and Deployment (BEAD) Program. (Nondeployment funds are those funds remaining in the BEAD program after states meet their primary infrastructure deployment goals.) BEAD was likely singled out because AI systems rely heavily on high-speed broadband networks, and the administration believes that withholding nondeployment funds would impose maximum pressure on states. The EO further instructs all other federal departments and agencies, in consultation with White House officials, to evaluate whether discretionary grant programs may be conditioned on states either (i) not enacting AI laws that conflict with the EO’s stated policies or (ii) agreeing not to enforce those laws during the period in which they receive the discretionary funding. We expect that the conditioning of federal funds contemplated here will be subject to constitutional scrutiny under the Spending Clause in Article I, Section 8, Clause 1.

Federal standards for AI reporting. The Federal Communications Commission is directed to initiate, in consultation with the special adviser for AI and crypto, proceedings to determine whether to adopt a federal reporting and disclosure standard for AI models that would seek to preempt conflicting state regulations. This initiative must occur within 90 days of the Commerce Department’s evaluation of state AI regulations.

Preemption of “deceptive conduct” in state AI regulations. The Federal Trade Commission (FTC), in consultation with the special adviser for AI and crypto, is tasked with issuing a policy statement within 90 days that explains how the FTC Act’s prohibition of unfair or deceptive acts or practices applies to AI models. In particular, the policy statement must outline circumstances under which the FTC Act’s prohibitions may be asserted as a basis for preemption of certain state laws in litigation.

Call for federal AI legislation. The EO calls on the special adviser for AI and crypto and the assistant to the president for science and technology to prepare legislative recommendations to establish a uniform federal AI policy framework that would be intended to preempt state laws that conflict with the policy outlined in the EO. The EO states that the proposed legislation should not seek to preempt state laws regulating child safety, AI compute and data center infrastructure, and state procurement and use of AI, among other areas to be determined.

Respect for copyright ownership. The EO also contains a passing, but important, reference to the rights of copyright owners. In describing a federal AI framework, the EO states: “That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.” (Emphasis added.) This is likely a reference to the use of copyrighted materials to train AI models, and seems to stand in contrast to President Trump’s remark, surrounding the release of the AI Policy Blueprint, that it would be impossible to have a successful AI program “when every single article, book or anything else that you’ve read or studied, you’re supposed to pay for.” Whether this signals a change in the administration’s position on the use of copyrighted works to train AI models remains to be seen.

Looking Ahead

The Trump administration’s EO is likely to increase uncertainty for states with enacted or pending AI regulations, as legal challenges make their way through the courts.

While the EO directs officials to develop and enforce a more uniform federal AI regulatory standard, it does not by itself suspend or invalidate existing state AI laws, create binding federal compliance obligations or safe harbors, provide an immunity or defense for companies currently subject to state enforcement, or establish a federal AI licensing, certification or risk-classification framework. While monitoring the implementation of the EO, including any legislative recommendations for a federal AI regulatory framework, businesses should continue to comply with existing state law — and should not assume that enforcement of state AI laws will pause as a result of the EO.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
