Texas Charts New Path on AI With Landmark Regulation

Skadden Publication / AI Insights

Maria Cruz Melendez, Stuart D. Levi, William E. Ridgway, Brittany E. Libson

Texas has become the second state, after Colorado, to enact omnibus legislation regulating artificial intelligence (AI) systems. On June 22, 2025, Texas Gov. Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which will go into effect on January 1, 2026. The Act establishes a new regulatory framework that applies to developers and deployers of AI systems conducting business in Texas or producing AI products or services used by Texas residents.

The passage of the Texas law is noteworthy given that in recent months, Gov. Gavin Newsom of California and Gov. Glenn Youngkin of Virginia each vetoed their respective state’s omnibus AI legislation. It remains to be seen whether other states will follow Texas’ and Colorado’s lead or will avoid omnibus AI laws and instead regulate specific activities, such as the use of deepfakes. The passage of the Texas law also comes as Congress is considering a 10-year moratorium on most state AI laws.

Key Points of TRAIGA

  • The main focus of TRAIGA is to prohibit intentional discriminatory, manipulative and other harmful uses of AI systems, although several of its prohibitions apply only to government entities or to conduct that is otherwise illegal even absent the use of AI.
  • TRAIGA imposes certain disclosure requirements on the use of AI by government agencies.
  • TRAIGA also amends existing consent requirements for harvesting biometric data from online media for commercial purposes and creates exceptions to those requirements, including for training most AI systems.
  • TRAIGA specifies categories of information that the attorney general may request on AI systems and provides guidance on the types of records developers and deployers should maintain to satisfy those requests.
  • TRAIGA does not provide for a private right of action. Enforcement is limited to the state attorney general.
  • The Act establishes a new advisory body, the Texas Artificial Intelligence Council, to study and oversee AI systems operating in Texas, issue recommendations to the legislature and consider using AI in government operations.

Overview

TRAIGA broadly applies to developers and deployers of any “artificial intelligence system,” defined as any machine-based system that “infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.” This is a broad definition that extends beyond so-called generative AI, which is limited to the generation of content.

The Act prohibits the development or deployment of an AI system:

  • In a manner that intentionally aims to incite or encourage harm to another person, physical self-harm, or crime.
  • With the sole intent to violate or impair an individual’s rights under the U.S. Constitution.
  • With the intent to unlawfully discriminate based on race, sex, religion or other characteristics protected under federal or state law (with exceptions for federally insured financial institutions and regulated insurance entities that are already subject to antidiscrimination laws).
  • With the sole intent of producing, aiding in producing or distributing child pornography, nonconsensual deepfakes, or sexually explicit chats impersonating a child.

Many of these prohibited actions, such as producing or distributing child pornography, are illegal in Texas even absent the use of AI.

Government agencies are also prohibited from using AI:

  • For social scoring that may infringe individuals’ rights or result in other detrimental or unfavorable consequences to individuals.
  • To identify individuals using biometric data or images or other media captured from the internet without their consent, if gathering the data would infringe on the individuals’ constitutional or other rights under federal or Texas laws.

In addition, TRAIGA requires that government agencies disclose when they use AI systems to interact with consumers.

Importantly, TRAIGA also amends the Texas Capture or Use of Biometric Identifier Act (CUBI), which concerns the commercial use of biometric data, to clarify that individuals do not implicitly consent to the harvesting of their biometric data based solely on the existence of online images or videos unless they personally posted them. TRAIGA also creates several exemptions from the consent requirements, including for financial institutions using voiceprint data, for training AI systems to be used for purposes other than identifying individuals, and for fraud detection and other similar security measures.

Enforcement by the Attorney General

The attorney general has exclusive authority to enforce TRAIGA and may bring an enforcement action to seek civil penalties and injunctive relief, and to recover attorney’s fees, court costs and investigative expenses.

Information Requests by the Attorney General

TRAIGA requires the attorney general to create an online mechanism through which consumers may submit complaints alleging violations of the Act and grants the attorney general the authority to issue civil investigative demands to investigate such complaints. The Act specifies that the attorney general may request the following information regarding an AI system from either developers or deployers of such systems:

  • A high-level description of its purpose, intended use, deployment context and associated benefits.
  • A description of its programming or training data.
  • High-level descriptions of data processed as inputs and outputs produced.
  • Any metrics used to evaluate its performance.
  • Any known limitations of the system.
  • A high-level description of the post-deployment monitoring and safeguards used for the system.
  • Any other documents reasonably necessary for the attorney general’s investigation.

The Act does not grant the attorney general the authority to investigate potential violations for which consumers have not submitted complaints. However, the attorney general can still bring enforcement actions for violations of the Act discovered through investigations authorized under other state laws.

Safe Harbors From Enforcement

TRAIGA establishes several safe harbors from enforcement actions. Specifically, the Act:

  • Requires the attorney general to provide written notice before initiating an action, giving the company 60 days to cure the violation and implement any policy changes necessary to reasonably prevent further violations.
  • Creates an affirmative defense for developers and deployers that discover violations through feedback from others, testing, following applicable state agency guidelines, or substantially complying with the latest version of the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” (the AI RMF GenAI Profile) published by the National Institute of Standards and Technology (NIST) or a similar AI risk management framework.
  • Shields developers from liability if another person uses their AI system in a manner prohibited by the Act.

A company that fails to cure a violation or to fall within one of the safe harbors may be liable for civil penalties ranging from $10,000 to $12,000 for each curable violation, from $80,000 to $200,000 for each uncurable violation, and from $2,000 to $40,000 for each day a violation continues.

Following an enforcement action by the attorney general, other state agencies may impose additional sanctions on licensed, registered or certified persons, including fines up to $100,000 and suspending or revoking their authorization to conduct business activities.

AI Sandbox Program

TRAIGA requires the Texas Department of Information Resources to create a regulatory sandbox program, which will allow program participants to test AI systems for a limited time under state supervision without complying with certain regulatory requirements.

What Should Companies Be Doing?

Notably, TRAIGA gives the Texas attorney general a new remedial mechanism to enforce existing laws, target manipulative AI systems, and seek financial penalties for unlawful discrimination and other consumer protection violations. Companies should consider preparing for the implementation of TRAIGA on January 1, 2026, by assessing how their development or use of AI systems, as well as their practices for harvesting biometric data, might implicate TRAIGA’s provisions, including:

  • Documenting the intended purposes of AI systems and any guardrails in place to prevent unlawful discrimination or outputs that may incite or encourage harm to others, physical self-harm or crime.
  • Formally adopting and documenting substantial compliance with the NIST AI RMF GenAI Profile or another nationally or internationally recognized risk management framework for AI systems, to mitigate enforcement risk.
  • Monitoring and ensuring compliance with guidelines on AI systems set by applicable state agencies.
  • Assessing whether recordkeeping procedures should be updated to track the specified categories of information the attorney general may request in an enforcement proceeding and implementing any appropriate updates.
  • Assessing whether existing practices and procedures for harvesting biometric data comply with TRAIGA’s revisions to CUBI’s consent provisions and implementing any necessary changes to ensure compliance.

The attorney general’s first enforcement actions will set the tone for how aggressively the state approaches AI regulation. Meanwhile, businesses should watch for further developments at the federal level that could interfere with Texas’ efforts to enforce its new law and reshape the AI legal landscape.

This memorandum is provided by Skadden, Arps, Slate, Meagher & Flom LLP and its affiliates for educational and informational purposes only and is not intended and should not be construed as legal advice. This memorandum is considered advertising under applicable state laws.
