
Texas has officially joined the ranks of states implementing artificial intelligence (AI) regulations. On June 21, 2025, Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law, with an effective date of January 1, 2026. This legislation seeks to balance fostering AI innovation with establishing safeguards, particularly for governmental use of AI systems. However, the long-term impact of TRAIGA remains uncertain due to a proposed federal measure that could preempt state-level AI regulation.
Key Provisions of the Texas AI Law
TRAIGA’s scope encompasses both public and private sector AI applications, though its most stringent requirements are directed at state agencies. The law specifically prohibits AI use that could lead to discrimination, incite violence or self-harm, or infringe upon constitutional rights. Additionally, it restricts the use of biometric data without informed consent, a provision likely to be welcomed by privacy advocates.
Provisions Applicable to State Agencies:
- State agencies are mandated to inform individuals when they are interacting with an AI system.
- The law prohibits state agencies from using AI for purposes that would result in discrimination, promote violence or self-harm, or otherwise infringe constitutional rights.
- The use of biometric data by state agencies without informed consent is restricted.
Provisions Applicable to Private Entities:
- While TRAIGA is broad in scope, its most substantive obligations and enforcement exposure fall primarily on government agencies.
- TRAIGA does not impose notification requirements on private companies when individuals are interacting with an AI system, a distinction from regulations in states like Colorado and California.
- Private companies are granted a 60-day cure period to fix violations before any penalties are incurred. This indicates a “light touch” regulatory approach for the private sector.
- The law addresses private sector AI use but stops short of imposing aggressive compliance mandates on businesses.
- Enforcement rests solely with the Texas Attorney General, and there is no private right of action for violations.
What TRAIGA Seeks to Protect (applicable to both public and private sectors):
- Fundamental Rights and Freedoms: TRAIGA aims to safeguard individual rights guaranteed under the U.S. Constitution.
- Protection from Discrimination: The law prohibits AI use that would unlawfully discriminate against protected classes in violation of state or federal law.
- Prevention of Harm (Self-harm, Violence): TRAIGA prohibits AI systems designed to manipulate human behavior to incite physical self-harm, harm another person, or engage in criminal activity.
- Privacy and Biometric Data: The law restricts the use of biometric data without informed consent.
- Prevention of Unlawful Explicit Content and Deepfakes: TRAIGA prohibits the development or distribution of AI systems with the sole intent of producing, assisting, or distributing child pornography or unlawful deepfake videos or images. It also prohibits intentionally developing or distributing an AI system that engages in explicit text-based conversations while impersonating a child under 18.
Balancing Innovation and Oversight
One of TRAIGA’s more forward-looking features is the establishment of a regulatory sandbox for AI companies. This initiative aims to facilitate the development of new technologies within a supervised, lower-risk environment. Coupled with an AI Advisory Council tasked with aligning policy with technological advancements, Texas’s approach emphasizes support and collaboration rather than prescriptive mandates.
Texas’s Stance in the National AI Regulatory Landscape
Compared to AI legislation enacted in California and Colorado, TRAIGA adopts a more minimalist, innovation-centric strategy:
- California’s AB 2013 focuses on transparency for generative AI developers, particularly regarding training data, without limiting use cases.
- Colorado’s AI Act specifically targets “high-risk” applications such as hiring, credit, and housing, mandating detailed impact assessments, risk management plans, and fairness audits.
In contrast, Texas has opted for a cautious approach, primarily regulating AI within the public sector and encouraging best practices in the private sector without imposing aggressive compliance obligations.
The Looming Federal Preemption
A critical factor impacting TRAIGA’s future is a provision in a recent federal budget proposal by House Republicans. This provision seeks to implement a ten-year moratorium on state and local AI regulations, unless such regulations are specifically designed to accelerate AI deployment. Should this federal measure pass, it could effectively supersede TRAIGA and similar state-level initiatives.
The Future Relevance of TRAIGA
The ultimate significance of TRAIGA hinges on the outcome of the proposed federal moratorium. If the federal preemption does not materialize, TRAIGA could serve as a model for other states seeking to implement AI governance with a focus on innovation. However, if Congress enacts the federal override, TRAIGA may become largely symbolic.
Nevertheless, the passage of TRAIGA underscores a growing trend among state legislatures to proactively shape AI policy in the absence of comprehensive federal guidance. The enduring impact of these state-level efforts will ultimately depend on both state-specific political dynamics and broader federal legislative developments.