Texas Enacts “Responsible AI Governance Act”
Tuesday, June 24, 2025

The Texas Responsible AI Governance Act (TRAIGA), signed into law on June 22, 2025, represents a significant evolution in state-level AI regulation. Originally conceived as a comprehensive risk-based framework modeled after the Colorado AI Act and EU AI Act, TRAIGA underwent substantial modifications during the legislative process, ultimately emerging as a more targeted approach that prioritizes specific prohibited uses while fostering innovation through regulatory flexibility. Set to take effect on January 1, 2026, TRAIGA marks Texas as a key player in the developing landscape of AI governance in the United States. TRAIGA’s evolution from comprehensive risk assessment to targeted prohibition also reflects deeper challenges in AI governance: how do we regulate technologies that can modify their own behavior faster than traditional oversight mechanisms can adapt?

From Sweeping Framework to Targeted Regulation. The original draft, introduced in December 2024, proposed an expansive regulatory scheme that would have imposed extensive obligations on developers and deployers of “high-risk” AI systems, including mandatory impact assessments, detailed documentation requirements, and comprehensive consumer disclosures. The final version, following stakeholder feedback and legislative debate, represents a significant shift from process-heavy compliance requirements to outcome-focused restrictions. Rather than creating broad categories of regulated AI systems, the enacted version attempts to strike a balance of preventing specific harms while maintaining Texas’s business-friendly environment.

Core Prohibitions. TRAIGA establishes a prohibited-uses framework covering AI systems designed or deployed for:

1. Behavioral Manipulation: Systems that intentionally encourage physical harm or criminal activity.

2. Constitutional Infringement: AI developed with the sole intent of restricting federal Constitutional rights.

3. Unlawful Discrimination: Systems intentionally designed to discriminate against protected classes.

4. Harmful Content Creation: AI for producing child pornography, unlawful deepfakes, or impersonating minors in explicit conversations.

Notably, the Act requires intent as a key element for liability. This intent-based standard provides important protection for developers whose systems might be misused by third parties, while still holding bad actors accountable. The provision clarifying that “disparate impact” alone is insufficient to demonstrate discriminatory intent aligns with recent federal policy directions and provides clarity for businesses navigating compliance. While this intent-based framework addresses obvious harmful uses, it leaves open more complex questions about AI systems that influence human decision-making through design choices that fall below the threshold of conscious intent — systems that shape choice environments without explicitly intending to manipulate. Companies should consider how their AI systems structure user choice environments, even when not explicitly designed to influence behavior.

The Sandbox Program. TRAIGA implements a regulatory sandbox program administered by the Department of Information Resources. This 36-month testing environment allows AI developers to experiment with innovative applications while temporarily exempt from certain regulatory requirements. Participants must submit quarterly reports detailing system performance, risk mitigation measures, and stakeholder feedback.

Enforcement. TRAIGA vests enforcement authority exclusively with the Texas Attorney General, avoiding the complexity of multiple enforcement bodies or private rights of action. The enforcement framework includes several features designed to incentivize proactive compliance and self-correction while providing meaningful deterrents for intentional bad actors:

  • 60-day cure period for violations before enforcement actions can proceed
  • Affirmative defenses for companies that discover violations through internal processes, testing, or compliance with recognized frameworks like NIST's AI Risk Management Framework (RMF)
  • Scaled penalties ranging from $10,000 – $12,000 for curable violations to $80,000 – $200,000 for uncurable ones
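The scaled-penalty tiers above can be summarized in a minimal sketch. The dollar ranges come from the enforcement provisions as described in this article; the function name and the curable/uncurable flag are illustrative conveniences, not terminology drawn from the statute itself.

```python
def penalty_range(curable: bool) -> tuple[int, int]:
    """Return the (min, max) civil penalty in dollars for a TRAIGA violation.

    Illustrative only: TRAIGA scales penalties by whether the violation
    is curable within the 60-day cure period.
    """
    if curable:
        return (10_000, 12_000)    # curable violations
    return (80_000, 200_000)       # uncurable violations


print(penalty_range(True))   # curable tier
print(penalty_range(False))  # uncurable tier
```

The tiering reflects the statute's design choice: self-correctable lapses face modest exposure, while violations that cannot be cured carry an order-of-magnitude larger penalty.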

The Texas AI Council. TRAIGA establishes the Texas Artificial Intelligence Advisory Council, which is explicitly prohibited from promulgating binding rules, and instead focuses on:

  • Conducting AI training for government entities
  • Issuing advisory reports on AI ethics, privacy, and compliance
  • Making recommendations for future legislation
  • Overseeing the sandbox program

Implications for Businesses. For companies operating in Texas, TRAIGA offers both clarity and flexibility. Its focus on intentional harmful uses rather than broad categories of “high-risk” systems reduces compliance uncertainty. Key considerations for businesses include:

1. Intent Documentation: Companies should maintain clear records of their AI systems’ intended purposes and uses 

2. Testing Protocols: Implementing robust testing procedures can provide affirmative defenses 

3. Framework Adoption: Compliance with recognized frameworks like NIST’s AI RMF offers legal protection 

4. Sandbox Opportunities: Innovative applications can benefit from the regulatory flexibility offered by the sandbox program

National and Policy Implications. TRAIGA’s passage positions Texas as a significant voice in the national AI governance conversation. Its pragmatic approach contrasts with more prescriptive frameworks proposed elsewhere, potentially offering a model for AI regulation that prioritizes innovation while addressing concrete harms. However, TRAIGA also contributes to the growing patchwork of state AI laws, raising questions about regulatory fragmentation. With Colorado, California, Utah, and now Texas each taking different approaches to more comprehensive AI regulation, businesses face an increasingly complex compliance landscape. This fragmentation may accelerate calls for federal preemption or a unified national framework.

Conclusion

The Texas Responsible AI Governance Act charts a distinctive course in AI governance by focusing on specific prohibited uses rather than comprehensive risk assessments. However, TRAIGA’s effectiveness will ultimately depend on how well traditional regulatory frameworks can adapt to technologies that operate at machine speed while making decisions that fundamentally affect human agency and choice. While the act addresses intentional harmful uses, the more challenging questions may involve AI systems that influence decision-making environments in ways that fall outside traditional regulatory categories. As other states and the federal government continue to grapple with AI regulation, the Texas model offers valuable lessons — and reveals limitations — of applying legal frameworks to technologies that challenge basic assumptions about agency, intent, and choice.
