Lone Star AI: How Texas Is Pioneering President Trump’s Tech Agenda
Thursday, July 10, 2025

On June 22, 2025, Texas Governor Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA, or the Act).

The Act, which goes into effect January 1, 2026, “seeks to protect public safety, individual rights, and privacy while encouraging the safe advancement of AI technology in Texas.”

Formerly known as HB 149, the Act requires a government agency to disclose to consumers that they are interacting with AI—no matter how obvious this might appear—through plain language, clear and conspicuous wording requirements, and more. The same disclosure requirement also applies to providers of health care services or treatment, when the service or treatment is first provided or, in cases of emergency, as soon as reasonably possible.

The Act further prohibits the development or deployment of AI systems intended for behavioral manipulation, including AI intended to encourage people to harm themselves, harm others, or engage in criminal activity (see a post by our colleagues on Utah’s regulation of mental health chatbots).

TRAIGA forbids, under certain conditions, governmental use and deployment of AI to evaluate natural persons based on social behavior or personal characteristics (social scoring), as well as governmental development or deployment of AI systems for the purpose of uniquely identifying individuals using biometric data. Notably and broadly, the law prohibits the development or deployment of AI systems by “a person”

  • with the sole intent of producing or distributing child pornography, unlawful deepfake videos or images, certain sexually explicit content, etc.;
  • with the intent to unlawfully discriminate against a protected class in violation of state or federal law; and
  • with the sole intent of infringing on constitutional rights.

This broad coverage would perforce include employers and other organizations using AI tools or systems in both the public and private sectors.

Legislative History of TRAIGA

The original draft of TRAIGA (Original Bill), introduced in December 2024 by State Representative Giovanni Capriglione, was on track to be the nation’s most comprehensive piece of AI legislation. The Original Bill was modeled after the Colorado AI Act and the EU AI Act, focusing on “high-risk” AI systems (see our colleagues’ blog post on Colorado’s historic law). Texas would have imposed significant requirements on developers and deployers of AI systems, including duties to protect consumers from foreseeable harm, conduct impact assessments, and disclose details of high-risk AI to consumers.

In response to feedback, and to the impact of the Trump administration’s push for innovation along with a loosening of regulation, Representative Capriglione and the Texas legislature introduced a significantly pared-back version of TRAIGA, known as HB 149, in March 2025. HB 149 was passed by the Texas House of Representatives in April and by the Texas State Senate in May, before Governor Abbott signed it into law in June 2025.

Current Version

The Act no longer mentions high-risk AI systems. It focuses primarily on AI systems developed or deployed by government entities, though, as noted above, some disclosure requirements apply to health care entities and some prohibitions remain as to developers and deployers.

Unlike the Original Bill, the Act does not require private entities to conduct impact assessments, implement risk management policies, or disclose to consumers when they are interacting with AI. The Act also restricts its prohibition of social scoring to government entities. The Act explicitly states that disparate impact is not enough to impose liability for unlawful discrimination against individuals in state or federal protected classes. The latter provision clearly stems from Trump policy goals discouraging, if not prohibiting, disparate impact as an indicator of illicit discrimination (see our April Insight on this topic).

The Act establishes an AI Advisory Council, composed of seven members appointed by the Governor, Lieutenant Governor, and Speaker of the House. The Council will assist the state legislature and state agencies by identifying and recommending AI policy and legal reforms. It will also conduct AI training programs for state agencies and local governments. The Council is explicitly prohibited, however, from promulgating binding rules and regulations itself.

The Act vests sole enforcement authority with the Texas Attorney General (AG), except to the extent that state agencies may impose sanctions under certain conditions if recommended by the AG. The Act explicitly provides no private right of action for individuals. Under the Act, the AG is required to develop a reporting mechanism for consumer complaints of potential violations. The AG may then issue a civil investigative demand to request information, including requesting a detailed description of the AI system.

After receiving notice of the violation from the AG, a party has 60 days to cure, after which the AG may bring legal action and seek civil penalties for uncured violations. Curable violations are subject to a fine of $10,000 to $12,000 per violation. Uncurable violations are subject to a fine of $80,000 to $200,000 per violation. Continuing violations are subject to a fine of $40,000 per day. The Act also gives state agencies the authority to sanction parties licensed by that agency by revoking or suspending their licenses, or by imposing monetary penalties of up to $100,000.

AI Regulatory Sandbox Program Under TRAIGA

Perhaps most notably, the final version of TRAIGA establishes a “regulatory sandbox” exception program (the “Program”) to encourage AI innovation. The Program will be administered by the Texas Department of Information Resources (DIR) and is designed to support the testing and development of AI systems under relaxed regulatory constraints.

Program applicants must provide a detailed description of the AI system, including

  • the benefits and impacts the AI system will have on consumers, privacy, and public safety;
  • mitigation plans in case of adverse consequences during testing; and
  • proof of compliance with federal AI laws and regulations.

Participants must submit quarterly reports to DIR, which DIR will use to submit annual reports to the Texas legislature with recommendations for future legislation. Quarterly reports will include performance metrics, updates on how the AI system mitigates risk, and feedback from consumers and stakeholders. Participants will have 36 months to test and develop their AI systems, during which time the Texas AG cannot file charges and state agencies cannot pursue punitive action for violating the state laws and regulations waived under TRAIGA.

TRAIGA is neither the first nor the only AI legislation to establish a regulatory sandbox program—described in a 2023 report of the Organisation for Economic Co-operation and Development (OECD) as one where “authorities engage firms to test innovative products or services that challenge existing legal frameworks” and where “participating firms obtain a waiver from specific legal provisions or compliance processes to innovate.” Regulatory sandboxes in fact existed before the widespread application of AI systems; the term is widely credited to the UK Financial Conduct Authority (FCA), which introduced the concept as part of its “Project Innovate” in 2014 to encourage innovation in the fintech sector. Project Innovate’s regulatory sandbox launched in 2016 to create a controlled environment for businesses to test new financial products and services.

Regarding AI, Article 57 of the European Union’s AI Act mandates that member states must establish at least one AI regulatory sandbox at the national level, which must be operational by August 2, 2026. This Article also explains the purpose and goal for regulatory sandboxes: to provide a controlled environment to foster innovation and facilitate the development, training, testing, and validation of AI systems, before they are put on the market or into service.

Pending AI bills in several other US states would, if enacted, establish their own AI regulatory sandboxes. Connecticut’s bill (CTSB 2) would establish various requirements concerning AI systems, including an AI regulatory sandbox program. The bill passed the State Senate on May 14, 2025, and is currently with the House.

Delaware’s House Joint Resolution 7 would, if enacted, direct an Artificial Intelligence Commission to work in collaboration with the Secretary of State to create a regulatory sandbox framework. The resolution recognizes that “other states and nations are using regulatory sandboxes, frameworks set up by regulators in which companies are exempt from the legal risk of certain regulations under the supervision of regulators, to test innovate and novel products, services, and technologies.” HJR 7 passed both the House and Senate and is awaiting action by the Governor.

Oklahoma’s bill (HB 1916), introduced on February 3, 2025, calls for a new law to be codified: the Responsible Deployment of AI Systems Act. That act would establish an AI Council to, among other things, oversee a newly created AI Regulatory Sandbox Program, which would “provide a controlled environment for deployers to test innovative AI systems when ensuring compliance with ethical and safety standards.”

Future Developments

Texas enacted TRAIGA against the backdrop of a proposed 10-year federal moratorium on state governments’ ability to enact and enforce legislation regulating some applications of AI systems or automated decision systems. The proposed moratorium was part of President Trump’s comprehensive domestic policy bill, referred to as the “big, beautiful bill.” However, on July 1, 2025, the U.S. Senate voted nearly unanimously—99 to 1—in favor of removing the moratorium from the bill before it passed later that day.

Some predict its return, at least in some form. For now, the White House’s AI Action Plan, slated for release in July 2025, should put federal-level AI right back in the headlines. Executive Order 14179 of January 23, 2025, “Removing Barriers to American Leadership in Artificial Intelligence,” called for the submission of such a plan within 180 days—to be developed by the assistant to the president for Science and Technology (APST), the special advisor for AI and Crypto, the assistant to the president for National Security Affairs (APNSA), and more. In February, the White House issued a Request for Information (RFI) seeking public comment on policy ideas for the AI Action Plan, designed to “define priority policy actions to enhance America’s position as an AI powerhouse and prevent unnecessarily burdensome requirements from hindering private sector innovation.” By late April, the Office of Science and Technology Policy (OSTP) reported that more than 10,000 public comments had been received from interested parties including academia, industry groups, private sector organizations, and state, local, and tribal governments.

We expect to have lots on the AI front to report for our readers during the second half of 2025.

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.
