On 23 December 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act),1 adding Texas to the list of states seeking to regulate artificial intelligence (AI) in the absence of federal law. The Act establishes obligations for developers, deployers, and distributors of certain AI systems in Texas. While the Act covers a variety of areas, this alert focuses on the Act’s potential impact on employers.2
The Act’s Regulation of Employers as Deployers of High-Risk AI Systems
The Act seeks to regulate employers’ and other deployers’ use of “high-risk artificial intelligence systems” in Texas. High-risk AI systems include AI tools that make, or are a contributing factor in making, “consequential decisions.”3 In the employment space, this could include hiring, performance, compensation, discipline, and termination decisions.4 The Act does not cover several common technologies, such as tools intended to detect decision-making patterns, anti-malware and antivirus programs, and calculators.
Under the Act, covered employers would have a general duty to use reasonable care to prevent algorithmic discrimination—including a duty to withdraw, disable, or recall noncompliant high-risk AI systems. To satisfy this duty, the Act requires covered employers and other covered deployers to do the following:
Human Oversight
Ensure human oversight of high-risk AI systems by persons with adequate competence, training, authority, and organizational support to oversee consequential decisions made by those systems.5
Prompt Reporting of Discrimination Risks
Report discrimination risks promptly by notifying the Artificial Intelligence Council (which would be established under the Act) no later than 10 days after the date the deployer learns of the risk.6
Regular AI Tool Assessments
Assess high-risk AI systems regularly, including through an annual review, to ensure that the systems are not causing algorithmic discrimination.7
Prompt Suspension
If a deployer considers or has reason to consider that a system does not comply with the Act’s requirements, suspend use of the system and notify the system’s developer of such concerns.8
Frequent Impact Assessments
Complete an impact assessment on a semi-annual basis and within 90 days after any intentional or substantial modification of the system.9
Clear Disclosure of AI Use
Before or at the time of interaction, disclose to any Texas-based individual:
- That they are interacting with an AI system.
- The purpose of the system.
- That the system may or will make a consequential decision affecting them.
- The nature of any consequential decision in which the system is or may be a contributing factor.
- The factors used in making any consequential decisions.
- Contact information of the deployer.
- A description of the system.10
Takeaways for Employers
The Act is likely to be a key topic of discussion in Texas’s upcoming legislative session, which is scheduled to begin on 14 January 2025. If enacted, the Act would establish a consumer protection-focused framework for AI regulation. Employers should track the Act’s progress and any amendments to the proposed bill while also taking steps to prepare for the Act’s potential passage. For example, employers using or seeking to use high-risk AI systems in Texas can:
- Develop policies and procedures that govern the use of AI systems to make or impact employment decisions:
  - Include in these policies and procedures clear explanations of (i) the systems’ uses and purposes, (ii) the systems’ decision-making processes, (iii) the permitted uses of such systems, (iv) the approved users of such systems, (v) training requirements for approved users, and (vi) the governing body overseeing the responsible use of such systems.
- Develop and implement an AI governance and risk-management framework with internal policies, procedures, and systems for review, flagging risks, and reporting.
- Ensure human oversight over AI systems.
- Train users and those tasked with overseeing the AI systems.
- Ensure there are sufficient resources committed to, and an adequate budget assigned to, overseeing and deploying AI systems and complying with the Act.
- Conduct due diligence on AI vendors and developers before engagement, and on AI systems before use, including how those vendors, developers, and systems test for, avoid, and remedy algorithmic bias, and whether the vendors and developers comply with the Act’s requirements for developers of high-risk AI systems.
Footnotes
1 A copy of HB 1709 is available at: https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB01709I.pdf (last accessed: 9 January 2025).
2 Section 551.001(8).
3 Section 551.001(13). The Act defines a “consequential decision” as “a decision that has a material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of: (A) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision; (B) education enrollment or an education opportunity; (C) employment or an employment opportunity; (D) a financial service; (E) an essential government service; (F) residential utility services; (G) a health-care service or treatment; (H) housing; (I) insurance; (J) a legal service; (K) a transportation service; (L) constitutionally protected services or products; or (M) elections or voting process.”
4 Id.
5 Section 551.005.
6 Section 551.011.
7 Section 551.006(d).
8 Section 551.005.
9 Section 551.006(a).
10 Section 551.007(a).