Virginia Governor Glenn Youngkin vetoed an artificial intelligence (“AI”) bill on March 24 that would have regulated how employers use automation in the hiring process. While the veto spares employers a new layer of regulation, the bill represented one of several state-level efforts to prevent potentially harmful uses of AI in the employment context.
The Virginia General Assembly passed the “High-Risk Artificial Intelligence Developer and Deployer Act” during the 2025 legislative session. The bill would have regulated both creators and users of AI technology across multiple use cases, including employment. It defined “high-risk artificial intelligence” to cover any AI system intended to make autonomous consequential decisions or to serve as a substantial factor in making consequential decisions. As relevant to the employment context, “consequential decisions” included decisions about “access to employment.”
The law would have required Virginia employers to implement safeguards against potential harm from “high-risk” AI, including adopting a risk management policy and conducting an impact assessment for the use of the technology. It also would have required users of covered AI systems to disclose their use to affected consumers, including job applicants. The bill called for enforcement by the Virginia Attorney General only, with designated civil penalties for violations and no private right of action. But it also specified that each violation would be treated separately, creating the potential for significant aggregate penalties: if, for example, an employer failed to disclose its use of AI to a large group of applicants, the $1,000 penalty would apply to every applicant impacted.
Youngkin said he vetoed the bill because he feared it would undermine Virginia’s progress in attracting AI innovators to the Commonwealth, including thousands of new tech startups. He also said existing laws related to discrimination, privacy and data use already provided necessary consumer protections related to AI. Had the bill avoided the governor’s veto pen, Virginia would have joined Colorado as the first two states to approve comprehensive statutes specifically governing the use of AI in the employment context. The Colorado law, passed in 2024, takes effect on February 1, 2026, and shares many similarities with the bill Youngkin vetoed, including requirements that users of high-risk AI technology exercise reasonable care to prevent algorithmic discrimination.
Other states have laws that touch on AI-related topics but lack the detail and specificity of the Colorado law. In several more states, attempts to regulate the use of AI in the employment context are meeting fates similar to Virginia’s bill. For example, Texas legislators recently abandoned efforts to pass an AI bill modeled on the Colorado legislation. Similar bills have failed or appear likely to fail in Georgia, Hawaii, Maryland, New Mexico and Vermont. And even in states with more extensive employment regulation, like Connecticut, Democratic Governor Ned Lamont has resisted lawmakers’ efforts to push through AI regulations. The exception to the trend may be California, where legislators are continuing to pursue legislation, A.B. 1018, that closely resembles both the Colorado and Virginia bills but carries even steeper penalties.
In all, states remain interested in regulating emerging AI tools but have yet to align on the best approach in the employment context. Still, employers should exercise caution when using automated tools or outsourcing decision-making to third parties that rely on such technology. Existing laws, including the Fair Credit Reporting Act and Title VII of the Civil Rights Act, still apply to these new technologies. And while momentum for new state-level AI regulation appears stalled, employers should monitor state-level developments as similar proposed laws proceed through state legislatures.