The Coming Battle Over New York's RAISE Act
Monday, June 30, 2025

“Would you let your child ride in a car with no seatbelt or airbags? Of course not. So why would you let them use powerful AI without basic safeguards in place?” 

That is the question New York State Senator Andrew Gounardes posed when advocating for his Responsible AI Safety and Education (RAISE) Act, which recently passed both chambers of the state legislature with strong bipartisan support.

The bill, crafted by New York Assemblyman and former Palantir employee Alex Bores, requires developers of frontier AI models to implement detailed, transparent safety protocols and to report major violations of those rules. Lawmakers are particularly worried about the potential for “concerning AI model behavior” and the theft of AI models by bad actors. The bill also imposes severe financial penalties if an AI model causes or enables any of the following:

  • Death or injury of more than 100 people
  • Economic damages exceeding $1 billion
  • Assistance in creating biological or chemical weapons
  • Facilitation of large-scale criminal activity

The RAISE Act was shaped, in part, by the failure of California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047). That bill, which would have imposed sweeping safety mandates on high-compute AI models, was vetoed in 2024 over concerns that it was too broad and could stifle innovation. In response, the RAISE Act takes a more focused approach, avoiding the more controversial provisions that sank California’s effort. Notably, it does not require AI models to have a “kill switch” that deactivates the model, nor does it hold companies liable if they simply add a layer on top of frontier AI models.

The bill’s author, Bores, has said he believes the policy is “common sense.” However, it is unclear whether Governor Kathy Hochul will sign it into law. Tech:NYC, a trade group that represents Google and Meta, has strongly urged her to veto the bill. Additionally, on June 13, Bores claimed via X that Andreessen Horowitz, a prominent Silicon Valley venture capital firm known for backing major tech companies, had been calling members of New York’s legislature in an effort to rally opposition to the bill.

While one might assume that these companies oppose the bill out of reflexive resistance to regulation of any kind, leaders in the field have pinpointed specific flaws. They argue the bill will stifle innovation and could force AI companies to leave New York or make their products unavailable to New Yorkers, undermining the state’s position as a technology capital.

One of the most common critiques is that the bill holds developers liable for how third parties use their tools, even though such use is often outside the developers’ control. Critics contend that the law not only erodes existing legal protections around third-party misuse but also disincentivizes the release of freely available AI models. Access to both open-source and commercial AI systems is vital to researchers and startups, and it helps fuel the collaborative culture that drives innovation in the field. Yet the threat of liability for downstream misuse may deter companies from releasing their models in New York. The Chamber of Progress released a statement claiming that the bill’s passage would be tantamount to “an eviction notice” for New York’s AI startups.

The bill has also faced opposition from the tech industry for targeting the models themselves rather than the humans who misuse them. In an editorial for City Journal, Will Rinehart argued that improper use of AI models stems from human intent rather than any inherent flaw in the models. He further contended that market solutions such as content filters are more effective because they can evolve to prioritize actual safety in a way that a static statutory checklist cannot.

The opposing viewpoints over this proposed law are symbolic of a broader struggle between AI-safety advocates and the tech industry to influence legislation on the matter. A victory for either side would help set the early tone for the future of this battle, either providing a template for state-level regulations or discouraging their proliferation.

Of course, this could all be moot. Trump’s “One Big Beautiful Bill Act,” which recently passed the House, included a 10-year moratorium on state AI regulation. While the provision has been softened to allow state-level regulation at the risk of losing federal funding, it is unclear whether New York would be willing to make that trade-off. Trump’s bill still needs to pass the Senate and will likely be amended in the process, but if it passes in its current form, it could neutralize the RAISE Act altogether. Until then, both sides of the AI-safety debate are waiting for, and actively trying to influence, Hochul’s decision, knowing that the repercussions may lay the groundwork for the future of AI regulation.
