New York Governor Kathy Hochul has signed the RAISE Act, a sweeping new state law aimed at strengthening AI safety and forcing greater transparency from the largest developers of advanced systems. The move positions New York as the second U.S. state to enact major legislation focused on AI risk controls, following California’s recent push to set a statewide baseline for oversight.
The bill, first passed by state lawmakers in June, had faced months of intense lobbying from the tech industry. According to reporting cited by TechCrunch, Hochul had sought changes that would narrow the measure. Ultimately, she signed the original version while lawmakers agreed to revisit and potentially adopt her requested revisions next year—an outcome that underscores both the political momentum behind AI oversight and the unresolved debate over how strict those rules should be.
What the RAISE Act requires from major AI developers
At the heart of the RAISE Act is a set of disclosure and reporting obligations aimed at large AI developers—companies building models and systems powerful enough to raise concerns about misuse, failures, or unintended harms. The law requires covered developers to publish information about their safety protocols, a transparency step intended to give regulators and the public clearer insight into how companies test, constrain, and monitor high-capability systems.
Just as significant is the law’s incident-reporting requirement. Under the RAISE Act, developers must report safety incidents to the state within 72 hours. That window echoes the 72-hour breach-notification deadlines found in other regulatory regimes, such as the GDPR. It reflects a belief among lawmakers that fast-moving AI deployments can create real-world impacts quickly, and that delayed disclosures can leave regulators and affected communities in the dark.
New oversight office inside New York’s financial regulator
The RAISE Act also creates a new office within the New York Department of Financial Services, giving the state a dedicated unit to monitor AI development. Locating the office within DFS signals that lawmakers see AI risk as part of a broader landscape of systemic and consumer harms—particularly as AI tools increasingly touch credit, insurance, fraud detection, and other financial services that fall under the department’s traditional remit.
For New York, the institutional design matters: DFS is known for active enforcement in banking and insurance. By placing AI monitoring within an agency with a history of supervision, the state is attempting to move beyond voluntary principles and toward operational oversight.
Penalties: up to $1 million, with higher fines for repeat violations
The law includes meaningful enforcement teeth. If companies fail to submit required safety reports or make false statements, they can be fined up to $1 million, with penalties rising to $3 million for subsequent violations. While those numbers may be modest compared with the revenues of the largest AI companies, New York lawmakers are betting that the combination of financial penalties, reputational risk, and the compliance burden of ongoing reporting will push developers to formalize internal safety processes.
In practice, the deterrent effect may depend on how aggressively the state enforces the law and how clearly it defines what constitutes a reportable “safety incident.” Those details will influence whether the RAISE Act becomes a narrow compliance exercise or a broader lever for shaping how frontier AI systems are deployed.
Hochul points to California and criticizes federal inaction
In announcing the signing, Hochul explicitly framed New York’s move as part of a growing state-led effort to establish consistent guardrails while Washington debates its next steps. “This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” Hochul said.
That message highlights a central tension in U.S. tech policy: states are increasingly stepping in where Congress has not produced comprehensive rules. For AI developers operating nationally, this can create a patchwork of requirements. For lawmakers, it is also a way to force momentum, using large state economies to nudge industry standards in the absence of federal legislation.
Lawmakers tout toughness as industry lobbying intensifies
Supporters of the bill cast the signing as a direct rebuttal to industry pressure. State Senator Andrew Gounardes, one of the RAISE Act’s sponsors, celebrated the outcome publicly, arguing that tech companies had tried to weaken the measure. “Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country,” he wrote.
The political messaging reflects a broader shift: AI regulation is no longer confined to academic debate or federal agency white papers. It has become a public-facing issue where elected officials increasingly position themselves as defenders of consumers, workers, and public safety against opaque technologies and powerful corporations.
How Big Tech and leading AI labs are responding
Notably, some leading AI companies have signaled support for New York’s approach while still urging federal action: both OpenAI and Anthropic backed the bill while calling for national legislation, a stance consistent with the industry’s preference for uniform rules over multiple state-by-state frameworks.
In comments reported by the New York Times and referenced by TechCrunch, Anthropic’s head of external affairs, Sarah Heck, pointed to the broader significance of two major states moving in the same direction, suggesting that state transparency laws could accelerate a national policy conversation. For companies building frontier models, the emerging reality is that transparency and incident reporting are becoming baseline expectations rather than optional commitments.
What happens next for New York’s AI rules
While Hochul signed the original bill, the agreement to consider revisions next year means the RAISE Act may still evolve. Key questions include how narrowly “large AI developers” are defined, what qualifies as a reportable incident, and whether future amendments adjust the scope to reduce burdens on smaller firms or open-source projects.
For now, New York has sent a clear signal: in the absence of federal consensus, states with outsized economic influence are willing to set enforceable standards for AI transparency and safety incident reporting. As California and New York align on core principles, the pressure increases on Congress and federal regulators to decide whether to harmonize these rules nationally—or risk letting state laws define the country’s de facto AI governance model.

