Dailyza | Tech, Investments, Business & World News
[Image: New York Governor Kathy Hochul speaking at a podium after signing the RAISE Act AI safety law in New York]

Kathy Hochul Signs New York RAISE Act, Tightening AI Safety

21 December 2025 · Politics · 6 Min Read

New York Governor Kathy Hochul has signed the RAISE Act, a sweeping new state law aimed at strengthening AI safety and forcing greater transparency from the largest developers of advanced systems. The move positions New York as the second U.S. state to enact major legislation focused on AI risk controls, following California’s recent push to set a statewide baseline for oversight.

The bill, first passed by state lawmakers in June, had faced months of intense lobbying from the tech industry. According to reporting cited by TechCrunch, Hochul had sought changes that would narrow the measure. Ultimately, she signed the original version while lawmakers agreed to revisit and potentially adopt her requested revisions next year—an outcome that underscores both the political momentum behind AI oversight and the unresolved debate over how strict those rules should be.

What the RAISE Act requires from major AI developers

At the heart of the RAISE Act is a set of disclosure and reporting obligations aimed at large AI developers—companies building models and systems powerful enough to raise concerns about misuse, failures, or unintended harms. The law requires covered developers to publish information about their safety protocols, a transparency step intended to give regulators and the public clearer insight into how companies test, constrain, and monitor high-capability systems.

Just as significant is the law’s incident reporting requirement. Under the RAISE Act, developers must report safety incidents to the state within 72 hours. The 72-hour window mirrors the urgency seen in other regulatory regimes, reflecting a belief among lawmakers that fast-moving AI deployments can create real-world impacts quickly, and that delayed disclosures can leave regulators and affected communities in the dark.

New oversight office inside New York’s financial regulator

The RAISE Act also creates a new office within the New York Department of Financial Services, giving the state a dedicated unit to monitor AI development. Locating the office within DFS signals that lawmakers see AI risk as part of a broader landscape of systemic and consumer harms—particularly as AI tools increasingly touch credit, insurance, fraud detection, and other financial services that fall under the department’s traditional remit.

For New York, the institutional design matters: DFS is known for active enforcement in banking and insurance. By placing AI monitoring within an agency with a history of supervision, the state is attempting to move beyond voluntary principles and toward operational oversight.

Penalties: up to $1 million, with higher fines for repeat violations

The law includes meaningful enforcement teeth. If companies fail to submit required safety reports or make false statements, they can be fined up to $1 million, with penalties rising to $3 million for subsequent violations. While those numbers may be modest compared with the revenues of the largest AI companies, New York lawmakers are betting that the combination of financial penalties, reputational risk, and the compliance burden of ongoing reporting will push developers to formalize internal safety processes.

In practice, the deterrent effect may depend on how aggressively the state enforces the law and how clearly it defines what constitutes a reportable “safety incident.” Those details will influence whether the RAISE Act becomes a narrow compliance exercise or a broader lever for shaping how frontier AI systems are deployed.

Hochul points to California and criticizes federal inaction

In announcing the signing, Hochul explicitly framed New York’s move as part of a growing state-led effort to establish consistent guardrails while Washington debates its next steps. “This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” Hochul said.

That message highlights a central tension in U.S. tech policy: states are increasingly stepping in where Congress has not produced comprehensive rules. For AI developers operating nationally, this can create a patchwork of requirements. For lawmakers, it is also a way to force momentum, using large state economies to nudge industry standards in the absence of federal legislation.

Lawmakers tout toughness as industry lobbying intensifies

Supporters of the bill cast the signing as a direct rebuttal to industry pressure. State Senator Andrew Gounardes, one of the RAISE Act’s sponsors, celebrated the outcome publicly, arguing that tech companies had tried to weaken the measure. “Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country,” he wrote.

The political messaging reflects a broader shift: AI regulation is no longer confined to academic debate or federal agency white papers. It has become a public-facing issue where elected officials increasingly position themselves as defenders of consumers, workers, and public safety against opaque technologies and powerful corporations.

How Big Tech and leading AI labs are responding

Notably, some leading AI companies have signaled support for New York’s approach while still urging federal action. Both OpenAI and Anthropic expressed support for the bill while calling for national legislation, a stance that aligns with the industry’s preference for uniform rules rather than multiple state-by-state frameworks.

In comments reported by the New York Times and referenced by TechCrunch, Anthropic’s head of external affairs Sarah Heck pointed to the broader significance of two major states moving in the same direction, suggesting that state transparency laws could accelerate a national policy conversation. For companies building frontier models, the emerging reality is that transparency and incident reporting are becoming baseline expectations rather than optional commitments.

What happens next for New York’s AI rules

While Hochul signed the original bill, the agreement to consider revisions next year means the RAISE Act may still evolve. Key questions include how narrowly “large AI developers” are defined, what qualifies as a reportable incident, and whether future amendments adjust the scope to reduce burdens on smaller firms or open-source projects.

For now, New York has sent a clear signal: in the absence of federal consensus, states with outsized economic influence are willing to set enforceable standards for AI transparency and safety incident reporting. As California and New York align on core principles, the pressure increases on Congress and federal regulators to decide whether to harmonize these rules nationally—or risk letting state laws define the country’s de facto AI governance model.

By Aron Bowers
