Dailyza | Tech, Investments, Business & World News
[Image: New York Governor Kathy Hochul speaking at a podium after signing the RAISE Act AI safety law in New York]

Kathy Hochul Signs New York RAISE Act, Tightening AI Safety

21 December 2025 | Politics | 6 Mins Read

New York Governor Kathy Hochul has signed the RAISE Act, a sweeping new state law aimed at strengthening AI safety and forcing greater transparency from the largest developers of advanced systems. The move positions New York as the second U.S. state to enact major legislation focused on AI risk controls, following California’s recent push to set a statewide baseline for oversight.

The bill, first passed by state lawmakers in June, had faced months of intense lobbying from the tech industry. According to reporting cited by TechCrunch, Hochul had sought changes that would narrow the measure. Ultimately, she signed the original version while lawmakers agreed to revisit and potentially adopt her requested revisions next year—an outcome that underscores both the political momentum behind AI oversight and the unresolved debate over how strict those rules should be.

What the RAISE Act requires from major AI developers

At the heart of the RAISE Act is a set of disclosure and reporting obligations aimed at large AI developers—companies building models and systems powerful enough to raise concerns about misuse, failures, or unintended harms. The law requires covered developers to publish information about their safety protocols, a transparency step intended to give regulators and the public clearer insight into how companies test, constrain, and monitor high-capability systems.

Just as significant is the law’s incident reporting requirement. Under the RAISE Act, developers must report safety incidents to the state within 72 hours. The 72-hour window mirrors the urgency seen in other regulatory regimes, reflecting a belief among lawmakers that fast-moving AI deployments can create real-world impacts quickly, and that delayed disclosures can leave regulators and affected communities in the dark.

New oversight office inside New York’s financial regulator

The RAISE Act also creates a new office within the New York Department of Financial Services, giving the state a dedicated unit to monitor AI development. Locating the office within DFS signals that lawmakers see AI risk as part of a broader landscape of systemic and consumer harms—particularly as AI tools increasingly touch credit, insurance, fraud detection, and other financial services that fall under the department’s traditional remit.

For New York, the institutional design matters: DFS is known for active enforcement in banking and insurance. By placing AI monitoring within an agency with a history of supervision, the state is attempting to move beyond voluntary principles and toward operational oversight.

Penalties: up to $1 million, with higher fines for repeat violations

The law includes meaningful enforcement teeth. If companies fail to submit required safety reports or make false statements, they can be fined up to $1 million, with penalties rising to $3 million for subsequent violations. While those numbers may be modest compared with the revenues of the largest AI companies, New York lawmakers are betting that the combination of financial penalties, reputational risk, and the compliance burden of ongoing reporting will push developers to formalize internal safety processes.

In practice, the deterrent effect may depend on how aggressively the state enforces the law and how clearly it defines what constitutes a reportable “safety incident.” Those details will influence whether the RAISE Act becomes a narrow compliance exercise or a broader lever for shaping how frontier AI systems are deployed.

Hochul points to California and criticizes federal inaction

In announcing the signing, Governor Kathy Hochul explicitly framed New York’s move as part of a growing state-led effort to establish consistent guardrails while Washington debates its next steps. “This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” Hochul said.

That message highlights a central tension in U.S. tech policy: states are increasingly stepping in where Congress has not produced comprehensive rules. For AI developers operating nationally, this can create a patchwork of requirements. For lawmakers, it is also a way to force momentum, using large state economies to nudge industry standards in the absence of federal legislation.

Lawmakers tout toughness as industry lobbying intensifies

Supporters of the bill cast the signing as a direct rebuttal to industry pressure. State Senator Andrew Gounardes, one of the RAISE Act’s sponsors, celebrated the outcome publicly, arguing that tech companies had tried to weaken the measure. “Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country,” he wrote.

The political messaging reflects a broader shift: AI regulation is no longer confined to academic debate or federal agency white papers. It has become a public-facing issue where elected officials increasingly position themselves as defenders of consumers, workers, and public safety against opaque technologies and powerful corporations.

How Big Tech and leading AI labs are responding

Notably, some leading AI companies have signaled support for New York’s approach while still urging federal action. Both OpenAI and Anthropic expressed support for the bill while calling for national legislation, a stance that aligns with the industry’s preference for uniform rules rather than multiple state-by-state frameworks.

In comments reported by the New York Times and referenced by TechCrunch, Anthropic’s head of external affairs Sarah Heck pointed to the broader significance of two major states moving in the same direction, suggesting that state transparency laws could accelerate a national policy conversation. For companies building frontier models, the emerging reality is that transparency and incident reporting are becoming baseline expectations rather than optional commitments.

What happens next for New York’s AI rules

While Hochul signed the original bill, the agreement to consider revisions next year means the RAISE Act may still evolve. Key questions include how narrowly “large AI developers” are defined, what qualifies as a reportable incident, and whether future amendments adjust the scope to reduce burdens on smaller firms or open-source projects.

For now, New York has sent a clear signal: in the absence of federal consensus, states with outsized economic influence are willing to set enforceable standards for AI transparency and safety incident reporting. As California and New York align on core principles, the pressure increases on Congress and federal regulators to decide whether to harmonize these rules nationally—or risk letting state laws define the country’s de facto AI governance model.

Aron Bowers
© 2026 Dailyza