Dailyza | Tech, Investments, Business & World News
Image: New York Governor Kathy Hochul speaking at a podium after signing the RAISE Act AI safety law in New York

Kathy Hochul Signs New York RAISE Act, Tightening AI Safety

21 December 2025 · Politics

New York Governor Kathy Hochul has signed the RAISE Act, a sweeping new state law aimed at strengthening AI safety and forcing greater transparency from the largest developers of advanced systems. The move positions New York as the second U.S. state to enact major legislation focused on AI risk controls, following California’s recent push to set a statewide baseline for oversight.

The bill, first passed by state lawmakers in June, had faced months of intense lobbying from the tech industry. According to reporting cited by TechCrunch, Hochul had sought changes that would narrow the measure. Ultimately, she signed the original version while lawmakers agreed to revisit and potentially adopt her requested revisions next year—an outcome that underscores both the political momentum behind AI oversight and the unresolved debate over how strict those rules should be.

What the RAISE Act requires from major AI developers

At the heart of the RAISE Act is a set of disclosure and reporting obligations aimed at large AI developers—companies building models and systems powerful enough to raise concerns about misuse, failures, or unintended harms. The law requires covered developers to publish information about their safety protocols, a transparency step intended to give regulators and the public clearer insight into how companies test, constrain, and monitor high-capability systems.

Just as significant is the law’s incident reporting requirement. Under the RAISE Act, developers must report safety incidents to the state within 72 hours. The 72-hour window mirrors the urgency seen in other regulatory regimes, reflecting a belief among lawmakers that fast-moving AI deployments can create real-world impacts quickly, and that delayed disclosures can leave regulators and affected communities in the dark.

New oversight office inside New York’s financial regulator

The RAISE Act also creates a new office within the New York Department of Financial Services, giving the state a dedicated unit to monitor AI development. Locating the office within DFS signals that lawmakers see AI risk as part of a broader landscape of systemic and consumer harms—particularly as AI tools increasingly touch credit, insurance, fraud detection, and other financial services that fall under the department’s traditional remit.

For New York, the institutional design matters: DFS is known for active enforcement in banking and insurance. By placing AI monitoring within an agency with a history of supervision, the state is attempting to move beyond voluntary principles and toward operational oversight.

Penalties: up to $1 million, with higher fines for repeat violations

The law includes meaningful enforcement teeth. If companies fail to submit required safety reports or make false statements, they can be fined up to $1 million, with penalties rising to $3 million for subsequent violations. While those numbers may be modest compared with the revenues of the largest AI companies, New York lawmakers are betting that the combination of financial penalties, reputational risk, and the compliance burden of ongoing reporting will push developers to formalize internal safety processes.

In practice, the deterrent effect may depend on how aggressively the state enforces the law and how clearly it defines what constitutes a reportable “safety incident.” Those details will influence whether the RAISE Act becomes a narrow compliance exercise or a broader lever for shaping how frontier AI systems are deployed.

Hochul points to California and criticizes federal inaction

In announcing the signing, Hochul explicitly framed New York’s move as part of a growing state-led effort to establish consistent guardrails while Washington debates its next steps. “This law builds on California’s recently adopted framework, creating a unified benchmark among the country’s leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public,” Hochul said.

That message highlights a central tension in U.S. tech policy: states are increasingly stepping in where Congress has not produced comprehensive rules. For AI developers operating nationally, this can create a patchwork of requirements. For lawmakers, it is also a way to force momentum, using large state economies to nudge industry standards in the absence of federal legislation.

Lawmakers tout toughness as industry lobbying intensifies

Supporters of the bill cast the signing as a direct rebuttal to industry pressure. State Senator Andrew Gounardes, one of the RAISE Act’s sponsors, celebrated the outcome publicly, arguing that tech companies had tried to weaken the measure. “Big Tech thought they could weasel their way into killing our bill. We shut them down and passed the strongest AI safety law in the country,” he wrote.

The political messaging reflects a broader shift: AI regulation is no longer confined to academic debate or federal agency white papers. It has become a public-facing issue where elected officials increasingly position themselves as defenders of consumers, workers, and public safety against opaque technologies and powerful corporations.

How Big Tech and leading AI labs are responding

Notably, some leading AI companies have signaled support for New York’s approach while still urging federal action. Both OpenAI and Anthropic expressed support for the bill while calling for national legislation, a stance that aligns with the industry’s preference for uniform rules rather than multiple state-by-state frameworks.

In comments reported by the New York Times and referenced by TechCrunch, Anthropic’s head of external affairs Sarah Heck pointed to the broader significance of two major states moving in the same direction, suggesting that state transparency laws could accelerate a national policy conversation. For companies building frontier models, the emerging reality is that transparency and incident reporting are becoming baseline expectations rather than optional commitments.

What happens next for New York’s AI rules

While Hochul signed the original bill, the agreement to consider revisions next year means the RAISE Act may still evolve. Key questions include how narrowly “large AI developers” are defined, what qualifies as a reportable incident, and whether future amendments adjust the scope to reduce burdens on smaller firms or open-source projects.

For now, New York has sent a clear signal: in the absence of federal consensus, states with outsized economic influence are willing to set enforceable standards for AI transparency and safety incident reporting. As California and New York align on core principles, the pressure increases on Congress and federal regulators to decide whether to harmonize these rules nationally—or risk letting state laws define the country’s de facto AI governance model.

By Aron Bowers

© 2026 Dailyza