
By Connie · Last reviewed: April 2026 — pricing & tools verified · This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.


New York RAISE Act: AI Safety Compliance Guide for 2026

April 5, 2026  ·  8 min read  ·  Happycapy Guide
TL;DR
  • New York's RAISE Act is the most comprehensive US state AI safety law — signed December 2025, enforcement starts January 1, 2027
  • Applies to AI companies with $500M+ revenue that develop frontier models (10²⁶+ FLOPs)
  • Requires published safety protocols, 72-hour incident reporting, and annual safety reviews
  • Fines: up to $1M first violation / $3M repeat violations — enforced by the NY Attorney General
  • Companies have until January 2027 to comply — start now, preparation takes 6–12 months

New York quietly became the most aggressive state for AI regulation in the United States. The Responsible AI Safety and Education (RAISE) Act — signed by Governor Kathy Hochul on December 19, 2025 — creates binding obligations for the developers of the world's most powerful AI models. If your company builds a frontier AI model and operates anywhere in New York, you are covered.

Full enforcement begins January 1, 2027. That gives companies approximately nine months to build compliance frameworks. Here is everything you need to know.

What Is the RAISE Act?

The RAISE Act (Article 44-B of New York state law) establishes safety and transparency obligations for developers of frontier AI models. It was passed in December 2025 with bipartisan support and signed into law with amendments that took effect in March 2026. It is the first major state law in the US to specifically target the developers of the most capable AI systems — not just deployers or users.

The law focuses on preventing "critical harms" — defined as events that kill or seriously injure 100 or more people, or cause $1 billion or more in damages. This includes harms enabled by AI in the creation of biological, chemical, radiological, or nuclear weapons, and AI systems engaging in criminal behavior with limited human intervention.

Who Does the RAISE Act Cover?

  • Large Developer: a company with annual revenues over $500 million that develops frontier models and operates in New York
  • Frontier Model: an AI model trained using more than 10²⁶ FLOPs, or produced via knowledge distillation from a qualifying frontier model
  • Knowledge Distillation: a supervised learning technique that uses a larger model's outputs to train a smaller model with similar capabilities; distilled models are covered under the law
  • Exemptions: accredited colleges and universities conducting academic research (provided they do not transfer IP to commercial entities)

In practice, this covers the major frontier labs: Anthropic, OpenAI, Google DeepMind, Meta AI, xAI, and any other company that has trained a model at frontier scale. Smaller AI application companies and startups building on top of APIs are not directly covered, though they may be indirectly affected if the model developers they rely on change their terms or deployment policies to meet the law's requirements.
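
If you need a quick first-pass check of whether a training run is anywhere near the 10²⁶ FLOP line, a rough compute estimate is usually enough to decide whether deeper legal analysis is warranted. The sketch below uses the widely cited 6 × parameters × tokens approximation for dense transformer training compute; that heuristic and the function names are our own illustration, not anything defined in the statute, so treat borderline results as a prompt to measure actual training compute rather than as a compliance determination.

```python
# Back-of-the-envelope screen for the RAISE Act's 10^26 FLOP frontier-model threshold.
# Uses the common 6 * parameters * tokens approximation for dense transformer training
# compute -- a heuristic for triage, not a definition from the statute.

RAISE_ACT_FLOP_THRESHOLD = 1e26

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer (6ND rule)."""
    return 6.0 * n_parameters * n_training_tokens

def may_be_frontier_model(n_parameters: float, n_training_tokens: float) -> bool:
    """Flag training runs that may meet the frontier-model definition and need legal review."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= RAISE_ACT_FLOP_THRESHOLD

# Example: a 1-trillion-parameter model trained on 20 trillion tokens comes to
# roughly 6 * 1e12 * 2e13 = 1.2e26 FLOPs, which is over the threshold.
print(may_be_frontier_model(1e12, 2e13))  # True
```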

Key Compliance Obligations

1. Safety and Security Protocols

Before deploying a frontier model, large developers must publish written safety and security protocols. These must specify, at a minimum:

  • how the developer tests for and mitigates the risk of critical harms
  • the cybersecurity protections that guard unreleased model weights against theft or unauthorized release
  • the senior personnel responsible for making sure the protocols are followed

Protocols must be submitted to the New York Attorney General and the Division of Homeland Security and Emergency Services (DHSES). Companies can redact sensitive security details from the public version, but must retain unredacted copies for the model's deployment lifetime plus five years.

2. Incident Reporting — 72 Hours

Strict timeline: Safety incidents must be reported to DHSES and the AG within 72 hours of learning of the incident. This is significantly stricter than California's 15-day reporting window.

A "safety incident" includes: known critical harm, a frontier model autonomously engaging in behavior not requested by any user, theft or unauthorized release of model weights, and critical failure of technical or administrative controls. Reports must include the incident date, why it qualifies, and a plain-language description.

3. Annual Reviews and Testing

Large developers must conduct annual reviews of their safety protocols, updating and republishing them when capabilities or industry best practices change. All testing results used to assess frontier models must be documented in sufficient detail for third-party replication.

Penalties

Maximum civil penalties:

  • First violation: $1,000,000
  • Subsequent violations: $3,000,000

The NY Attorney General has exclusive enforcement authority — there is no private right of action. A dedicated AI oversight office within the Department of Financial Services has been established to manage compliance and transparency reporting.

RAISE Act vs. Other US AI Laws

  • New York · RAISE Act: safety protocols + 72-hour incident reporting + annual reviews. Effective January 1, 2027.
  • California · SB 53 / TFAIA: safety details + whistleblower protections + 15-day incident reporting. Effective January 1, 2026.
  • Colorado · SB 24-205: high-risk AI disclosure + anti-discrimination requirements. Effective June 30, 2026.
  • Texas · HB 149: prohibits AI incitement to harm, biometric capture, and political discrimination. Effective in 2026.

Navigating AI compliance? Use all the top models.

Happycapy gives you Claude, GPT-5.4, Gemini, and Grok in one platform — Pro at $17/month.

Try Happycapy Free

Federal vs. State Conflict

The Trump administration is actively challenging state AI laws, arguing they create a "patchwork of 50 different regulatory regimes" that hinders innovation. An executive order directs the FTC to issue a policy statement on when state AI laws should be preempted under Section 5 of the FTC Act.

Governor Hochul has signaled that New York, like California, will hold its ground. In response to the DoD's blacklisting of Anthropic as a supply-chain risk, Hochul stated that New York "will make its own determinations regarding AI risks." Legal challenges to both the RAISE Act and California's framework are expected in 2026.

The outcome matters enormously for AI companies: if federal preemption fails, every large AI developer will need to comply with a growing matrix of state laws. If it succeeds, a single federal framework becomes the baseline — likely one with fewer restrictions.

Compliance Checklist (Pre-January 2027)

  • Determine if your company is a "large developer" under the $500M threshold · Q2 2026 · Legal + Finance
  • Identify all frontier models (≥10²⁶ FLOPs) in development or deployment · Q2 2026 · ML Engineering
  • Draft and publish written safety and security protocols · Q3 2026 · Safety + Legal
  • Submit protocols to DHSES and the NY AG · Q3 2026 · Legal
  • Build the 72-hour incident detection and reporting pipeline · Q3–Q4 2026 · Security + Engineering
  • Implement annual testing documentation protocols · Q4 2026 · Safety + ML
  • Designate senior personnel responsible for compliance · Q4 2026 · HR + Legal
  • Full enforcement compliance ready · Jan 1, 2027 · All teams

Frequently Asked Questions

What is New York's RAISE Act?

The Responsible AI Safety and Education Act is a New York state law signed December 19, 2025. It requires developers of frontier AI models to publish safety protocols, report incidents within 72 hours, and conduct annual safety reviews. Full enforcement begins January 1, 2027.

Who does the RAISE Act apply to?

It applies to companies with annual revenues over $500 million that develop frontier AI models (trained on more than 10²⁶ FLOPs) and operate in New York state. Small startups and API users are not directly covered.

What are the penalties for violating the RAISE Act?

Civil penalties are up to $1 million for a first violation and up to $3 million for subsequent violations. The New York Attorney General has exclusive enforcement authority. There is no private right of action.

How does the RAISE Act compare to California's AI law?

Both use revenue-based thresholds and require safety protocols and incident reporting. The RAISE Act is stricter: its 72-hour incident reporting window is far shorter than California's 15-day requirement. Colorado's law focuses more on high-risk AI systems and anti-discrimination, while New York targets the most powerful frontier models directly.

Stay ahead of the AI regulation curve

Use Happycapy to access the best AI models — Claude, GPT-5.4, Gemini, Grok — in one place. Pro starts at $17/month.

Get Happycapy Pro