By Connie · Last reviewed: April 2026 — pricing & tools verified · This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.
New York RAISE Act: AI Safety Compliance Guide for 2026
- New York's RAISE Act is the most comprehensive US state AI safety law — signed December 2025, enforcement starts January 1, 2027
- Applies to AI companies with $500M+ revenue that develop frontier models (10²⁶+ FLOPs)
- Requires published safety protocols, 72-hour incident reporting, and annual safety reviews
- Fines: up to $1M first violation / $3M repeat violations — enforced by the NY Attorney General
- Companies have until January 2027 to comply — start now, preparation takes 6–12 months
New York quietly became the most aggressive state for AI regulation in the United States. The Responsible AI Safety and Education (RAISE) Act — signed by Governor Kathy Hochul on December 19, 2025 — creates binding obligations for the developers of the world's most powerful AI models. If your company builds a frontier AI model and operates anywhere in New York, you are covered.
Full enforcement begins January 1, 2027. That gives companies approximately nine months to build compliance frameworks. Here is everything you need to know.
What Is the RAISE Act?
The RAISE Act (Article 44-B of the New York General Business Law) establishes safety and transparency obligations for developers of frontier AI models. It was passed in December 2025 with bipartisan support and signed into law with amendments that took effect in March 2026. It is the first major state law in the US to specifically target the developers of the most capable AI systems — not just deployers or users.
The law focuses on preventing "critical harms" — defined as events that kill or seriously injure 100 or more people, or cause $1 billion or more in damages. This includes harms enabled by AI in the creation of biological, chemical, radiological, or nuclear weapons, and AI systems engaging in criminal behavior with limited human intervention.
Who Does the RAISE Act Cover?
| Term | Definition |
|---|---|
| Large Developer | Company with annual revenues over $500 million that develops frontier models operating in New York |
| Frontier Model | AI model trained using more than 10²⁶ FLOPs, or produced via knowledge distillation from a qualifying frontier model |
| Knowledge Distillation | Supervised learning technique using a larger model's outputs to train a smaller model with similar capabilities — covered under the law |
| Exemptions | Accredited colleges and universities conducting academic research (without transferring IP to commercial entities) |
In practice, this covers the major frontier labs: Anthropic, OpenAI, Google DeepMind, Meta AI, xAI, and any other company that has trained a model at frontier scale. Smaller AI application companies and startups building on top of APIs are not directly covered — though they may feel indirect effects if upstream model developers tighten usage terms or deployment policies to comply.
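To gauge whether a model clears the 10²⁶-FLOP threshold, a common rule of thumb estimates training compute as roughly 6 × parameters × training tokens for dense transformers. A minimal sketch, assuming that 6ND approximation (which is an industry heuristic, not a statutory definition):

```python
# Rough frontier-model threshold check.
# Assumption: training compute ~= 6 * N * D (parameters * tokens),
# a common estimate for dense transformers -- not the RAISE Act's own test.

FRONTIER_FLOPS = 1e26  # RAISE Act compute threshold

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6ND rule of thumb."""
    return 6 * params * tokens

def is_frontier_model(params: float, tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^26 FLOP threshold."""
    return estimated_training_flops(params, tokens) >= FRONTIER_FLOPS

# Example: a 1-trillion-parameter model trained on 20 trillion tokens
print(estimated_training_flops(1e12, 20e12))  # 1.2e+26
print(is_frontier_model(1e12, 20e12))         # True
```

Note that distilled models can also qualify regardless of their own training compute, so a FLOP estimate alone does not settle coverage.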
Key Compliance Obligations
1. Safety and Security Protocols
Before deploying a frontier model, large developers must publish written safety and security protocols. These must specify:
- Reasonable protections to reduce the risk of critical harm
- Administrative, technical, and physical cybersecurity safeguards
- Testing procedures for evaluating unreasonable risks, including misuse and evasion scenarios
- Names and roles of senior personnel responsible for compliance
Protocols must be submitted to the New York Attorney General and the Division of Homeland Security and Emergency Services (DHSES). Companies can redact sensitive security details from the public version, but must retain unredacted copies for the model's deployment lifetime plus five years.
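The retention requirement reduces to simple date arithmetic: unredacted protocol copies must survive until five years after the model's deployment ends. A small sketch (the function name is illustrative, not from the statute):

```python
from datetime import date

def retention_end(deployment_end: date) -> date:
    """Date until which unredacted safety protocols must be retained:
    the model's deployment lifetime plus five years after deployment ends.
    Assumes the anniversary date exists (i.e. not Feb 29)."""
    return deployment_end.replace(year=deployment_end.year + 5)

# Example: a model retired on June 30, 2030
print(retention_end(date(2030, 6, 30)))  # 2035-06-30
```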
2. Incident Reporting — 72 Hours
Large developers must report safety incidents to the New York Attorney General and DHSES within 72 hours of learning of them. A "safety incident" includes known critical harm, a frontier model autonomously engaging in behavior not requested by any user, theft or unauthorized release of model weights, and critical failure of technical or administrative controls. Reports must include the incident date, an explanation of why the event qualifies, and a plain-language description.
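A 72-hour window is tight enough that the reporting deadline is worth computing automatically the moment an incident is confirmed. A minimal sketch, assuming illustrative field names (the law specifies the required contents of a report, not a schema):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # RAISE Act reporting deadline

@dataclass
class SafetyIncident:
    """Skeleton of a RAISE Act safety-incident report.

    Required contents per the law: the incident date, why the event
    qualifies as a safety incident, and a plain-language description.
    Field names here are illustrative, not statutory.
    """
    discovered_at: datetime       # when the developer learned of the incident
    qualifying_reason: str        # e.g. "unauthorized release of model weights"
    description: str              # plain-language summary

    @property
    def report_deadline(self) -> datetime:
        """Latest time the report may be filed: discovery + 72 hours."""
        return self.discovered_at + REPORTING_WINDOW

incident = SafetyIncident(
    discovered_at=datetime(2027, 3, 1, 9, 0, tzinfo=timezone.utc),
    qualifying_reason="critical failure of technical controls",
    description="Access controls on model-weight storage failed for two hours.",
)
print(incident.report_deadline)  # 2027-03-04 09:00:00+00:00
```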
3. Annual Reviews and Testing
Large developers must conduct annual reviews of their safety protocols, updating and republishing them when capabilities or industry best practices change. All testing results used to assess frontier models must be documented in sufficient detail for third-party replication.
Penalties
| Violation | Maximum Civil Penalty |
|---|---|
| First violation | $1,000,000 |
| Subsequent violations | $3,000,000 |
The NY Attorney General has exclusive enforcement authority — there is no private right of action. A dedicated AI oversight office within the Department of Financial Services has been established to manage compliance and transparency reporting.
RAISE Act vs. Other US AI Laws
| State | Law | Key Requirement | Effective Date |
|---|---|---|---|
| New York | RAISE Act | Safety protocols + 72-hr incident reporting + annual reviews | Jan 1, 2027 |
| California | SB 53 / TFAIA | Safety details + whistleblower protections + 15-day incident reporting | Jan 1, 2026 |
| Colorado | SB 24-205 | High-risk AI disclosure + anti-discrimination requirements | Jun 30, 2026 |
| Texas | HB 149 | Prohibits AI incitement to harm, biometric capture, political discrimination | Jan 1, 2026 |
Federal vs. State Conflict
The Trump administration is actively challenging state AI laws, arguing they create a "patchwork of 50 different regulatory regimes" that hinders innovation. An executive order directs the FTC to issue a policy statement on when state AI laws should be preempted under Section 5 of the FTC Act.
Governor Hochul has signaled California and New York will hold their ground. In response to the DoD's blacklisting of Anthropic as a supply-chain risk, Hochul stated that New York "will make its own determinations regarding AI risks." Legal challenges to both the RAISE Act and California's framework are expected in 2026.
The outcome matters enormously for AI companies: if federal preemption fails, every large AI developer will need to comply with a growing matrix of state laws. If it succeeds, a single federal framework becomes the baseline — likely one with fewer restrictions.
Compliance Checklist (Pre-January 2027)
| Action | Timeline | Owner |
|---|---|---|
| Determine whether your company qualifies as a "large developer" ($500M+ annual revenue) | Q2 2026 | Legal + Finance |
| Identify all frontier models (≥10²⁶ FLOPs) in development or deployment | Q2 2026 | ML Engineering |
| Draft and publish written safety and security protocols | Q3 2026 | Safety + Legal |
| Submit protocols to DHSES and NY AG | Q3 2026 | Legal |
| Build 72-hour incident detection and reporting pipeline | Q3–Q4 2026 | Security + Engineering |
| Implement annual testing documentation protocols | Q4 2026 | Safety + ML |
| Designate senior personnel responsible for compliance | Q4 2026 | HR + Legal |
| Full enforcement compliance ready | Jan 1, 2027 | All teams |
Frequently Asked Questions
What is the RAISE Act?
The Responsible AI Safety and Education Act is a New York state law signed December 19, 2025. It requires developers of frontier AI models to publish safety protocols, report incidents within 72 hours, and conduct annual safety reviews. Full enforcement begins January 1, 2027.
Who does the RAISE Act apply to?
It applies to companies with annual revenues over $500 million that develop frontier AI models (trained on more than 10²⁶ FLOPs) and operate in New York state. Small startups and API users are not directly covered.
What are the penalties for violating the RAISE Act?
Civil penalties are up to $1 million for a first violation and up to $3 million for subsequent violations. The New York Attorney General has exclusive enforcement authority. There is no private right of action.
How does the RAISE Act compare to other state AI laws?
Both New York and California use revenue-based thresholds and require safety protocols and incident reporting. The RAISE Act is stricter: its 72-hour incident reporting window is far shorter than California's 15-day requirement. Colorado's law focuses more on high-risk AI systems and anti-discrimination, while New York targets the most powerful frontier models directly.