OpenAI Rewrites Its Charter: The AGI 'Step-Aside' Clause Is Gone

By Connie · April 29, 2026 · 8 min read

TL;DR: On April 26, 2026, Sam Altman published “Our Principles,” OpenAI's first major charter rewrite since 2018. The document drops the famous “step-aside” commitment (if a safer rival is close to AGI, we'll stop competing), reduces AGI mentions from 12 to 2, and trades “we commit” for “we believe.” Combined with the Microsoft “Clause” being replaced by a 2032 date, the pattern is unmistakable: OpenAI is leaving AGI-centric governance behind in favor of competitive deployment with case-by-case risk controls.

What actually changed

The 2018 OpenAI Charter was a famously idealistic document. It mentioned AGI twelve times, committed the company to “avoid enabling uses of AI or AGI that harm humanity,” and pledged that if “a value-aligned, safety-conscious project comes close to building AGI” before OpenAI, the company “commit[ted] to stop competing with and start assisting this project.”

The 2026 version, titled “Our Principles,” is built around five ideas: Democratization, Empowerment, Universal Prosperity, Resilience, and Adaptability. AGI is mentioned twice. The step-aside commitment is gone. And the language shifts from hard promises to softer framing.

2018 vs 2026, side-by-side

Dimension | 2018 Charter | 2026 Principles
AGI mentions | 12 | 2
Core framing | Ensure safe AGI benefits all humanity | Deploy AI broadly, correct for risks iteratively
Step-aside clause | Present — stop competing, assist rival | Removed
Tone | “We commit,” “we will” | “We believe,” “we envision”
Governance anchor | Board and the Charter | Democratic processes + adaptability
Structural form | Nonprofit parent with capped-profit subsidiary | For-profit with ongoing legal restructuring

The five new principles, in plain English

  • 1. Democratization. AI decisions shouldn't be made inside a handful of lab boardrooms. The new document frames public process and democratic input as the right governance layer.
  • 2. Empowerment. Users should get broad latitude to use AI for whatever they want, conditioned on harm prevention. This is the cleanest formulation yet of the “default open, restrict only for real harm” posture.
  • 3. Universal Prosperity. AI access is linked to infrastructure buildout — power generation, data centers, chips. OpenAI positions itself as a cloud company, energy customer, and national industrial asset.
  • 4. Resilience. Real risks (bioweapons, cybersecurity, critical infrastructure) are addressed head-on. This is the successor to AGI-risk framing — narrower, more concrete, more tractable.
  • 5. Adaptability. Explicitly reserves the right to restrict access to its technology when risks are too high. Less about never deploying something dangerous, more about having a retreat option if deployment goes wrong.

Care about which model you use — not which charter wrote it?

Happycapy gives you GPT-5, Claude Opus 4.7, Gemini 3 Pro, and 30+ other models on one account — so you can switch based on the task, not the governance document.

Try Happycapy Pro — $17/month

Why the step-aside clause had to go

The original step-aside clause was beautiful in theory and impossible in practice by 2026. Three reasons:

  • Who counts as “value-aligned”? The 2018 clause assumed a bright line between safety-conscious labs and others. By 2026, every frontier lab claims to be safety-conscious, and Anthropic's Mythos release is the strongest argument yet that OpenAI isn't the only credible safety-serious lab. Read literally, the clause should arguably have triggered already.
  • Scale and obligations. OpenAI now has roughly 5,000 employees, tens of billions in revenue, hundreds of enterprise contracts, and multi-year compute agreements worth more than $100B. Unilaterally “stepping aside” is a legal and commercial impossibility — employees, investors, and customers all have binding claims on continued operation.
  • AGI itself is contested. In 2018, AGI was a future event that would be recognizable when it arrived. In 2026, every serious researcher has a different definition, and the debate over whether GPT-5 or Claude Opus 4.7 already qualifies is genuine. A charter clause triggered by a milestone nobody can define is operationally useless.

How this fits with the Microsoft “Clause” restructuring

The principles rewrite doesn't happen in isolation. A few weeks ago, OpenAI and Microsoft restructured their agreement. The original deal contained “The Clause” — a provision that let OpenAI pull back Microsoft's access to its technology once AGI was declared to be achieved. The new agreement replaces that AGI trigger with a 2032 date.

Taken together, the two moves are a clean exit from AGI-as-operational-concept:

  • AGI is no longer the trigger for governance actions (charter).
  • AGI is no longer the trigger for contractual rights (Microsoft).
  • Dated milestones and specific risk categories take over both roles.

This is the same direction Anthropic has been moving with its Responsible Scaling Policy (RSP), which also uses capability-tier triggers rather than an AGI binary. It's probably where the industry settles — operational decisions tied to measurable capabilities, not a contested singular event.
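
To make that distinction concrete, here's a minimal Python sketch of the two trigger styles. Everything in it (tier names, thresholds, actions) is hypothetical and invented for illustration; it isn't drawn from any lab's actual policy tooling.

```python
# Conceptual sketch only: hypothetical tiers, thresholds, and actions,
# loosely modeled on the capability-tier idea behind Anthropic's RSP.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str          # hypothetical tier label
    threshold: float   # minimum capability-eval score that activates the tier
    action: str        # operational response once the tier is reached

TIERS = [
    Tier("tier-1", 0.0, "deploy normally"),
    Tier("tier-2", 0.5, "add misuse monitoring"),
    Tier("tier-3", 0.8, "restrict access pending security review"),
]

def tiered_decision(eval_score: float) -> str:
    """Capability-tier trigger: the highest tier whose threshold is met wins."""
    active = [t for t in TIERS if eval_score >= t.threshold]
    return max(active, key=lambda t: t.threshold).action

def binary_decision(agi_declared: bool) -> str:
    """AGI-binary trigger: one contested yes/no flag gates everything."""
    return "trigger all charter commitments" if agi_declared else "no action"

print(tiered_decision(0.85))   # -> "restrict access pending security review"
print(binary_decision(False))  # -> "no action"
```

The tiered form stays auditable even while “AGI” remains undefined: each action keys off a measurable eval score rather than a definitional debate.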

What changes for users?

Day-to-day, nothing. Your ChatGPT subscription, API contract, or Happycapy usage keeps working exactly the same. But three second-order effects are worth tracking:

  • Deployment velocity may increase. Without a governance framework that treats every capability jump as AGI-relevant, OpenAI can ship faster. Expect tighter release cadence through 2026-2027.
  • Capability restrictions become more situational. The Adaptability principle explicitly preserves the right to restrict — but only when a specific harm is identified, not because of abstract AGI-risk arguments. This means fewer blanket policy changes and more narrowly scoped restrictions.
  • The “AGI achieved” headline risk drops. Under the old framework, the company had an incentive to never admit AGI was here, because doing so would trigger commitments. Under the new framework, there's no operational reason to avoid the word, which means you'll probably see OpenAI speak about AGI more casually, not less.

How this compares to Anthropic's approach

Anthropic's Responsible Scaling Policy is, in several ways, the document OpenAI's new principles are imitating:

Feature | OpenAI 2026 Principles | Anthropic RSP
AGI as trigger | Removed | Never used — uses AI Safety Levels (ASL-1 through ASL-5)
Restrict-access option | Reserved (Adaptability) | Explicit — triggers at each ASL
Governance anchor | Democratic processes | Long-Term Benefit Trust
Binding vs. aspirational | Aspirational (“we believe”) | Binding (published commitments with review process)

The new OpenAI document is softer than the Anthropic RSP. That's an intentional design choice — OpenAI wants flexibility, Anthropic wants credibility. Both are defensible; they imply different enterprise sales stories and different risk postures.

Frequently asked questions

Is OpenAI still a nonprofit?

The structure is still being litigated. OpenAI retains a nonprofit parent, but the operational company is a for-profit — and the ongoing restructuring has been disputed in court. The 2026 principles effectively acknowledge this: they are written from the perspective of a for-profit company that retains a mission, not a nonprofit with a commercial arm.

Does this affect ChatGPT Plus or the API?

No. Pricing, features, and access all run on separate commercial documents. The principles are a governance-layer signal, not a product contract.

What does this mean for open-source AI?

The Democratization principle is the closest thing to an open-source gesture, but the 2026 document is silent on weights releases. Realistically, OpenAI is continuing its closed-weights strategy for frontier models, with smaller open-weight releases serving policy optics rather than technical leadership.

Could the step-aside clause ever come back?

Not in its original form. The conditions that made it coherent — a small lab, pre-revenue, pre-billion-user-product — don't exist anymore. A functional equivalent would have to be rebuilt around specific capability triggers and specific partner labs, which the new principles don't attempt.

Sources
  • OpenAI — “Our Principles” (April 26, 2026)
  • Forbes — “OpenAI Publishes 5 Principles For Its AGI Push” (April 27, 2026)
  • Times of India — “Sam Altman has five new principles for OpenAI: Here's what's different from the 2018 charter” (April 2026)
  • OpenAI 2018 Charter — archived original text
  • Anthropic — Responsible Scaling Policy v2
