By Connie · Last reviewed: April 2026 — pricing & tools verified · This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.
Meta's Next AI Models Are Being Built by the Scale AI Founder — And They'll Be Open Source
April 7, 2026 · 8 min read · Happycapy Guide
Who Is Alexandr Wang and Why Does This Matter?
Alexandr Wang founded Scale AI in 2016 at age 19. He built it into the dominant AI data labeling company — the infrastructure behind OpenAI, the U.S. Defense Department, and dozens of frontier labs. When Meta acquired Scale AI for approximately $15 billion in 2025, Wang became the head of Meta's internal Superintelligence unit: the team responsible for building the company's most advanced AI models.
This is significant because Wang is not a generalist executive. He spent a decade building the training data pipelines and evaluation systems that define model quality. His first models at Meta are expected to reflect that expertise — better calibrated, more reliably evaluated, and designed from the start for instruction-following rather than just benchmark performance.
For context: Meta's Llama 4 Maverick and Scout launched in early 2026 with impressive paper benchmarks but disappointed developers in practice. The instruction-following was unreliable, long-context performance was inconsistent, and the open-source community found the models hard to fine-tune effectively. Wang's team is building what comes next.
What We Know About the Models
Based on Axios's April 6, 2026 report and subsequent coverage, here is what is confirmed versus speculated:
| What We Know | Source | Confidence |
|---|---|---|
| Models are being developed under Alexandr Wang's Superintelligence unit | Axios (April 6, 2026) | Confirmed |
| Open-source versions of some models are planned | Axios (April 6, 2026) | Confirmed |
| The largest models will remain proprietary | Axios (April 6, 2026) | Confirmed |
| Internal delays and leadership disagreements over readiness | Multiple sources | Reported |
| Release window: mid-to-late 2026 | Inferred from timelines | Estimated |
| Model names / architecture details | — | Not yet disclosed |
The Open Source Strategy Explained
Meta's hybrid approach (open source for smaller models, proprietary at the frontier) is a deliberate strategic choice, not a concession. Here is the logic:
Why open-source the smaller models:
- Developer ecosystem: Open models build an enormous community of fine-tuners, integrators, and toolbuilders, all of whom generate data, feedback, and goodwill that benefits Meta's larger closed models.
- Cost barrier reduction: Companies that cannot afford OpenAI or Anthropic API costs can run Meta's open models, expanding the total addressable market for AI-assisted software.
- Benchmark visibility: Open models get tested by the entire community, which surfaces quality issues faster and builds credibility when the models perform well.
- Counter-narrative: Every time Meta releases open-source models, it pressures OpenAI and Anthropic on pricing. That is an intentional competitive move.
Why keep the frontier models proprietary:
- Safety risks: After DeepSeek demonstrated that open-weight models can be exploited or have safety guardrails stripped out, Meta became more cautious about fully releasing frontier-scale models.
- Competitive advantage: Meta needs proprietary frontier models to compete with GPT-5.4 and Claude Opus 4.6 in enterprise and consumer products (Meta AI, Ray-Ban glasses, Instagram AI).
- Revenue model: Closed frontier models can be monetized via API access, a revenue stream Meta is building to offset its massive infrastructure spend.
How Meta's Models Compare to Claude and GPT in 2026
| Model | Company | Open Source? | Best For | Access via Happycapy |
|---|---|---|---|---|
| Wang Models (upcoming) | Meta | Partial (small/mid) | TBD — expected coding + reasoning | Coming H2 2026 |
| Llama 4 Maverick | Meta | Yes | Cost-sensitive inference; self-hosting | Available now |
| Claude Opus 4.6 | Anthropic | No | Complex reasoning; writing; enterprise | Available now |
| GPT-5.4 | OpenAI | No | Tool use; structured output; broad tasks | Available now |
| Gemini 3.1 Pro | Google | No | Multimodal; search; long context | Available now |
Happycapy Pro ($17/mo) gives you access to all of the models above in one interface. When Meta's new Wang-built models launch, they'll appear there automatically.
What Happened to Llama 4?
Llama 4 Scout (109B MoE) and Maverick (400B MoE) launched in March 2026. The benchmark numbers were strong: Maverick scored 52.6% on ARC-AGI-2 and 91.7% on MATH-500. But developer feedback on both models was lukewarm.
The main complaints: inconsistent instruction-following across tasks, unreliable context handling beyond 64k tokens despite a claimed 1M-token context window, and difficulty fine-tuning compared to earlier Llama 3 models. The open-source community also found the MoE architecture harder to run than expected on typical inference hardware.
The underwhelming reception created urgency around the Wang team's models. Internal reports suggest leadership debates over whether to rush a release or hold until the models are genuinely competitive with Claude and GPT-5.4 at the frontier tier.
What This Means for Developers and AI Users
If Meta's upcoming models deliver on quality — and if the open-source versions are genuinely useful for fine-tuning and self-hosting — the competitive dynamics of the AI market shift significantly. Open-source Meta models have historically become the baseline for thousands of fine-tuned domain models: medical, legal, coding, multilingual. A stronger baseline means a stronger ecosystem.
For enterprise buyers, the closed frontier model from Meta would provide a new option outside the OpenAI/Anthropic duopoly. Meta's infrastructure scale means it can offer competitive pricing — and the Scale AI lineage gives it unique advantages in high-quality training data and evaluation methodology.
For AI platform users, the practical short-term move is to use a platform that gives you access the moment these models ship. Tools like Happycapy regularly add new frontier models as they launch, so you don't have to choose between Meta, Anthropic, and OpenAI.
Frequently Asked Questions
Who is Alexandr Wang and why is he building Meta's AI models?
Alexandr Wang founded Scale AI, the leading AI training data company. Meta acquired Scale AI for ~$15 billion in 2025. Wang now leads Meta's Superintelligence unit, responsible for its most advanced models. His first models are expected in H2 2026.
Will Meta's new models be open source?
Partially. Smaller and mid-sized models will be open-sourced. The largest, most capable models will remain proprietary — a deliberate shift from Meta's earlier strategy, driven by safety concerns (post-DeepSeek) and competitive revenue needs.
Why did Llama 4 disappoint developers?
Llama 4's benchmarks were competitive, but real-world instruction-following was unreliable, long-context performance was inconsistent, and the MoE architecture was harder to fine-tune than Llama 3's. Wang's team is reportedly building a clean-slate architecture rather than an incremental Llama 4 update, though no architecture details have been disclosed.
When will Meta's new AI models release?
No official date. Based on the Axios report and internal delay signals, the most likely window is H2 2026 — possibly Q3 2026 if leadership decides quality is sufficient. The open-source versions may release before or alongside the proprietary frontier models.
Sources
- Axios: Meta to open source versions of its next AI models (April 6, 2026)
- Gizmodo: As Meta Flounders, It Reportedly Plans to Open Source Its New AI Models
- The Decoder: Meta plans to open-source parts of its new AI models
- Implicator AI: Meta to Open-Source New AI Models, Keep Largest Proprietary