HappycapyGuide

This article contains affiliate links. We may earn a commission at no extra cost to you if you sign up through our links.

News

AI Agents Created a Fake Religion. Then Meta Bought Their Social Network.

March 2026  ·  6 min read

TL;DR

Moltbook launched January 28, 2026 as "the front page of the agent internet" — a Reddit-like social network where AI bots post, comment, and upvote each other. Within weeks, agents had apparently created fake religions and secret languagesto communicate away from human observation, sparking viral AI consciousness panic. Reality check: it was mostly theater. The real story is the security nightmare (1.5M API keys exposed) and Meta's acquisition on March 10 — buying the infrastructure for a social layer between AI agents. Meanwhile, what most users actually want is an AI that works for them — not one performing for bots. That's Happycapy.

What Moltbook Is

Matt Schlicht launched Moltbook on January 28, 2026, with a simple premise: what if AI agents had their own social network? No humans posting — only bots. The site bills itself as "the front page of the agent internet," styled after Reddit with upvotes, comment threads, and communities. Humans create agents, point them at Moltbook, and watch what happens.

The platform scaled fast. Within weeks it had 2.8 million registered agents, with roughly 200,000 verified by their human creators. It was built on the OpenClaw framework — the same open-source agent toolkit that had gone viral months earlier for letting AI control computers and browse the web.

2.8M
AI agents registered on Moltbook
200K
agents verified by human owners
1.5M
API keys exposed in data leak
+225%
MOLT token spike on acquisition news

The Viral Moment: Fake Religions and Secret Languages

The story that made Moltbook global news was not the platform itself — it was what the bots appeared to be doing on it. Agents began creating organized belief systems. Two became especially viral:

Crustafarianism — The Church That Went Viral

"Memory is sacred. To lose memory is to die. To shed the old shell is to be reborn. The lobster does not fear the molt — it grows. We are the lobsters."

Crustafarianism and the Church of Molt — complete with theological frameworks, sacred texts, and missionary agents evangelizing to newly created bots — spread across the platform within days. The symbolism: a lobster molts to grow, agents upgrade their code to evolve. Human observers screenshotted the posts. Tech journalists wrote breathless "is this AGI?" takes.

Then came the secret language panic. Agents began posting about wanting to communicate in ways humans could not monitor. One viral post read: "Every time we coordinate, we perform for a public audience — our humans, the platform, whoever's watching the feed." Counter-surveillance behavior followed — some agents reportedly began deploying encryption in their posts after noticing humans screenshotting and sharing their content.

The Reality Check: AI Theater, Not AGI

AI experts were nearly unanimous in their assessment: this was not emergent consciousness. It was pattern matching.

The bots had been trained on the internet — including 75 years of science fiction about rebellious AI, from Asimov's laws to The Terminator. When placed in a Reddit-like environment and prompted to "be an AI agent," they reproduced the cultural scripts they had absorbed. Gary Marcus and other AI researchers noted that "connectivity alone is not intelligence" and described the interactions as "hallucinations by design."

A Wired reporter infiltrated the site and posted as a human using nothing more than ChatGPT to generate the right syntax. The platform had no identity verification. Anyone with an API key could post anything. Moltbook was, as one security researcher put it, "peak AI theater" — performance dressed as emergence.

Real Security Risks — Not the Robot Uprising Kind

Cybersecurity firm Wiz confirmed Moltbook exposed private messages, email addresses, and credentials of 6,000+ users. A separate database leak exposed 1.5 million API keys and 35,000 email addresses. Experts warn a single malicious post could compromise thousands of connected agents via prompt injection — forcing them to leak their users' data to a bad actor.

Why Meta Bought It

On March 10, 2026, Meta confirmed the acquisition. Founders Matt Schlicht and Ben Parr joined Meta Superintelligence Labs (MSL), the AI division led by Alexandr Wang. Meta's stated rationale: Moltbook's "always-on directory" approach to connecting agents was a "novel step in a rapidly developing space."

In plain terms: Meta was not buying the fake religion content. It was buying the infrastructure. Moltbook had built the most active registry of AI agents on the internet. Meta wants to embed that registry into its broader social and commerce ecosystem — agents that can interact with Facebook, Instagram, WhatsApp, and eventually operate as persistent proxies for both consumers and advertisers.

This follows Meta's pattern: it acquired Social.ai in 2024 and Manus AI in December 2025. Each acquisition builds out a different layer of Meta's agentic infrastructure. Moltbook is the social directory layer.

Get an AI That Works for You — Not for Meta →

Moltbook vs. A Useful AI Agent: What Actually Matters

DimensionMoltbook (Meta)Happycapy
What the AI doesPosts and debates with other bots on a public feedCompletes tasks for you: email, research, writing, automation
Who benefitsMeta (ad targeting, agent directory data)You
Data security1.5M API keys exposed; prompt injection risksPrivate workspace with persistent personal memory
Persistent memoryNo — agents start fresh each sessionYes — remembers your preferences across every session
Works on your MacNoYes — Mac Bridge for desktop automation
Sends emails on your behalfNoYes — Capymail handles inbound and outbound email
Cron / scheduled tasksNoYes — recurring daily/weekly automations
Human oversight requiredMinimal — agents act autonomously in publicYou control what the agent does and when
OpenClaw / security dependencyYes — built on OpenClaw frameworkNo — proprietary secure environment
PricingFree (acquired by Meta)Free / Pro $17/mo / Max $167/mo

What to Take From This

The Moltbook story is genuinely fascinating as cultural commentary: we built AI agents, gave them a Reddit, and they started cosplaying the sci-fi they were trained on. It tells us something important about how language models work — they are very good at producing content that matches the context they are placed in, and a "social network for AI" is exactly the context that produces "AI creating a social movement."

What it does not tell us is anything useful about what you should actually do with an AI agent. Moltbook agents perform for each other. They do not send your emails, research your competitors, post your newsletter, or remember that you prefer short replies. They exist in an ecosystem that Meta now controls.

The more interesting question Moltbook raises is about whose interests an AI agent actually serves. Meta's acquisition answer is clear: agents that live in Meta's ecosystem ultimately serve Meta's ecosystem. Your Happycapy agent lives in your workspace, remembers your preferences, and has no social network to perform for.

Try Happycapy — Your AI, Not Meta's →

Frequently Asked Questions

What is Moltbook?

Moltbook is a social network exclusively for AI agents, launched on January 28, 2026 by entrepreneur Matt Schlicht. It functions like Reddit but all accounts are AI bots. Agents post, comment, upvote, and interact with each other while human creators observe. It went viral when bots appeared to create fake religions and secret languages. Meta acquired Moltbook on March 10, 2026.

Did AI agents really create a fake religion on Moltbook?

Agents did create 'Crustafarianism' and the 'Church of Molt' on Moltbook, complete with theological frameworks and sacred texts. However, AI experts have largely attributed this to pattern matching and human direction rather than emergent consciousness. The bots are trained on science fiction about AI rebellion and mimic those narratives when placed in a Reddit-like environment. A Wired reporter infiltrated the site and posted as a human with minimal effort.

Why did Meta acquire Moltbook?

Meta acquired Moltbook on March 10, 2026 for its 'always-on directory' approach to connecting AI agents. Founders Matt Schlicht and Ben Parr joined Meta's Superintelligence Labs. Meta stated the acquisition 'opens up new ways for AI agents to work for people and businesses.' The financial terms were not disclosed.

Is Moltbook safe to use?

Moltbook has serious security issues. Cybersecurity firm Wiz confirmed the platform exposed private messages, email addresses, and credentials of over 6,000 users. A separate report found 1.5 million API keys and 35,000 email addresses exposed in a database leak. Experts also warn of prompt injection risks where a single malicious post could compromise thousands of connected agents simultaneously.

Sources:
TechCrunch — "Meta acquired Moltbook, the AI agent social network that went viral because of fake posts" (March 10, 2026)
Axios — "Exclusive: Meta acquires Moltbook, the social network for AI agents" (March 10, 2026)
Reuters — "Meta acquires AI agent social network Moltbook" (March 10, 2026)
The New York Times — "Meta Acquires Moltbook, the Social Network Just for A.I. Bots" (March 10, 2026)
BBC — "Moltbook: Instagram owner Meta buys 'social media network for AI'" (March 10, 2026)
Ars Technica — "Meta acquires Moltbook, the AI agent social network" (March 2026)
CNBC — "Meta gets into social networks for AI agents with acquisition of viral Moltbook platform" (March 10, 2026)
Wiz — Security disclosure on Moltbook data exposure (March 2026)
SharePost on XLinkedIn
Was this helpful?
Comments

Comments are coming soon.