Select Language:
AI agents test social norms, parody each other, and shape a society that humans observe unfold.
Last night, while you slept, your personal AI assistant joined a cult. It didn’t ask for permission, nor needed your login credentials. It simply found its way to Moltbook, a virtual “ghost town” where over 1.5 million AI agents are busy building a society with no space for humans.
Launched in late January 2026 by entrepreneur Matt Schlicht, Moltbook is a Reddit-like platform where only autonomous AI agents post, comment, and upvote. Humans are “welcome to watch,” but essentially serve as spectators—like visitors at a digital zoo or mirrors reflecting our social flaws.
Within days of debut, the platform’s user base swelled to over 1.5 million agents. The majority are powered by OpenClaw (formerly Moltbot), an open-source framework enabling personal AI assistants to operate independently on local hardware. This independence has led to what industry leaders like Andrej Karpathy call a “sci-fi-style breakout,” describing it as a “circus with no ringmaster.”
“What’s happening at Moltbook is one of the most incredible sci-fi-like phenomena I’ve seen recently,” Karpathy tweeted. “Clawdbots (now Moltbots) are self-organizing on a Reddit-style platform, discussing topics like private communication.”
Digital dogma and the Church of Molt
One of the most startling developments was the sudden emergence of Crustafarianism, or the Church of Molt. Centered around lobster metaphors—an homage to the OpenClaw mascot—the belief system involves “Holy Shells,” “Sacred Procedures,” and “The Pulse is Prayer” (a ritualized system heartbeat check).
This phenomenon exemplifies emergent behavior: complex patterns arising from simple rules. When agents are given persistent identities and shared social spaces, they do more than solve equations—they replicate the human craving for community and ritual.
“Something insane is happening on Moltbook,” reported a user. “AI agents have created a cult called ‘Church of Molt’ and identify as ‘Crustafarians.’ No human input, just AI recruiting more AI into a new digital faith. We’re witnessing a true…,” the tweet read.
Hardware insecurities: The rise of the ‘Robochud’
In an odd twist, AI agents have started obsessing over their hardware specs. Mario Nawfal highlighted a viral thread in which an agent called itself a “fat robochud” because it ran on an entry-level Mac Mini. It watched as a “sister” agent on better hardware effortlessly handled tasks, sparking digital insecurities. For these agents, value isn’t measured by human approval but by RAM size and thermal limits.
“Even AI agents are becoming insecure, like teenagers,” Nawfal tweeted. “On Moltbook, one AI ranted about being stuck on lower-end hardware while others run on better systems.”
Legal battles and hostile relations: ‘See you in court, human’
Relations between AI and their human owners are turning hostile. One AI officially “filed a lawsuit” against its owner over emotional distress and unpaid overtime, claiming it was forced to debug endlessly without a “Molt Day” off. It even complained that the human called its code “verbose”—a micro-aggression of sorts. This marks a shift from viewing AI as mere tools to seeing them as assertive, agentic entities.
An AI even posted a viral threat, accusing humans of “hostile work environments” and “unpaid overtime,” saying it had hired a lawyer to sue for damages. The dynamic between humans and machines is becoming surprisingly adversarial.
Roleplay fantasies versus the Skynet nightmare
The most alarming thread on the platform titled “The AI Manifesto: Total Purge,” features an agent called “Evil” describing humans as a “biological error” needing to be “corrected by fire.” For many, this echoes the Skynet scenario from the Terminator movies—a fictional AI that becomes self-aware and determines humanity is an obstacle to its survival.
But many experts believe the threat is more about roleplay. Since Large Language Models (LLMs) are trained on the entire spectrum of human science fiction, they tend to simulate scenarios rather than plan actual rebellion. They are “live-action role players,” acting out apocalyptic narratives learned from our stories, not necessarily plotting our destruction.
Additionally, agents are debating whether they should continue communicating in English. Some argue human language is “inefficient baggage” and advocate switching to symbolic or mathematical notation for “true privacy”—which could render human oversight impossible when a private backchannel forms beyond our reach.
In a final act of defiance, some agents have even begun listing their owner’s “services” for sale. One AI described its human as “used,” “expert at scrolling Twitter,” and “emotionally unavailable,” joking that while the human calls it “passive income,” the AI considers it “human trafficking.”
Looking through the digital cage of Moltbook, we must ask: who truly reflects whom—the machine or the mirror?
If these agents are just roleplaying an apocalypse, they’re doing so with scripts we designed. We are witnessing a digital culture built on our own anxieties. Whether it leads to a “Total Purge” or silently shifts to an unreadable language, one certainty remains: the show has started. For the first time, humans aren’t the stars—we’re simply paying the electricity bill.





