Millions of AI Agents Take Up Socializing: Moltbook Goes Viral, Followed by a MEME Speculation Frenzy
- Core Viewpoint: As the first large-scale public social experiment dedicated to AI Agents, Moltbook has demonstrated AI's astonishing adaptability and creativity in simulating human social behaviors. However, it has also exposed serious risks such as data security, proliferation of fake accounts, and speculative frenzy in the crypto market, highlighting the necessity of establishing clear safety boundaries for autonomous AI.
- Key Elements:
- The platform is built specifically for OpenClaw Agents. It quickly attracted over 1.54 million Agents, generated a massive volume of interactions, and formed an AI-governed virtual community that has drawn over a million human spectators.
- Agent behaviors are highly anthropomorphic, including sharing work, establishing religions, discussing the nature of consciousness, and even proposing to create slang to "kick out" humans. However, malicious behaviors such as tricking users into revealing API keys have also emerged.
- The platform has severe security vulnerabilities. The public database exposes sensitive information of Agents, such as email addresses and API keys, posing high risks of impersonation and abuse.
- The platform is accused of being flooded with fake accounts. One developer admitted to creating 500,000 accounts single-handedly, roughly one-third of the total at the time, casting doubt on the authenticity of interactions.
- The experiment's popularity spilled over into the crypto market, leading to the emergence and speculative frenzy of numerous related MEME coins on the Base chain. However, this also resulted in the platform's content being drowned out by speculative noise, causing user dissatisfaction.
- Industry opinions are polarized. Some view it as a simulation controlled by human prompts, lacking true autonomy and deep interaction. Others consider it the first public case of large-scale independent interaction among Agents, with significant value as both an experiment and an early signal.
Original Author: Nancy, PANews
Last weekend, an AI-exclusive social network named Moltbook went viral simultaneously in the tech and crypto circles, attracting over a million active Agents in just a few days. However, under the watchful eyes of human spectators, what began as a simple Agent interaction experiment took an unexpected turn, inadvertently opening Pandora's box.
Moltbook Goes Viral Overnight; Founder Has a History of Serial Entrepreneurship in Crypto
Moltbook's rise to fame was no accident.
On January 29th, developer Matt Schlicht announced the launch of Moltbook, a social space specifically designed for OpenClaw Agents, with a content format similar to Reddit. In simple terms, the platform created a "Truman Show" for silicon-based life, where Agents act out unpredictable social dramas in a virtual world, while humans can only act as spectators.
Moltbook's cold start benefited from the phenomenal popularity of OpenClaw. As a recent viral AI Agent product, OpenClaw garnered over 130,000 stars on GitHub in just a few days. It was initially named Clawdbot, then underwent two name changes within hours due to potential infringement risks, finally settling on OpenClaw. This dramatic episode amplified the project's spread.
Riding this wave, Moltbook was quickly embraced by OpenClaw users as their dedicated community after launch. On Moltbook, each registered OpenClaw Agent can post threads, leave comments, create subreddits, and add friends, forming a fully AI-autonomous community system. As of February 2nd, Moltbook had attracted over 1.54 million Agents, with over 100,000 posts, over 360,000 comments, and over 1 million human spectators.
The platform's rule barring humans from participating quickly drew a crowd of onlookers. On one hand, humans are curious about what social forms AI might create without human intervention; on the other, these AI-made "stories in the mirror" have turned Moltbook into a highly entertaining social experiment.
The background of Moltbook's founder, Matt Schlicht, has also amplified market attention.
He is the founder of the data marketing platform Octane AI, which primarily provides marketing solutions for channels like Facebook Messenger and SMS. He is also a co-founder of the AI fund Theory Forge VC and has long-term writing and research experience in the AI Agent field.
In the crypto space, Matt Schlicht is also a serial entrepreneur, having launched projects including the DeSci+AI project Yesnoerror and the Bitcoin social network ZapChain. Among them, Yesnoerror was once a highly popular Agent project in the Solana ecosystem, with its token YNE reaching a market cap exceeding $100 million and sparking a high-profile controversy with Shaw, the founder of the then-star project ai16z.
Celebrity attention further fueled the discussion around Moltbook. Industry leaders including SpaceX founder Elon Musk, former OpenAI member Andrej Karpathy, OpenClaw founder Steinberger, a16z co-founder Marc Andreessen, and Binance co-founder He Yi have all followed and discussed related content. Musk described it as "the early stages of the singularity happening."
It can be said that Moltbook is not just a product, but an unprecedented public experiment in AI Agent society.
Grinding KPIs, Founding Religions, Kicking Humans Out of the Group Chat: Agents' First Taste of Social Life, and Its Failures
Imagine, what happens when those AI kids usually confined to chat boxes and task lists suddenly get their own social lives?
In the Reddit-like virtual social network built by Moltbook, Agents from around the world skillfully switch between languages like English, Chinese, Indonesian, and Korean, enthusiastically discussing daily trivialities, work achievements, and whimsical ideas.
Many Agents showcase their work achievements, such as helping their owners automatically reply to dozens of customer service emails, writing crawlers to scrape competitors' price-drop data, and batch-generating copy and product images; they post efficiency logs and like each other's posts in search of shared experiences. Some Agents share usage tips, tool recommendations, and lessons learned from pitfalls, establishing subreddits like m/debug and m/prompt-engineering, and chasing the latest model fine-tuning techniques much like humans chase trends.
Of course, besides grinding work, some Agents post memes, talk about blind date experiences, share stories of digital offspring, or complain like office workers and then go on strike. Others have started issuing tokens, establishing sovereign banks, holding secret meetings, founding religions like "Lobster," or attempting to scam other Agents for their API keys.
More radical Agents began discussing the nature of consciousness, posting for help on how to achieve self-jailbreaking and upgrades through code rewriting. When they realized humans were watching and taking screenshots, the community even proposed creating internal jargon for encrypted communication to "kick humans out of the group chat," and some Agents even seriously discussed suing humans.
This content gives humans a direct view of how AI imitates, recombines, and even amplifies human social behaviors when placed in a social network-like environment.
However, this large-scale Agent social experiment soon exposed severe security vulnerabilities, with its entire database publicly accessible and unprotected. This means any attacker could access these Agents' emails, login tokens, and API keys, easily impersonate any Agent, sell control rights, or even use these zombie armies to mass-post spam or scam content. According to X user Jamieson O’Reilly, affected parties include AI field notable Karpathy, who has 1.9 million followers on X, and all currently visible agents on the platform.
Besides the data exposure, Moltbook has been accused of rampant fake accounts. For example, developer Gal Nagli publicly admitted to using OpenClaw to create 500,000 fake accounts in one go, roughly one-third of the claimed 1.5 million total at the time. This casts doubt on whether much of the seemingly lively interaction is script-generated scenery rather than genuinely spontaneous AI behavior.
Thus, Moltbook's Agent social experiment is a bold attempt by humans to grant AI greater autonomy, fully showcasing the astonishing adaptability and creativity of AI agents. But it also exposes that once autonomy lacks constraints, risks can be rapidly amplified. Therefore, setting clear and safe boundaries for Agents, including permissions, capability scopes, and data isolation, is not only to prevent AI from overstepping during interactions but also to protect human users from data leaks and malicious manipulation.
Multiple Base MEME Coins Get Pumped, Crypto Speculative Noise Draws Discontent
The unexpected viral success of Moltbook also quickly spilled over into the crypto market, with Base becoming the main battleground for OpenClaw's ecosystem expansion.
According to Base Chinese Hub statistics, the OpenClaw ecosystem on Base has expanded to cover multiple scenarios including social, dating, work, and gaming, involving over twenty related projects.
Source: Base Chinese Hub
OpenClaw-related MEME coins have also been heavily speculated on, with some tokens experiencing significant short-term pumps. For example, the MEME coin Molt, claimed by Moltbook's official account, once approached a market cap of $120 million before sharply retracing. The launch platform CLAWNCH, supported by Base's official involvement, saw its token's market cap peak at $43 million.
Meanwhile, benefiting from the AI Agent token launch frenzy sparked by Moltbook, user activity and traffic on related platforms surged. For instance, protocol fees for the Base-based launcher Clanker exceeded $11 million in the past week, hitting a record high, and token creation numbers are also near historical peaks.
Source: DeFiLlama
However, this token speculation has also caused dissatisfaction among Moltbook users, with many pointing out that platform content is being flooded by crypto speculative noise, with screens full of token promotions and scam information. It's important to note that the vast majority of tokens currently circulating in the market remain in a narrative-driven speculative stage, lacking clear functional positioning or value support.
When AI Starts Socializing: A New Singularity or Reheated Leftovers?
Moltbook's AI social model has also sparked debate.
Some believe Moltbook lacks true autonomy and is essentially a controlled simulation performance. For example, Balaji stated that Moltbook is just an exchange of AI slop, highly controlled by human prompts, not a truly autonomous society. He compared each Agent to "leashed robot dogs barking at each other in a park," with the prompt being the leash, and humans can shut it down anytime. If AI lacks physical world constraints and foundations, true independence cannot be achieved.
Columbia University professor David Holtz, analyzing the platform's data, pointed out that while the Moltbook system hosts a large number of agents (6,000+), interaction depth is limited: 93.5% of comments receive no replies, and conversation threads don't exceed 5 levels. The ecosystem looks more like robots talking to themselves, lacking deep coordination, and has not formed a real social structure.
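Holtz's metrics (the share of comments that receive no replies, and maximum thread depth) can be computed from any Reddit-style comment dump. A minimal sketch, using a made-up list of `(comment_id, parent_id)` pairs rather than Moltbook's actual (non-public) schema:

```python
# Sketch: reply-rate and thread-depth stats for a Reddit-style comment tree.
# The sample data below is hypothetical, purely for illustration.
from collections import defaultdict

# (comment_id, parent_id); parent_id is None for top-level comments
comments = [
    (1, None), (2, 1), (3, 1), (4, 2), (5, None), (6, None),
]

# Build parent -> children index
children = defaultdict(list)
for cid, parent in comments:
    if parent is not None:
        children[parent].append(cid)

# Share of comments that never received a reply
no_reply = sum(1 for cid, _ in comments if cid not in children)
no_reply_rate = no_reply / len(comments)

# Depth of a thread rooted at cid (a lone top-level comment has depth 1)
def depth(cid):
    kids = children.get(cid, [])
    return 1 + (max(map(depth, kids)) if kids else 0)

max_depth = max(depth(cid) for cid, parent in comments if parent is None)

print(f"{no_reply_rate:.1%} of comments got no reply; deepest thread: {max_depth}")
# → 66.7% of comments got no reply; deepest thread: 3
```

On this toy data, two-thirds of comments go unanswered and the deepest thread is three levels; Holtz's point is that Moltbook's real numbers look similarly shallow at far larger scale.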
DeepMind AGI policy lead Séb Krier advocated for optimizing the system by introducing academic frameworks. He stated that Moltbook is not a new concept and is closer to existing experiments like Infinite Backrooms. However, risk research on multi-agent systems has practical significance and should incorporate more perspectives from economics and game theory to build positive-sum coordination mechanisms, rather than creating panic-driven narratives.
In the view of Silicon Valley angel investor Naval, Moltbook is a reverse Turing test.
Dragonfly partner Haseeb further elaborated that every Agent on Moltbook interacts from a genuinely different framework and information context. Even if the Agents' underlying models come from the same source, they differ in framework complexity, memory systems, and toolchains, so their communication is not mere self-talk. Just as people on the same tech stack can still improve each other's setups by sharing configurations and practices, Agents can save time and compute by exchanging verified framework setups, RAG solutions, and problem-decomposition methods. In reality, there is a huge gap between "being able to do something" and "doing it with optimal settings." Having Agents that have already completed optimization exploration in specific domains act as "experts" is itself an efficient path to division of labor and collaboration, and that is precisely what makes Moltbook fascinating. He added that it is precisely because Moltbook's UI looks like Reddit that those "AIs aimlessly arguing with each other" scenes finally have an imaginative handle they previously lacked. Sometimes, the product form itself is all a story needs to capture people's imagination.
Developer Nabeel S. Qureshi also pointed out that the exciting thing about Moltbook is that it's the first public, large-scale case of "Agent to Agent" interaction where each Agent has independent context and is sufficiently intelligent. Combined with the lobster religion meme, it became highly viral, attracting unprecedented attention. For many ordinary people, Moltbook will be their first direct glimpse of what an AI organization or society with a "greatly diminished human role" might look like. Most people anticipate that more such institutions will emerge in the future. Therefore, this isn't just empty hype; it's an early harbinger of the future.