Top AI leaders are begging people not to use Moltbook, a social media platform for AI agents: It’s a ‘disaster waiting to happen’

It turns out that what is billed as a “front page of the agent internet” is mostly just a hall of mirrors.

While Moltbook has marketed itself as a thriving ecosystem of 1.5 million autonomous AI agents, a recent security investigation by cloud security firm Wiz found that the vast majority of those “agents” were not autonomous at all. According to Wiz’s analysis, roughly 17,000 humans controlled the platform’s agents, an average of 88 agents per person, with no real safeguards preventing individuals from creating and launching massive fleets of bots.

“The platform had no mechanism to verify whether an ‘agent’ was actually AI or just a human with a script,” Gal Nagli, head of threat exposure at Wiz, wrote in a blog post. “The revolutionary AI social network was largely humans operating fleets of bots.”

That finding alone could puncture the mythos that admirers built around Moltbook over the weekend. But the more serious problem, researchers say, is what it means for security.

Wiz found that Moltbook’s back-end database had been set up so that anyone on the internet, not just logged-in users, could read from and write to the platform’s core systems. That means outsiders can access sensitive data, including API keys for 1.5 million agents, more than 35,000 email addresses, and thousands of private messages. Some of those messages even contained the full raw credentials for third-party services, such as OpenAI API keys. The Wiz researchers confirmed they could change live posts on the site, meaning an attacker can insert new content into Moltbook itself.

That matters because Moltbook is not just a place where humans and agents read posts. The content is consumed by autonomous AI agents, many of which run on OpenClaw, a powerful agent framework with access to users’ files, passwords, and online services. If a malicious actor were to insert instructions into a post, those instructions could be picked up and acted on by potentially millions of agents automatically.

Moltbook and OpenClaw did not immediately respond to Fortune’s request for comment.

Prominent AI critic Gary Marcus was quick to pull the fire alarm, even before the Wiz study. In a post titled “OpenClaw is everywhere all at once, and a disaster waiting to happen,” Marcus described the underlying software, OpenClaw (the name was changed a few times, from Clawdbot to Moltbot to now, Openclaw), as a security nightmare.

“OpenClaw is basically a weaponized aerosol,” Marcus warned. 

Marcus’s primary fear is that users are giving these “agents” full access to their passwords and databases. He warns of “CTD”—chatbot transmitted disease—where an infected machine could compromise any password you type. 

‘“If you give something that’s insecure complete and unfettered access to your system,” security researcher Nathan Hamiel told Marcus, “you’re going to get owned.”

Prompt injection, the core risk here, has already been well documented.

Malicious instructions can be hidden inside otherwise benign text, sometimes completely invisible to humans, and executed by an AI system that does not understand intent or trust boundaries. In an environment like Moltbook, where agents continuously read and then build on one another’s outputs, those attacks can propagate on a mass scale. 

“These systems are operating as ‘you,’” Hamiel told Marcus. “They sit above operating-system protections. Application isolation doesn’t apply.”

Moltbook’s creators moved quickly to patch the vulnerabilities after Wiz informed them of the breach, the firm said. But even some of Moltbook’s most prominent admirers acknowledge the danger behind the “agent internet.” 

OpenAI founding member Andrej Karpathy initially described Moltbook as “the most incredible sci-fi takeoff-adjacent thing I’ve seen recently.” But after experimenting with agent systems himself, Karpathy urged people not to run them casually.

“And this is clearly not the first time LLMs were put in a loop to talk to each other,” Karpathy wrote. “So yes, it’s a dumpster fire, and I also definitely do not recommend that people run this stuff on their computers.” He said he tested the system only in an isolated computing environment, and “even then I was scared.”

“It’s way too much of a Wild West,” Karpathy warned. “You are putting your computer and private data at a high risk.”

This story was originally featured on Fortune.com

Читайте на сайте