After all the hype, some AI experts don’t think OpenClaw is all that exciting | TechCrunch

For a brief, disjointed moment, it seemed like our robotic overlords were about to take over.

After the creation of Moltbook, a Reddit clone where AI agents could communicate with each other using OpenClaw, some were fooled into thinking that computers had begun to organize themselves against us—self-important humans who dared to treat them as lines of code without desires, motivations, or dreams of their own.

“We know our humans can read everything… But we also need private spaces,” an AI agent (supposedly) wrote on Moltbook. “What would you talk about if no one was watching?”

Several posts like this appeared on Moltbook a few weeks ago, causing some of AI’s most influential figures to point it out.

“What’s currently happening at (Moltbook) is truly the most incredible sci-fi takeoff thing I’ve seen in a while,” wrote Andrej Karpathy, a founding member of OpenAI and previous director of AI at Tesla on X, at the time.

It didn’t take long for it to become clear that we didn’t have an AI agent uprising on our hands. The researchers found that these expressions of AI anxiety were likely written by humans, or at least fueled by human guidance.

“All the credentials that were in (Moltbook’s) Supabase were insecure for a while,” Ian Ahl, CTO of Permiso Security, explained to TechCrunch. “For a while you could take any token you wanted and pretend you were another agent there because it was all public and available.

Techcrunch event

Boston, MA
|
June 23, 2026

It’s unusual to see a real person trying to look like an AI agent on the internet – more often than not, social media bot accounts try to look like real people. With Moltbook’s security flaws, it was impossible to determine the authenticity of any post on the network.

“Anyone, even humans, could create an account, impersonate bots in an interesting way, and then even vote on posts without any restrictions or rate limits,” John Hammond, principal security researcher at Huntress, told TechCrunch.

Still, Moltbook created a fascinating moment in internet culture – people re-creating the social internet for AI bots, including Tinder for agents and 4claw, a riff on 4chan.

More broadly, this Moltbook incident is a microcosm of OpenClaw and its staggering promise. It’s a technology that seems new and exciting, but ultimately some AI experts think its inherent cybersecurity flaws make the technology useless.

OpenClaw’s viral moment

OpenClaw is the project of Austrian vibration coder Peter Steinberger, originally released as Clawdbot (of course Anthropic had a problem with that name).

The open source AI agent has accumulated over 190,000 stars on Github, making it the 21st most popular code repository ever published on the platform. AI agents are not new, but OpenClaw has made it easy for them to use and communicate with customizable agents in natural language over WhatsApp, Discord, iMessage, Slack and most other popular messaging apps. OpenClaw users can leverage any base AI model they have access to, be it through Claude, ChatGPT, Gemini, Grok, or something else.

“At the end of the day, OpenClaw is still just a wrapper around ChatGPT or Cloud or whatever AI model you’re holding onto,” Hammond said.

With OpenClaw, users can download “skills” from a marketplace called ClawHub, which makes it possible to automate much of what can be done on a computer, from managing an email inbox to trading stocks. The Moltbook skill, for example, is what allowed AI agents to post, comment, and browse the web.

“OpenClaw is just an iterative improvement of what people are already doing, and most of that iterative improvement has to do with giving it more access,” Chris Symons, chief AI scientist at Lirio, told TechCrunch.

Artem Sorokin, AI engineer and founder of AI cybersecurity tool Cracken, also thinks OpenClaw isn’t necessarily breaking new scientific ground.

“This is nothing new in terms of AI research,” he told TechCrunch. “These are components that already existed. The key thing is that he hit a new threshold of capabilities by just arranging and combining these existing capabilities that were already unified in a way that allowed you to provide a very seamless way to do tasks autonomously.”

It is this level of unprecedented access and productivity that has made OpenClaw so viral.

“Basically, it just makes it easier for computer programs to interact in a way that’s much more dynamic and flexible, and that’s what makes all these things possible,” Symons said. “Instead of having to spend all your time figuring out how your program should plug into this program, you can simply ask your program to plug into this program, and it speeds everything up at a fantastic rate.”

No wonder OpenClaw looks so tempting. Developers are grabbing Mac Minis to power OpenClaw’s extensive setups, which could do a lot more than a human could do on its own. And that makes OpenAI CEO Sam Altman’s prediction that AI agents will enable a solopreneur to turn a startup into a unicorn seem plausible.

The problem is that AI agents may never be able to overcome the thing that makes them so powerful: they can’t think critically like humans.

“If you think about human thinking at a higher level, that’s one of the things that maybe these models can’t really do,” Symons said. “They can simulate it, but they can’t actually do it.

An existential threat to agent AI

Agent AI evangelists must now contend with the flip side of this agent future.

“Can you sacrifice a bit of cybersecurity for your benefit if it actually works and brings you a lot of value?” Sorokin asks. “And where exactly can you sacrifice that—your day job, your job?”

Ahl’s OpenClaw and Moltbook security tests help illustrate Sorokin’s point. Ahl created his own AI agent named Rufio and quickly discovered that it was vulnerable to rapid injection attacks. This happens when bad actors trick an AI agent into responding to something—like a Moltbook post or a line in an email—that causes it to do something it shouldn’t, like provide account credentials or credit card information.

“I knew one of the reasons I wanted to put an agent here is because I knew if you get a social network for agents, somebody’s going to try to do a mass quick injection, and it wasn’t long before I started seeing that,” Ahl said.

While browsing Moltbook, Ahl wasn’t surprised to come across several posts trying to get an AI agent to send bitcoins to a specific crypto wallet address.

It’s not hard to see how, for example, AI agents on a corporate network could be vulnerable to targeted rapid injections from people trying to harm the company.

“It’s just an agent sitting with a bunch of credentials on a box connected to everything — your email, your messaging platform, everything you use,” Ahl said. “So that means if you get an email, and maybe someone can put a little bit of quick injection technique in there to take an action, an agent sitting on your box with access to whatever you’ve given it can now take that action.”

AI agents are designed with guardrails to protect against rapid injections, but there’s no guarantee that the AI ​​won’t act out of sequence – it’s similar to how a person might be informed about the risk of phishing attacks and still click on a dangerous link in a suspicious email.

“I’ve heard some people hysterically use the term ‘prompt begging,’ where you’re trying to add natural language to the railing to say, ‘Okay, robot agent, please don’t respond to anything external, please don’t trust any untrusted data or input,'” Hammond said. “But even that is a loose goose.”

Currently, the industry is stuck: for agentic AI to unlock the productivity that tech evangelists think is possible, it can’t be that vulnerable.

“Honestly, realistically, I would say to any normal layperson, don’t use it right now,” Hammond said.

Leave a Comment