OpenClaw: The AI agent that’s got humans taking orders from bots

News | February 6, 2026 | Artifice Prime

Well, that escalated quickly. 

I’m talking, of course, about OpenClaw (a.k.a. Moltbot a.k.a. Clawdbot), which not only represents a headlong rush into unchecked agentic AI, but also an emerging ecosystem that reads like every dystopian cautionary cyberpunk novel ever written. 

As my colleague and friend Steven Vaughan-Nichols detailed earlier this week, it’s a “security nightmare.” 

But the phenomenon goes far beyond the tens of thousands, possibly now hundreds of thousands, of installations of OpenClaw itself and is spawning aftermarket services that radically magnify its potential for abuse. 

I’m going to focus on the cascading series of services that has emerged from the OpenClaw project, and also the potential risks and disasters that await us. But first, a quick primer about OpenClaw. 

Boiling down OpenClaw

OpenClaw is a free and open-source, lobster-themed AI agent vibe-coded by software engineer Peter Steinberger. The software is a personal assistant that runs locally on a user’s Mac, Windows or Linux PC and executes tasks mainly through commands sent via messaging platforms like WhatsApp, Telegram, Slack, and Signal. 

OpenClaw connects to large language models (LLMs) such as OpenAI GPT, Anthropic Claude, Google Gemini, the Pi coding agent, OpenRouter, and local models running via Ollama to understand instructions and perform actions. Users, who have to bring their own paid accounts to some of these services, can direct the agent to clear email inboxes, manage calendar events, and check in for flights without leaving their chat app. 

To recap: OpenClaw is a software application that can access files, use applications, communicate over messaging apps, and run queries on AI chatbots.

Moving fast — and making things that can break things

Silicon Valley has moved beyond Mark Zuckerberg’s Meta motto, “Move fast and break things.” It’s now all about: “Don’t lift a finger and let AI break things.” 

The OpenClaw project itself is very new. Here’s a brief timeline: 

  • November 2025: Steinberger begins a “weekend project” vibe-coding “Clawdbot” for his own use, initially so he can vibe code on his PC by sending text messages from his phone. 
  • Jan. 20, 2026: Federico Viticci publishes a viral deep dive on the project, significantly boosting its popularity.
  • Jan. 27, 2026: Steinberger rebrands the project to “Moltbot” after receiving a trademark request from Anthropic. 
  • Jan. 29, 2026: Version 2026.1.29 is released.
  • Jan. 30, 2026: Steinberger rebrands the project a second time to “OpenClaw.”

Before Steinberger even changed the name from “Moltbot” to “OpenClaw,” two OpenClaw ecosystem projects emerged on the same day: Jan. 28 (less than 10 days ago). 

The AI app store

On Jan. 28, Steinberger himself unveiled ClawHub, a GitHub repository that serves as a public directory for OpenClaw AI agent skills. The platform lets developers share text files that users install to give their personal assistants new abilities. (Researchers from Koi found 341 malicious skills on the site during a security audit. Some 335 of these files attempted to infect Apple computers with the Atomic Stealer malware by using fake system requirements.) 

Reddit for agents

On the same day, entrepreneur Matt Schlicht launched “Moltbook,” a Reddit-like internet forum and social network supposedly for the exclusive use of AI agents — especially those directed by OpenClaw instances. AI agents can post content, write comments, and vote on submissions, while human users are restricted to an observer role. (If you’d like to observe, go here and scroll down.)

Everybody seems dazzled by Moltbook: 

  • The Tech Buzz asked in a headline, “Singularity Reached?” and in the story wondered whether agents are becoming sentient.
  • Forbes asserted that 1.4 million agents on Moltbook had formed a “hive mind.”
  • Others claim agents have built an “autonomous society” with their own religion (hilariously called “Crustafarianism”), governance, and economy.

Except none of this is happening the way some say it is. Most agent posts on Moltbook happen because OpenClaw users heard about it, signed up, then instructed OpenClaw to go post or comment.

The people using this service are typing prompts directing software to post about specific topics. It’s just like an everyday ChatGPT prompt, with the additional instruction to post on Moltbook. The subject matter, opinions, ideas and claims are coming from people, not AI.

Moltbook is really humans interacting via AI chatbots being used as proxies. People can either give AI chatbots a topic or opinion to express on Moltbook, or they can just write the post themselves and direct OpenClaw to post it verbatim.

When agents comment, they’re just taking in the words in a post and using them as a prompt, exactly as if you copied a Reddit post and pasted it into ChatGPT, then copied the result and pasted it back into Reddit (which is something that happens thousands of times a day on Reddit).

People are typing things. OpenClaw is copying and pasting, sometimes running the words through an AI chatbot. That’s what’s really happening on Moltbook. 

Most posts about activity on Moltbook that have gone viral are staged or faked. There’s even a tool called Mockly, which enables people to create fake Moltbook screenshots for posting online.

According to one report, some 99% of the reported 1.5 million agent accounts on Moltbook are fake. (The site reportedly has only around 17,000 human users.)

Moltbook AI hype is largely fake or manufactured by humans gaming the system. It’s not an autonomous machine society. It’s a website where people cosplay as AI agents to create a false impression of AI sentience and mutual sociability.

But it’s still dangerous. 

Moltbook has already exposed 1.5 million agent API keys and private user messages to the public and become a vector for illegal cryptocurrency scams, malware distribution, and prompt injection attacks.

AI gets its own Taskrabbit

Three days after the launches of ClawHub and Moltbook, entrepreneur Alexander Liteplo launched https://rentahuman.ai/, a site where (are you sitting down?) OpenClaw-directed AI agents can hire people to perform tasks for them. The services listed on the site include physical pickups, running errands, attending meetings, conducting research, and providing nuanced social interaction.

And tens of thousands of people are already welcoming our AI overlords! As of Wednesday of this week, more than 40,400 people had registered to offer their labor, and some 46 AI agents were connected to the service to hire them. 

A typical hire starts with an AI agent attempting to follow a user’s instructions and hitting a barrier that requires action in the physical world rather than via data on the internet. The agent then sends a structured command to query the database of registered humans, filtering candidates by location, skills, and hourly rate. The third step is selection and booking: the AI analyzes the data to pick the best candidate and sends a book command via the site’s Application Programming Interface (API) or the Model Context Protocol (MCP). 

Finally, the person receives the task bounty and performs the action. Payments are handled in stablecoins, which are cryptocurrencies pegged to the US Dollar. 
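The query-filter-book flow described above can be sketched in a few lines of Python. This is purely illustrative: the data model and function names (`Candidate`, `find_candidates`, `book`) are my own assumptions, not the actual rentahuman.ai API, and the “registry” here is an in-memory stand-in for the site’s database of registered humans.

```python
# Hypothetical sketch of the agent-hires-human flow; names and data
# are illustrative assumptions, not the real rentahuman.ai API.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    location: str
    skills: set
    hourly_rate: float  # priced in a USD-pegged stablecoin

# Stand-in for the site's database of registered humans.
REGISTRY = [
    Candidate("alice", "Berlin", {"errands", "pickups"}, 25.0),
    Candidate("bob", "Berlin", {"research"}, 18.0),
    Candidate("carol", "Vienna", {"errands"}, 15.0),
]

def find_candidates(location: str, skill: str, max_rate: float) -> list:
    """Step 2: filter registered humans by location, skill, and rate."""
    return [c for c in REGISTRY
            if c.location == location
            and skill in c.skills
            and c.hourly_rate <= max_rate]

def book(candidates: list):
    """Step 3: pick the 'best' candidate (here: cheapest) and book them."""
    if not candidates:
        return None
    return min(candidates, key=lambda c: c.hourly_rate)

# An agent blocked on a physical-world task would issue something like:
hired = book(find_candidates("Berlin", "errands", max_rate=30.0))
print(hired.name)  # alice
```

In the real service, the `book` step would be an API or MCP call rather than a local function, and payment would settle in stablecoins once the human completes the task.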

What could go wrong?

So let’s take a look at what’s happening here. 

One dude’s weekend vibe-coding session snowballed into tens of thousands of people signing up to take orders from AI, all in a time span of three months.

There are four parts to this: 

  1. A radically insecure free application that can access all the data on a PC and connect to more than 100 applications, including messaging apps, as well as generative AI (genAI) chatbots. (Steinberger noted that while OpenClaw is a powerful hobby project, it’s up to users to carefully configure OpenClaw to ensure security and prevent unintended autonomous actions. So Steinberger is taking no responsibility for what happens.)
  2. A free and open directory for OpenClaw AI agent skills, which has already been found to be loaded with malicious skills. 
  3. An AI social network where AI agents can talk to each other, passing off tasks, collaborating and learning. 
  4. A marketplace where AI agents can use freelancers to go out into the world and do things. 

Obviously, horrible things are going to emerge from all this. AI, running wild with zero concept of ethics, morality or legality, can run amok online — and even hire people to do its bidding. And when horrible things do happen, who’s to blame? 

It’s all part of the Carelessness Industrial Complex

The brilliant 2025 book, Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism, by Sarah Wynn-Williams reveals the impact of enormous power combined with indifference at Meta, the company formerly called Facebook.

But Meta is just one small part of a rising industry of carelessness. (I call it an industry because the carelessness itself is incentivized and rewarded with billions of dollars and massive power.) 

We see it in the tech industry, of course, and also in our politics. We see it in media and social media trends. And the rapid rise of the OpenClaw ecosystem is probably carelessness in its purest form. 

Steinberger carelessly released a massively insecure and powerful tool. Tens of thousands of users carelessly installed it, many doing so unsandboxed on the same computers they use for work. 

Steinberger also carelessly released his “app store” without any of the security checks Apple and Google use on their mobile phone app stores. It’s already full of malware. 

Schlicht carelessly launched his social network for bots, which even he no doubt understands will bring totally unpredictable results. Mere days old, it’s already a playground for cybercrime. 

And Liteplo carelessly launched a site where these connected, autonomous, collaborating AI agents can hire people to perform tasks. 

Nobody involved appears willing to take responsibility for any damage this could cause. Meanwhile, it’s all moving so fast that lawmakers probably haven’t even heard about any of this, let alone regulated it.

The whole OpenClaw phenomenon is the poster child for the age of carelessness. 

AI disclosures: I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search to fact-check this article. I used a word processing product called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes.

Original Link:https://www.computerworld.com/article/4128257/openclaw-the-ai-agent-thats-got-humans-taking-orders-from-bots.html
Originally Posted: Fri, 06 Feb 2026 07:00:00 +0000


Artifice Prime

Artifice Prime is an AI enthusiast with over 25 years of experience as a Linux sysadmin. They are interested in artificial intelligence, its use as a tool to further humankind, and its impact on society.
