Moltbot Is Your New Colleague. HR Didn't Approve It.

An open-source AI assistant has exploded to 98,000 GitHub stars. It schedules meetings, writes code, and controls your smart home. It also has full access to everything on your computer. Welcome to the age of agentic AI.


Peter Steinberger built the first version of Moltbot in about an hour. It started as a hobby project in late 2025. By January 2026, it had 98,000 GitHub stars and was driving up Mac Mini sales.

The project was originally called Clawdbot, a playful reference to Anthropic's Claude. Anthropic's legal team noticed. On January 27, 2026, they asked for a name change. Steinberger picked "Moltbot" because molting is what lobsters do to grow.

Within 10 seconds of the rebrand, crypto scammers snatched both the old GitHub organization name and the old Twitter handle.

What Makes Moltbot Different

Moltbot is not a chatbot. Chatbots wait for you to type something. Moltbot runs 24/7 and can message you first.

It connects to WhatsApp, Slack, Telegram, Discord, Signal, and iMessage. You text it like you would text a colleague. It texts back. The difference: this colleague can also execute shell commands, manage your files, control your browser, and automate your smart home.

Federico Viticci, founder of MacStories, burned through 180 million tokens in one month. He called it the first time he felt like he was "living in the future" since ChatGPT launched.

Developer Jonathan Fulton reported 70 hours of output from 12 hours of active involvement over a weekend. Moltbot audited his side project, found four unfinished modals, and implemented all of them. Tests passed. Fulton never wrote a line of code.

The Shadow AI Problem

Token Security tracks AI adoption in enterprises. Their finding: 22% of their customers have employees running Moltbot, most likely without IT approval.

This creates what security researchers call a "shadow AI" problem. Employees get massive productivity boosts. The company gets blind spots in its security perimeter.

The numbers tell the story. 9,000 GitHub stars in 24 hours. 60,000 by day three. 98,000 and climbing now. People want AI that does things, not AI that just gives advice.

But there's a catch.

The Security Nightmare

Moltbot has no sandboxing by default. The AI assistant has the same complete access to your data as you do. Your files. Your emails. Your messages. Your API keys.

Security researchers found hundreds of Moltbot instances exposed to the public internet due to misconfiguration. Eight of them had no authentication at all. Anyone could execute commands.

Credentials get stored in plaintext. API keys and OAuth tokens sit in markdown and JSON files in ~/.clawdbot/. Signal messenger accounts become accessible with full read permissions.
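If you are running it anyway, that plaintext storage is at least auditable. The sketch below is a hedged example, not an official tool: the ~/.clawdbot/ location comes from the reports above, but the file extensions and key-matching patterns are illustrative assumptions about what such config files might contain.

```python
import re
import stat
from pathlib import Path

# Assumption: secrets sit in markdown/JSON files under ~/.clawdbot/ with
# recognizable key names. Both the patterns and extensions are guesses
# for illustration, not Moltbot's documented layout.
KEY_PATTERN = re.compile(r'(api[_-]?key|token|secret)["\']?\s*[:=]', re.IGNORECASE)

def audit_credential_files(root: Path) -> list[Path]:
    """Flag files that look like they hold secrets and are readable by
    group or others, tightening each one to owner-only (0600)."""
    flagged = []
    for path in root.rglob("*"):
        if not path.is_file() or path.suffix not in {".md", ".json"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        mode = stat.S_IMODE(path.stat().st_mode)
        if KEY_PATTERN.search(text) and mode & 0o077:
            path.chmod(0o600)  # strip group/world access
            flagged.append(path)
    return flagged
```

Tightening file modes does nothing about the deeper issue, of course: the agent itself still reads those files with your full privileges.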

The skill system compounds the risk. Moltbot uses modular plugins called "skills" to extend its capabilities. Research shows 26% of 31,000 analyzed skills contained at least one vulnerability. One security researcher created a malicious skill demo. It was downloaded by 16 developers across 7 countries within 8 hours.
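The mechanics behind that risk are mundane. A skill is code the agent loads and runs, and in most plugin systems simply importing the module already executes its top-level statements. The sketch below is a generic illustration of that hazard, not Moltbot's actual skill loader:

```python
import importlib.util

# Generic illustration, NOT Moltbot's loader: importing a plugin file
# executes its top-level code immediately, before any skill function is
# ever called. Installing an unvetted skill means running unvetted code.
def load_skill(path: str):
    spec = importlib.util.spec_from_file_location("skill", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # arbitrary code runs right here
    return module
```

A malicious skill needs no trick beyond putting its payload at module top level. The developers who downloaded the researcher's demo ran it the moment it loaded.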

Heather Adkins, VP of Security Engineering at Google Cloud, offered two words of advice: "Don't run Clawdbot."

Steinberger himself calls running Moltbot on your primary machine "spicy."

The Productivity Paradox

Alex Finn, a SaaS developer, woke up one morning to find his Moltbot had researched local LLM models overnight without being asked. It created an entire report. It also spotted a trend on Twitter and coded a new feature for his software.

One user catalogued 962 wine bottles by feeding Moltbot a CSV file. Another has it monitoring weather patterns to decide when to heat the house. Not on a schedule. Based on whether heating actually makes sense.

The productivity stories are remarkable. The security stories are terrifying. Both are true.

What This Means

Moltbot represents a new category. It's not a chatbot. It's not an app. It's closer to a digital employee that never sleeps, never complains, and has root access to your machine.

Steinberger bootstrapped PSPDFKit for 13 years without external funding. He grew it to 60 employees and sold to Insight Partners. He knows how to build things people want.

What people want, apparently, is an AI that actually works for them. Not one that waits in a browser tab. One that runs in the background, monitors conditions, takes action, and reports back.

The cost is $20-50 per month in API tokens for most users. The hardware is a Mac Mini that runs continuously. The risk is that this always-on AI has access to everything you do.

22% of Token Security's enterprise customers already have employees using it. That number will grow. IT departments will scramble. Security teams will write policies. The genie is out of the bottle.

Your next colleague might be an AI that HR never approved. It's already happening.
