Every major AI lab ships its own chat interface. OpenAI has ChatGPT. Anthropic has Claude. Google has Gemini. And every one of them asks you to leave the apps you already live in — your WhatsApp threads, your Slack workspace, your Telegram groups — to go talk to AI in a separate window.
Peter Steinberger, an Austrian developer, thought that was backwards. In November 2025, he published a small project that let you wire up an LLM to your existing messaging apps. Four months and one trademark dispute later, that project — now called OpenClaw — has 247,000 GitHub stars and a growing argument that the future of AI isn't a new app. It's a daemon running on your machine.
The Idea: AI Should Come to You
OpenClaw is a self-hosted gateway. You install it on your machine (or a server), point it at an LLM provider — Claude, GPT, DeepSeek, Gemini, or a local model through Ollama — and connect your messaging channels. From that point on, your Telegram bot, your WhatsApp chat, your Discord server, and your Slack workspace all become interfaces to the same AI agent.
The project currently supports over 20 channel integrations. The major ones: WhatsApp (via WhatsApp Web bridge), Telegram (Bot API), Slack, Discord, Signal, iMessage, Microsoft Teams, Google Chat, Matrix, IRC, LINE, and Mattermost. On macOS and iOS, it adds voice interaction with a push-to-talk overlay. On Android, it runs as a background node with camera and screen capture access.
This isn't a chatbot wrapper. OpenClaw's agent can browse the web, read and write files, execute shell commands, manage your calendar, send emails, and chain multi-step workflows together. It's closer to a local operating system for AI than a messaging relay.
Under the Hood: One Gateway, Many Channels
The architecture is deliberately simple — and that simplicity is the key technical insight.
At the center sits a long-lived Gateway process. It runs as a daemon (launchd on macOS, systemd on Linux), maintains persistent WebSocket connections to all your messaging channels, and exposes a control plane at ws://127.0.0.1:18789. Every message from every channel flows into this single process.
When a message arrives, the Gateway kicks off a serialized agent loop: intake → context assembly → model inference → tool execution → streaming reply → session persistence. The critical design choice is serialization per session — one run at a time, no parallel execution within a conversation. This eliminates an entire class of state corruption bugs that plague more ambitious distributed architectures.
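That serialized loop can be pictured as a promise chain keyed by session ID. A minimal sketch (not OpenClaw's actual code; the enqueue helper and AgentRun type are invented for illustration):

```typescript
// Per-session serialization: each session keys a promise chain, so runs for
// the same conversation execute strictly one at a time, while different
// sessions still proceed concurrently.
type AgentRun<T> = () => Promise<T>;

const sessionTails = new Map<string, Promise<unknown>>();

function enqueue<T>(sessionId: string, run: AgentRun<T>): Promise<T> {
  const tail = sessionTails.get(sessionId) ?? Promise.resolve();
  // Chain the new run after whatever is already queued for this session,
  // whether the previous run resolved or rejected.
  const next = tail.then(run, run);
  // Store a rejection-swallowing tail so one failed run can't poison the queue.
  sessionTails.set(sessionId, next.catch(() => undefined));
  return next;
}
```

Serializing on a per-session key keeps concurrency across conversations while making state within any one conversation trivially consistent, which is exactly the class of bugs the design sidesteps.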
Tools are first-class citizens. Each integration ships as a plugin with a JSON-schema definition and an execution function. Adding a new capability means dropping in a plugin — the core never needs modification. This is why OpenClaw could scale from a weekend hack to 20+ integrations without the codebase becoming unmanageable.
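The plugin contract might look something like this sketch, where the ToolPlugin interface, registry, and dispatch function are invented names for illustration, not OpenClaw's real API:

```typescript
// Illustrative plugin shape: a JSON-Schema parameter definition the model can
// read, plus an execute function the Gateway calls on its behalf.
interface ToolPlugin {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema for the tool call
  execute(args: Record<string, unknown>): Promise<string>;
}

const registry = new Map<string, ToolPlugin>();

function registerPlugin(plugin: ToolPlugin): void {
  registry.set(plugin.name, plugin);
}

// Example plugin: dropped into the registry without touching the core.
registerPlugin({
  name: "echo",
  description: "Returns its input unchanged",
  parameters: {
    type: "object",
    properties: { text: { type: "string" } },
    required: ["text"],
  },
  async execute(args) {
    return String(args.text);
  },
});

async function dispatch(name: string, args: Record<string, unknown>): Promise<string> {
  const plugin = registry.get(name);
  if (!plugin) throw new Error(`Unknown tool: ${name}`);
  return plugin.execute(args);
}
```

Because the core only ever sees the interface, each new integration is a registration call rather than a modification, which is what lets the plugin count grow without the core growing with it.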
Memory follows the same philosophy of inspectability. Daily conversation logs go to memory/YYYY-MM-DD.md as append-only files. Long-term memory lives in a curated MEMORY.md loaded at session start. Identity and personality are defined in plain Markdown files: USER.md, SOUL.md, IDENTITY.md. Everything is a file you can read, edit, version-control with Git, and understand without special tooling.
The Trademark Drama
The project's path to its current name tells a story about the AI industry's territorial instincts. Steinberger originally named it Clawdbot — a playful nod to Claude, the Anthropic model it initially supported. Anthropic's legal team sent a trademark complaint. On January 27, 2026, the project was renamed Moltbot. Three days later, it settled on OpenClaw.
The rename barely registered. Stars kept climbing. By mid-February, OpenClaw had crossed 216,000 stars. The controversy, if anything, amplified visibility — a pattern the open-source community has seen before.
Getting It Running
Installation requires Node 24 (or Node 22.16+). The actual setup:
npm install -g openclaw@latest
openclaw onboard --install-daemon
That's it. The onboard command installs the Gateway as a user-level service and walks you through connecting your first channel and LLM provider. The dashboard opens at http://127.0.0.1:18789/.
For development or tinkering from source:
git clone https://github.com/openclaw/openclaw.git
cd openclaw
pnpm install && pnpm ui:build && pnpm build
pnpm openclaw onboard --install-daemon
Channel security uses a pairing model by default: unknown senders get a code, you approve with openclaw pairing approve <channel> <code>. It's not enterprise SSO, but it prevents random strangers from talking to your agent.
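The pairing gate can be modeled as a tiny state machine. A hypothetical sketch of the approve-by-code flow, not the project's implementation (all names here are invented):

```typescript
// Unknown senders get a one-time code and are held; their messages are
// dropped until an operator approves the code out of band.
type PairingState = "pending" | "approved";

const peers = new Map<string, { code: string; state: PairingState }>();

function onIncoming(senderId: string): "deliver" | "challenge" | "drop" {
  const peer = peers.get(senderId);
  if (peer?.state === "approved") return "deliver";
  if (!peer) {
    // First contact: issue a short pairing code and wait for approval.
    const code = Math.random().toString(36).slice(2, 8).toUpperCase();
    peers.set(senderId, { code, state: "pending" });
    return "challenge";
  }
  return "drop"; // already challenged, still unapproved
}

function approve(senderId: string, code: string): boolean {
  const peer = peers.get(senderId);
  if (!peer || peer.code !== code) return false;
  peer.state = "approved";
  return true;
}
```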
You can switch between release channels — stable, beta, or dev — with openclaw update --channel <name>, and run openclaw doctor after updates to verify health and apply migrations.
Why 247K Stars in Four Months
GitHub star counts are a vanity metric until they aren't. OpenClaw's growth reflects three structural advantages that most AI products don't have:
Zero new interfaces to learn. Every other AI tool asks you to adopt its UI. OpenClaw uses the UIs you've already internalized — your messaging apps. The activation energy is near zero: install the daemon, scan a QR code for WhatsApp, done.
Your hardware, your data. LLM API calls go directly from your machine to the provider. OpenClaw is a routing layer, not a proxy service. There's no third-party server seeing your conversations, no account to create with a startup that might pivot or shut down. For anyone under compliance constraints — or anyone who's watched enough AI startups fold — this matters.
Model freedom. Claude today, GPT tomorrow, Ollama offline on a flight. Switch per conversation or per skill. The gateway pattern means your agent's capabilities aren't locked to a single provider's roadmap.
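That per-conversation switching amounts to a small routing table in front of the providers. A hypothetical sketch, with invented defaults and override mechanics:

```typescript
// Per-conversation provider routing behind a single gateway: a global
// default, per-conversation overrides, and a local fallback when offline.
type Provider = "claude" | "gpt" | "deepseek" | "gemini" | "ollama";

const defaults: { provider: Provider } = { provider: "claude" };
const overrides = new Map<string, Provider>(); // conversationId -> provider

function resolveProvider(conversationId: string, offline = false): Provider {
  if (offline) return "ollama"; // local model when there's no network
  return overrides.get(conversationId) ?? defaults.provider;
}
```

Because routing happens in the gateway rather than in any one client, swapping providers never requires touching a channel integration.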
What's Still Missing
OpenClaw isn't finished. Group chat behavior is still rough — the agent can be overeager in multi-person threads. Rate limiting across channels needs work. The skills ecosystem is growing but young, with community contributions varying in quality.
The bigger open question is sustainability. MIT license, no company behind it, no revenue model. Steinberger and a growing contributor base are building in the open, but the infrastructure cost of maintaining 20+ channel integrations doesn't scale on goodwill alone. Whether OpenClaw follows the path of successful community-driven projects like Ollama or fragments under its own growth remains to be seen.
The Bottom Line
OpenClaw's bet is that the best AI interface is no interface — or rather, every interface you already use. In a market crowded with standalone AI apps competing for your attention, a daemon that quietly runs on your machine and shows up wherever you already are is a genuinely different approach. That 247,000 developers agree says something about where personal AI is heading: not another app to open, but an invisible layer woven into the tools you never left.

