I have not installed OpenClaw. I want to. The idea is exactly right, and I cannot stop reading the repo. But I have not run it, and the reason I have not run it is the same reason I worry about a lot of what is happening in AI tooling right now: very smart people are moving very fast without thinking about the first-order consequences of what they are plugging into their workflows.
OpenClaw (formerly ClawdBot, formerly MoltBot) is an open-source AI agent that lives in Signal, Telegram, Discord, and your terminal. The pitch is simple: instead of switching to a browser or an IDE to interact with an AI, you interact with it through the messaging apps you already have open. The execution is surprisingly good for a project that has been renamed three times.
The naming history tells you something about the culture. It started as ClawdBot (a pun on Claude that apparently made Anthropic's lawyers nervous). Renamed to MoltBot (because lobsters molt, get it?). Then renamed again to OpenClaw when the project went fully open-source and the maintainers decided they wanted a name that would not require another rename. Three names in under a year. The code moved faster than the branding, which is exactly the right priority order for an open-source project.
The concept is genuinely compelling. Ask questions about your codebase from Signal while walking the dog. Run a quick data check from Telegram at dinner. The barrier to "let me look into that" drops from "open laptop, navigate to project, run query" to "type a message in the app that is already on your phone." That friction reduction matters more than any single feature. The practice owners I serve through Dentplicity do roughly 50% of their DentGPT work between 7 PM and midnight. They are not in "work" mode. They are in "thinking about my practice" mode. Tools should meet people there. OpenClaw gets this right conceptually.
But here is where I get concerned. Every week there is another announcement that feels like an epochal shift. A new agent framework. A new tool that plugs AI into another surface area of your life. And every week, the reaction from builders I respect is immediate adoption with almost no discussion of security, privacy, or the downstream effects of routing your codebase through messaging infrastructure you do not control. OpenClaw processes queries through whichever AI backend you configure (OpenAI, Anthropic, local models). Your messages route through Signal's or Telegram's infrastructure first, then to the AI. For casual queries about public codebases, fine. But how many people are actually limiting themselves to casual queries about public codebases? The convenience of messaging-based AI comes with the privacy trade-offs of messaging-based anything, and I do not see enough people thinking carefully about that trade-off before they pipe their proprietary code through it.
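The concern is really about trust boundaries: how many parties can read your query before the model does. Here is a minimal sketch of that counting exercise. Everything in it is illustrative and hypothetical, my own names for the hops, not OpenClaw's actual architecture; the only point is that a messaging-based agent adds parties a local CLI never touches.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Hop:
    """One party a message passes through. Hypothetical model, not OpenClaw's design."""
    name: str
    sees_plaintext: bool  # can this party read the message body?

def trust_surface(hops):
    """Return the parties that can read the query in transit or at rest."""
    return [h.name for h in hops if h.sees_plaintext]

# Messaging-based agent: the relay may or may not see plaintext
# (Signal chats are end-to-end encrypted; default Telegram chats are not),
# but the bot host must decrypt the message to act on it.
messaging_path = [
    Hop("your phone", True),
    Hop("messaging relay", False),          # assuming an e2e-encrypted transport
    Hop("bot host running the agent", True),
    Hop("AI backend (hosted or local)", True),
]

# Local CLI agent: two parties, both of which you already chose to trust.
cli_path = [
    Hop("your terminal", True),
    Hop("AI backend (hosted or local)", True),
]

print(trust_surface(messaging_path))  # three readers instead of two
print(trust_surface(cli_path))
```

The interesting line is the bot host: even with an end-to-end-encrypted transport, whoever runs the agent process necessarily sees your query in the clear, and that is the hop people skip over when they evaluate these tools by the transport's encryption alone.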
The Steinberger-to-OpenAI move is worth mentioning because it reshapes the competitive landscape for these tools. When a core contributor to an open-source project joins a major AI company, the community either forks or fades. OpenClaw seems to be choosing the fork path: the project has been more active since the departure, with new contributors picking up the workstreams and governance becoming more distributed. That is the healthy response.
My own workflow is simpler than people assume. Claude Code for serious development work. OpenAI Codex when I want a different perspective or its strengths fit the task better. That is basically it. I messed with Windsurf back in January and February of 2025, tried Augment Code and CLINE around the same period. Those experiments were useful for understanding the landscape, but I settled into Claude Code as my primary tool and have not felt the pull to add more surface area. Every new tool is another trust decision, another set of credentials, another attack surface. I would rather go deep with two tools I understand than spread across six I do not.
I am not saying OpenClaw is insecure. I am saying the pattern of adoption I am watching, where smart people install something the day it ships and plug it into their development environment without asking hard questions first, concerns me. We have been waiting for AI agents that live outside the IDE, and now that they are arriving, the excitement is outrunning the diligence. The lobster is fascinating. I will keep watching it. But I am not letting it into my terminal until I understand exactly what it is doing with everything it sees. For now, the only lobster I trust without questions is the mac and cheese at Mastro's.