Bootstrapping in the Age of AI Agents

JAN 12, 2026

I shipped more production code in 2025 than in my entire career before it. That's not a brag. It's a confession about how dramatically the economics of bootstrapping have shifted, and I'm still not sure I've fully internalized what it means.

When I started learning to code in 2020 with Codecademy (terminal, Git, Python), I kept a notebook of every command, every mistake, every thing that finally clicked. From 2021 to 2024 I kept going: early mornings, small projects, web stack, databases, architecture. I was competent enough to prototype, but building production software as a solo founder still felt like trying to build a house with hand tools while your neighbor had a crane.

Then the crane showed up.

In 2025 I built Dentplicity with Claude Code, Cursor, and a rotating cast of AI assistants. The real numbers: what used to take me a week of fumbling through documentation and Stack Overflow now takes an afternoon of focused prompting and review. Entity resolution across tens of thousands of dental practice records. Geo-indexing to real catchments. Scoring algorithms that compress thousands of signals into a handful of moves. I'm not claiming the code is perfect. I'm claiming it exists, it works, and it shipped to customers across the U.S. and Canada. As a one-person team.
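To make "entity resolution" concrete, here's roughly the shape of that step, stripped down to a sketch. The field names (`name`, `street`, `zip`), the blocking-on-zip choice, and the similarity threshold are illustrative assumptions, not my production pipeline, which handles far more edge cases:

```python
from collections import defaultdict
from difflib import SequenceMatcher


def normalize(s: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace."""
    cleaned = "".join(ch if ch.isalnum() or ch.isspace() else " " for ch in s.lower())
    return " ".join(cleaned.split())


def similarity(a: str, b: str) -> float:
    """Fuzzy string similarity in [0, 1] on normalized text."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()


def dedupe_practices(records: list[dict], threshold: float = 0.85) -> list[list[dict]]:
    """Group records that likely refer to the same practice.

    Blocks on zip code so we only compare records in the same area,
    then fuzzy-matches on name + street. Returns clusters of records.
    """
    blocks: dict[str, list[dict]] = defaultdict(list)
    for rec in records:
        blocks[rec["zip"]].append(rec)

    clusters: list[list[dict]] = []
    for recs in blocks.values():
        block_clusters: list[list[dict]] = []
        for rec in recs:
            key = f"{rec['name']} {rec['street']}"
            for cluster in block_clusters:
                rep = cluster[0]
                if similarity(key, f"{rep['name']} {rep['street']}") >= threshold:
                    cluster.append(rec)  # same practice, different source
                    break
            else:
                block_clusters.append([rec])  # new practice
        clusters.extend(block_clusters)
    return clusters
```

The blocking step is what makes this tractable at tens of thousands of records: you never compare two practices in different zip codes, so the quadratic matching only runs inside small buckets.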

The economics changed in three specific ways. First, the cost of the first version dropped by roughly 80%. Not because the tools write perfect code, but because they eliminate the hours spent on boilerplate, configuration, and "how do I do X in framework Y" research. The hard thinking (what to build, why, for whom) still takes the same amount of time. The translation from decision to working code got dramatically faster.

Second, the iteration speed compressed. When a practice owner told me the competitive intelligence view was confusing, I could rework the entire component in a day instead of a sprint. When 777 survey responses revealed that practices cared more about actionable moves than raw data, I could restructure the product around that insight within a week. Bootstrapping has always been about tight feedback loops. AI tools made those loops tighter than I thought possible.

Third, and this is the one I'm still processing: the ceiling for what a solo technical founder can maintain went up. I'm running data pipelines that ingest proprietary feeds, open data (DEA registrations, state boards, NPI directories, Census/BLS, map datasets, reviews, social sentiment), and web signals. I'm maintaining a product (DentGPT) that practices use nights and weekends to draft posts, emails, review replies, and landing copy tuned to their neighborhood context. A lot of that DentGPT work happens on owners' schedules, not mine. Two years ago, maintaining all of this would have required at least a two-person engineering team.

There are real limits. AI coding tools are excellent at generating code and terrible at understanding your business context. They'll happily build the wrong thing faster than you can realize you asked for the wrong thing. The 777 surveys, the 100+ Zoom calls analyzed through Google's NotebookLM, the months of cold outreach through Instantly.ai: those customer development hours haven't gotten cheaper. AI accelerates the building. It doesn't accelerate the understanding.

I also found that the tools work best when you already know enough to evaluate what they produce. My years of slow, manual learning (the notebook, the small projects, the stumbles) aren't obsolete. They're the foundation that makes AI-assisted development productive instead of dangerous. A founder who can't read the code the AI produces is delegating without oversight. That's not bootstrapping. That's hoping.

The trade-offs I made: I bootstrapped the whole thing, which meant real constraints. No external funding runway, so every feature had to drive immediate value. I kept the scoreboard simple: time-to-first-action, percentage of suggested moves completed, whether those moves show up in reviews, referrals, direct bookings. That constraint discipline came from bootstrapping, not from AI. The AI made it possible to ship the things I decided to build. The deciding was still on me.
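Since I mentioned the scoreboard, here's what "simple" means in practice, as a sketch. The move schema (`suggested_at`, `completed_at` timestamps) and the exact metric definitions are assumptions for illustration, not Dentplicity's actual data model:

```python
from datetime import datetime


def scoreboard(moves: list[dict]) -> dict:
    """Compute two simple bootstrap metrics from suggested moves.

    Each move is a dict with a 'suggested_at' ISO date and, if the
    practice acted on it, a 'completed_at' ISO date. (Hypothetical schema.)
    """
    completed = [m for m in moves if m.get("completed_at")]

    # Share of suggested moves the practice actually completed.
    completion_rate = len(completed) / len(moves) if moves else 0.0

    # Days from suggestion to the fastest completed move.
    time_to_first_action_days = None
    if completed:
        time_to_first_action_days = min(
            (datetime.fromisoformat(m["completed_at"])
             - datetime.fromisoformat(m["suggested_at"])).days
            for m in completed
        )

    return {
        "completion_rate": completion_rate,
        "time_to_first_action_days": time_to_first_action_days,
    }
```

Two numbers, no dashboard sprawl. That's the point: when there's no runway, the metrics have to be cheap to compute and impossible to argue with.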

What a one-person team can ship in 2026 versus 2022 is genuinely different. Not marginally different. Structurally different. And the founders who figure out how to pair AI coding velocity with real customer understanding (the kind that comes from 777 surveys and walking your dog while listening to synthesized interview takeaways) are going to build things that look impossible from the outside.

I'm still keeping the notebook. Dates, commands, mistakes, what finally clicked. Some habits are worth keeping even when the tools change.