Chat-First Interfaces Are a Dead End

JAN 28 26

I built a chat interface for dental practices. It worked. And then I watched practices stop using it for exactly the tasks it was designed for.

DentGPT started as a conversational tool: type what you need, get a response. Draft a review reply. Generate a social post. Write patient outreach. Usage in the first month was enthusiastic. By month three, the pattern was clear. Practices used chat for simple, one-shot tasks (reply to this review, rewrite this paragraph) and abandoned it for anything that required multiple steps or decisions.

The problem isn't the AI. The problem is the interface metaphor. Chat implies conversation, and conversation implies that the human stays engaged for every turn. For a practice owner squeezing in marketing work between patients at 9 PM, "stay engaged for every turn" is exactly wrong. They want to delegate, not converse. Set a direction, walk away, come back to something finished. Chat can't do that.

Review fatigue is the specific failure mode I keep seeing. You ask the AI to draft five social posts for the week. It generates them. You review them, edit two, approve three. Next week, same thing. By week four, you're not reviewing anymore. You're approving without reading, or you've stopped using the feature entirely. The interface created a review obligation that scaled with usage. Every additional capability meant more things to review. More decisions to make. More fatigue.

The shift I'm watching (and building toward) is from conversational UI to orchestrated UI. Instead of "chat with an AI about your marketing," it's "set your marketing preferences once, approve a plan, and the system executes." The human involvement moves from per-output review to per-strategy approval. You're not editing individual posts. You're adjusting the direction, the tone, the frequency. The AI handles execution within those boundaries.
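
To make that concrete, here's a rough sketch of where the human touchpoint moves under that model. None of these names (MarketingPreferences, WeeklyPlan, draftWeeklyPlan) are DentGPT's actual API; they're hypothetical, and the generation step is elided. The point is the shape of the workflow.

```typescript
// Hypothetical sketch of "per-strategy approval": the owner sets boundaries
// once, approves a plan, and execution happens without per-output review.

interface MarketingPreferences {
  tone: "friendly" | "clinical" | "playful";   // set once by the practice owner
  postsPerWeek: number;                        // frequency boundary
  channels: ("google" | "facebook" | "instagram")[];
  topicsToAvoid: string[];                     // hard constraints on generation
}

interface PlannedItem {
  channel: string;
  scheduledFor: Date;
  draft: string;        // generated by the AI within the preference boundaries
}

interface WeeklyPlan {
  items: PlannedItem[];
  status: "pending_approval" | "approved";
}

// The AI side: turn preferences into a concrete plan. Generation is elided;
// only where the review obligation sits matters here.
declare function draftWeeklyPlan(prefs: MarketingPreferences): Promise<WeeklyPlan>;

// The single human touchpoint: approve (or adjust) the plan, not each output.
function approvePlan(plan: WeeklyPlan): WeeklyPlan {
  return { ...plan, status: "approved" };
}

// Execution runs inside the approved boundaries, with no further review.
async function runWeek(
  prefs: MarketingPreferences,
  publish: (item: PlannedItem) => Promise<void>
): Promise<void> {
  const plan = approvePlan(await draftWeeklyPlan(prefs));
  for (const item of plan.items) {
    await publish(item); // e.g. queue each draft for its scheduled date
  }
}
```

The only decision that recurs week to week is whether the plan still matches the strategy; everything else stays inside the boundaries set up front.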

This matters for healthcare product design specifically because healthcare professionals are the most time-constrained users I've encountered. Solo-practice dentists spend 27.2 hours per week on admin. They don't have time for conversations with software. They need delegation to software.

The products that are working for our practices share a common pattern: they compress decision-making into the smallest possible surface area. Dentplicity shows explainable scorecards and a short list of actions that map to real data inputs. The practice owner spends five minutes understanding their position, picks the moves that make sense, and acts. No conversation required. The AI has already done the analysis, the geo-indexing, the entity resolution, the signal compression. The human makes the strategic call.
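
As a rough illustration (the field names here are mine, not Dentplicity's actual schema), the entire decision surface can be a scorecard plus a short action list, with each item carrying the data inputs that justify it:

```typescript
// Hypothetical shape of a decision-compressed surface: a scorecard the owner
// reads in minutes, with each recommended action tied back to its data inputs.

interface ScorecardMetric {
  name: string;        // e.g. "Local search visibility"
  score: number;       // 0-100, already computed upstream by the AI pipeline
  explanation: string; // why the score is what it is
}

interface RecommendedAction {
  label: string;       // e.g. "Respond to three unanswered reviews"
  basedOn: string[];   // the data inputs that justify the recommendation
  estimatedEffortMinutes: number;
}

interface PracticeScorecard {
  metrics: ScorecardMetric[];
  actions: RecommendedAction[]; // short list; the owner picks, the system executes
}
```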

I think chat will survive for two use cases: genuine exploration (when you don't know what you want yet) and edge cases that fall outside pre-built workflows. For everything else, the future is interfaces where AI operates within human-set boundaries rather than through human-directed conversation. Less chatbot, more autopilot with a dashboard.

The irony isn't lost on me. I built DentGPT as a chat-first product, and I'm now redesigning it around orchestrated workflows. The market taught me what I should have seen from the data: ~50% of DentGPT usage happens nights and weekends, when owners are tired and want to check a box, not have a conversation. Building for that reality means building for delegation, not dialogue.