A founder I was talking with last month had read the latest McKinsey “Superagency in the Workplace” report on the train down to London. By the time she arrived she felt vaguely behind. Her firm runs at £4m, she has a small leadership team, and her actual AI use that week had been three things: rewriting a board update, summarising a 40-page supplier proposal, and working through a difficult conversation with a head of sales by typing it out and asking Claude what she was missing.
She wanted to know whether that was enough, or whether the panel data meant she had skipped something. The honest answer is the post that follows. The list of what founders genuinely use AI for is shorter than the marketing, less impressive than the conference panels, and almost identical from one founder to the next. If she is doing two of the five things on it, she is not behind. She is exactly where the median founder is.
What does the panel data actually claim founders are doing with AI?
The panel data, drawn from McKinsey’s “Superagency in the Workplace”, Bain’s “Generative AI Uptake Is Unprecedented”, and BCG’s “AI at Work: Momentum Builds But Gaps Remain”, paints a confident picture. Around 65 to 80 percent of executives report using generative AI personally. The framing leans heavily toward strategic reinvention, competitive advantage, and breakthrough use cases. Read on a train, that picture makes it hard not to feel a step behind the curve.
Two things are worth holding alongside the headline number. First, Bain’s own data notes the gap: roughly 65 percent personal use against approximately 15 percent of organisations at scaled production. That is a curiosity-and-experimentation number, not an embedded-practice number. Second, the survey instruments ask whether you have used a tool, not how often or what for. The bar to count as a user is low.
Why does that picture leave so many founders feeling behind?
It leaves founders feeling behind because the panel data describes the upper end of intent, not the middle of practice, and the gap between the two is large. Cal Newport calls this the productivity paradox: the inputs are everywhere, but the outputs are not yet visible at the firm level. The independent counterweight is the NBER w34836 study by Bloom and colleagues, surveying around 6,000 CEOs and CFOs across 33 countries.
Their headline finding is sobering. Nearly 90 percent of those CEOs and CFOs reported no productivity or employment impact from AI, despite roughly 67 percent personal use. The Register’s coverage put it well: thousands of executives are struggling to find the productivity boom they keep being promised. The reading is straightforward. AI works at the task level, but the gap between “I use it” and “it has changed how my firm runs” is wider than any vendor deck shows. If you are reading the McKinsey number and feeling behind, the NBER number is the corrective.
What do founders genuinely use AI for, the honest list?
The honest list, anchored in the Anthropic Economic Index and OpenAI’s own usage disclosure, is five things. Writing and editing. Summarising long material. Sparring on a decision when there is no one else to think it through with. Light coding and spreadsheet work, from formulas to scripts to a cleaning pass on a CSV. Drafting comms, from a difficult email to a board update.
Each one is unglamorous. None of them is “innovation” in the conference-panel sense. The Anthropic Economic Index data shows that around 70 percent of conversations are augmentative rather than fully delegated, and the highest-frequency categories are writing, summarising, and answering questions. OpenAI’s “How People Are Using ChatGPT” page tells the same story from a different vendor’s vantage. The shape rhymes because the underlying jobs do. Founders reach for AI when they need a draft, a summary, a second opinion, or a small bit of code, and they reach for it frequently outside office hours, on a Sunday evening or a Tuesday at 6.30am, when there is no colleague to ask. Together those five uses account for the majority of real founder activity in the model traces.
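The “light coding” bucket is concrete enough to sketch. A minimal example of the cleaning pass a founder might ask for on an exported CSV: the column name, the normalisation rules, and the dedup criterion are all hypothetical choices, not anything from the index data.

```python
# A hypothetical cleaning pass on rows exported from a CRM or spreadsheet:
# trim stray whitespace, normalise emails to lowercase, drop exact duplicates.
def clean_rows(rows):
    seen = set()
    cleaned = []
    for row in rows:
        # Strip whitespace from both column names and values.
        row = {k.strip(): v.strip() for k, v in row.items()}
        # Emails compare case-insensitively, so normalise before deduping.
        if "email" in row:
            row["email"] = row["email"].lower()
        key = tuple(sorted(row.items()))
        if key not in seen:
            seen.add(key)
            cleaned.append(row)
    return cleaned

# Typical use: rows from csv.DictReader in, cleaned rows to csv.DictWriter out.
```

This is the scale of the job: a dozen lines that replace an hour of manual spreadsheet work, and exactly the kind of script the model drafts well on a first pass.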
Which uses sound impressive but rarely actually happen?
Three uses sound impressive in panel decks but rarely happen at depth in the founder’s own week. Novel strategic planning, where the founder hopes the model will surface a market move they could not see themselves. Customer insight beyond surface synthesis, where the goal is genuine pattern recognition across thousands of interactions. Hiring decisions, where the model is asked to score candidates or shape a final call.
The pattern in each is the same. The model can scaffold the work. It can frame options, draft an analysis, summarise a transcript pile, or sketch a candidate matrix. The load-bearing call still sits with the founder, and trying to delegate it produces output that looks plausible until you read it carefully. The Brynjolfsson, Li, and Raymond NBER paper on generative AI at work captures the same boundary at the empirical level: task-level gains are real and measurable, firm-level reinvention is far rarer and slower than the marketing implies. Ethan Mollick’s calibration in “Using AI Right Now” is useful here. Pick one or two real workflows and go deep. Treat the rest as scaffolding, not as decisions, and the firm-level shift will follow once two stable habits compound.
Where does the honest list leave you, and where to deepen?
It leaves you in a more useful place than the panel data does. If you are doing two of the five honest uses already, you are not behind. You are inside the median, and the next move is depth on the two you have rather than breadth across new tools. Pick whichever you reach for most often, then add a structural layer that makes each of the next 50 uses better than the last.
Three concrete deepenings. If your most-used surface is writing, build a personal style guide the model reads at the start of every draft. If it is summarising, move from one-shot summaries to a weekly synthesis pass that pulls themes across all your meetings. If it is decision sparring, keep a short decisions log so you can audit what you and the model got right or wrong. The deeper read on the productivity-paradox angle sits in why AI feels like it isn’t for you. The framework that ties the five honest uses to where they belong in your week sits in the EAD-Do framework recast for AI, and the wider cluster opens with AI for your own work.
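The decisions log needs nothing more elaborate than an append-only file. A sketch, assuming a local decisions.csv and the four columns shown, all of which are hypothetical choices you would adapt to your own week:

```python
import csv
import datetime
import pathlib

LOG = pathlib.Path("decisions.csv")  # hypothetical location for the log

def log_decision(decision, model_view, my_call, review_by):
    """Append one row so the call can be audited at the review date."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "decision", "model_view", "my_call", "review_by"])
        writer.writerow([
            datetime.date.today().isoformat(),
            decision,
            model_view,
            my_call,
            review_by,
        ])
```

The value is the audit loop, not the tooling: a monthly re-read of the log shows you where the model’s sparring actually changed a call and where it merely agreed with you.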
If you would like a second pair of eyes on which two of the five you should deepen first, book a conversation.