She has read a dozen “AI in business” articles this year. She has watched two AI conference talks back to back on a Friday afternoon. She has had a credible vendor through the door pitching an org-wide rollout, and she has nodded along, because the slides were good. It is now Monday morning, and what she actually does between 7am and 9am has not changed at all.
That gap is the post. There are two AI conversations going on in 2026, and many owner-operators have only been hearing the first.
What is the second AI conversation?
The second AI conversation is about AI on your own desk, not AI in your business. It is one founder, one week, one set of recurring tasks, no team sign-off, no procurement gate, no governance committee. The first conversation, the one the conferences run on, is about deploying AI across customer service, marketing, finance, operations. Both matter. They are not the same project, and for owner-operators the second one moves first.
The distinction is not academic. McKinsey’s 2025 State of AI research tracks a widening gap between individual leader experimentation and formal organisational deployment. The Stanford HAI 2025 AI Index points the same way. Senior operators are running personal AI stacks well ahead of their organisations, and the public disclosures show it. Wade Foster at Zapier laid his stack out on Lenny’s Newsletter. Tobi Lütke at Shopify wrote the “AI is now a fundamental expectation” memo to his own company in April 2025. Simon Willison keeps a running log on his blog of what he actually uses. None of that is a deployment programme. It is personal practice.
For UK owner-operators of services SMEs in the £1m to £10m band, the implication is direct. You are closer to Foster’s situation than to a FTSE 250 change office with an AI workstream. Your first month of useful AI work happens on your desk, in your week, on tasks you already own.
Why does personal practice usually move first?
Because there is nothing in the way. Personal AI practice has no vendor selection, no data classification review, no security sign-off, no change-management plan, no training rollout. You open the tool, put in the task, keep what works, discard what does not. Payback is measured in days, not quarters. The cost of starting, for an owner with a clear week and a card, is roughly £20 to £200 a month and an afternoon.
The contrast with organisational deployment is not a value judgement. Org-level AI work is slower because it has to be. Vendor selection takes weeks. Data governance takes longer. Security review for a tool that touches client data is a serious piece of work. Change management on a 25-person team where five did not ask for the tool is a serious piece of work too. None of that is wrong; it is what makes organisational deployment safe to live with.
The trap is treating the slower work as the only work. Founders who wait for the org programme to be ready before changing anything on their own desk lose a year, and with it the month of personal practice that would have taught them what AI is actually good at on their kind of task. Ethan Mollick’s “Using AI right now” guide makes the case directly: personal experimentation is a prerequisite for sound organisational AI decisions, not an indulgence to do after them.
What does the EAD-Do framework actually mean?
EAD-Do is the spine of this category: Eliminate, Automate, Delegate, Do. It is a recast of Rory Vaden’s Focus Funnel from his 2015 TED talk and book “Procrastinate on Purpose”. Vaden’s original five steps were Eliminate, Automate, Delegate, Procrastinate, Concentrate. EAD-Do drops Procrastinate (rarely the right call with AI sitting next to you) and renames Concentrate as “Do” so the step covers AI-assisted deep work as well as unassisted focus.
The order matters because AI sits credibly in three of the four steps but not the first. Eliminate is a human-only decision. Should this task exist at all? Should this meeting exist at all? Should this report exist at all? Adding AI to a task that should have been cut bakes the task into the firm at lower cost, which is the wrong outcome. Automate is where AI starts to earn its place: recurring, rule-shaped tasks where the model can produce a first draft you check rather than build from scratch. Delegate is where AI takes on a job a human used to own (with you holding the review). Do is where you are still the operator, but with a model alongside you for the bits that benefit from it.
Each quadrant becomes a working surface. Each surface has its own post in this cluster. The framework explainer, “The EAD-Do framework recast for AI”, is the natural next read.
What does personal AI practice quietly deliver for the founder?
It buys back hours, reduces dependency on your own attention, and lets the work scale without you scaling. Those three lines are the Founder Freedom Programme’s pitch in compressed form, and personal AI practice delivers them as a side effect of being used well. Not because AI is magic. Because the four EAD-Do steps, applied to a founder’s actual week, surface a meaningful share of tasks that should be eliminated, automated, or handed across.
The mechanism is unglamorous. A founder who used to spend 40 minutes a day on inbox triage runs an inbox-AI tool and gets it down to 10. A founder who used to spend two hours preparing for the Monday team meeting runs a meeting-prep skill and gets it down to 20 minutes. A founder who used to spend Sunday evenings writing the weekly board-style update drafts it Friday afternoon with a model and reviews it Sunday in 15 minutes. None of those wins is dramatic on its own. Stack them across a week and you have roughly five hours back: 30 minutes a day on the inbox is two and a half hours over a working week, the meeting prep returns another hour and 40 minutes, and the weekly update makes up the rest. Stack them across a quarter and you have a different relationship with your own diary.
The honest counterpart matters too. Cal Newport has written sharply about why AI has not yet made work easier for many people, and he is right that tool fragmentation and “do more, faster” reflexes eat the gains for a meaningful share of professionals. Personal AI practice that does not start with Eliminate falls into exactly this trap. The hours come back when the surface gets smaller, not when the surface gets faster.
When should you ignore this and stay with the org-level conversation?
When you are no longer the bottleneck and the org programme is genuinely live. If you have a 200-person services firm with a working AI council, a vendor under contract, an internal LLM proxy, and a quarterly governance review, your personal practice has already happened, the team’s practice is the live question, and a cluster called “AI for your own work” is mostly noise to you.
For owner-operators of £1m to £10m services SMEs in 2026, that situation is the exception. The typical reader of this cluster is one to three people away from the work, with a calendar they own, a recurring task list they own, and an org-AI rollout that is somewhere between “not started” and “exploratory”. For that reader, the second conversation is the one with the highest-value first month inside it. The cluster is built for that reader. The next post in line is the EAD-Do framework explainer; after that, the four quadrants take a post each.