A founder of a 27-person services firm is on his sofa on a Saturday morning, unwell, contemplating whether to log in for an hour to keep the AI-driven proposals workflow running. Six months ago he built the stack himself: a custom GPT trained on the firm’s voice and past proposals, a set of Zapier automations that pushes finalised drafts into the CRM, a weekly review process he runs himself. It saved him roughly 12 hours a week for the first three months.
Today, flu-ridden, he is realising that nobody else in the firm knows how the stack actually works. If he sits this out, the proposals queue stalls. He looks at the kettle. He looks at the laptop. He realises he has built a tool that, like every other tool he has tried to delegate over the last decade, only he can use.
Naming the pattern
AI-as-new-dependency is the failure that cuts across every AI implementation pattern in this catalogue. The founder builds an AI stack sophisticated enough to absorb meaningful work, then becomes the only person in the firm who can operate it. The implementation looks productive, but the structural pattern (founder as bottleneck) is unchanged or worse, because the workflow now depends on the AI plus the founder’s ability to interpret it.
Naming the pattern matters because most founders do not see it forming. The first three months of the new AI workflow look like a clean win, and the dependency creeps in afterwards.
Why does the pattern emerge so reliably?
The founder is usually the most AI-fluent person in the firm. They build for themselves because that is the easiest path. Their optimisation target is their own output; the team’s ability to maintain the system is an afterthought. They underestimate the documentation and onboarding work because they hold the system in their head and it feels obvious. Six months in, the founder is the single point of failure for the stack.
The pattern is structurally similar to the founder building the firm itself: built to fit the founder’s brain, hard to hand over later, dependent on the person who created it.
The first failure case: founder unavailable
Holiday. Illness. Crisis elsewhere. The AI workflow stalls because nobody else can interpret the outputs, troubleshoot the failures, or update the prompts when something changes upstream. The team has no permission to touch the stack and no understanding of how it works, so they wait.
Two weeks of holiday becomes two weeks of a quiet operations problem. The founder returns to a backlog of issues the AI was supposed to be handling. The team feels relieved that the founder is back; the founder feels betrayed by their own tooling. The dependency has revealed itself.
The second failure case: complexity exceeds maintainability
Multiple AI systems, custom fine-tuned models, integration with internal databases, specialised guardrails. When something breaks (and it will), only the founder can diagnose. When the system needs updating (and it will), only the founder can do it. Even the founder can lose track of what they built six months in.
This is the form the trap takes when the founder is technically capable. The founder who is not technical builds something simpler that is also more brittle; the founder who is technical builds something more sophisticated that is more brittle in different ways. Either way, the firm depends on one person to keep the AI stack working.
The five conditions that separate leverage from theatre
First, the AI systems are documented. What each system does, what it is trained on, what its constraints are, where it fails. The documentation lets someone else maintain the system if the founder is unavailable. Second, clear input/output boundaries. A custom GPT takes clear inputs (data, constraints, parameters) and produces clear outputs (recommendation, draft, analysis). If only the founder can interpret the output, the system is not truly delegated.
Third, a backup operator. Someone on the leadership team, or a consultant on retainer, who knows the stack well enough to maintain it for a week if the founder cannot. They do not need to be an AI expert, but they need to know how to diagnose common issues, when to trust outputs, and when to escalate. Fourth, regular degradation testing. Once a month, someone other than the founder runs a sample of AI outputs and validates them. This catches drift and trains the team member on how the AI thinks. Fifth, threshold-based decision authority. The AI approves anything under £2,000; anything above goes to a human. It drafts proposals; the founder or a senior person reviews them before sending. The AI magnifies human judgement; it is not a decision-maker on its own.
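The fifth condition is concrete enough to sketch. The snippet below is an illustrative example only: the function name, the return shape, and the idea that decisions arrive as amounts in pounds are all assumptions, not a description of any particular firm's stack. The point it demonstrates is that the threshold lives in one visible place, so a backup operator can find and change it.

```python
# Illustrative sketch of threshold-based decision authority.
# APPROVAL_THRESHOLD_GBP and route_decision are hypothetical names,
# not part of any real product or stack.

APPROVAL_THRESHOLD_GBP = 2_000  # AI may act alone below this figure

def route_decision(amount_gbp: float, ai_recommendation: str) -> dict:
    """Auto-approve small decisions; escalate everything else to a human."""
    if amount_gbp < APPROVAL_THRESHOLD_GBP:
        return {"action": ai_recommendation, "approved_by": "ai"}
    # At or above the threshold: the AI's recommendation is recorded,
    # but a named human must review before anything happens.
    return {
        "action": "escalate",
        "ai_recommendation": ai_recommendation,
        "review_required": True,
    }
```

Keeping the rule this explicit also makes the human override auditable: anyone reading the code, not just the founder, can see exactly where the AI's authority ends.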
The democratisation principle
OpenAI’s 2025 enterprise adoption data finds that frontier users send 6x more messages to ChatGPT and engage more intensively with advanced capabilities than median employees. The same data shows that organisations where AI is tightly concentrated with a small number of “AI experts” do not see enterprise-level ROI. Organisations that democratise AI access (making it available to most team members, with guardrails rather than restrictions) see the highest productivity gains.
For SMEs at smaller scale, the same principle holds. Distributed AI literacy beats concentrated expertise. The founder building a sophisticated AI stack alone produces a sophisticated single point of failure; the founder building a simpler AI stack the leadership team can maintain produces actual leverage.
The audit you can run today
Five questions. If I were unavailable for two weeks, would the AI stack continue to produce outputs the team can use? If the answer is no, which specific systems break? Who else in the firm can diagnose them? When was the last time someone other than me reviewed the AI outputs? What decision authority does each AI system have, and where is the human override?
Honest answers expose the dependency. The fix is iterative. Document one system this week. Train one backup operator next month. Set thresholds and degradation reviews on the calendar. Resist the urge to keep building before this housekeeping is done. The leverage you wanted from the AI stack is on the other side of these conditions; without them, the elegant new tool is the elegant new bottleneck.
What to do this week
Pick the AI system in your firm that you rely on most heavily. Document it: what it does, what it is trained on, where it fails. Identify one person other than yourself who could maintain it for a week. Schedule a 30-minute walkthrough where they ask questions and you answer. Set a calendar reminder to do a degradation review in 30 days, with that person doing the reviewing.
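The documentation step does not need a special tool. As a hypothetical sketch (the field names and example contents here are invented for illustration, not a standard), even a plain structured record covers the essentials: what the system does, what it is trained on, where it fails, and who the backup operator is.

```python
# A minimal, hypothetical documentation record for one AI system.
# Field names and values are illustrative; adapt them to your own stack.
proposal_gpt_doc = {
    "name": "Proposals GPT",
    "purpose": "Drafts client proposals in the firm's voice",
    "trained_on": "Past proposals and the firm's tone-of-voice guide",
    "inputs": ["client brief", "budget range", "deadline"],
    "outputs": ["draft proposal, always reviewed by a human before sending"],
    "known_failures": [
        "Invents service lines the firm does not offer",
        "Uses stale pricing after the rate card changes",
    ],
    "backup_operator": "Named person who can maintain this for a week",
}
```

If the honest value for "backup_operator" is nobody, the record has done its job: it has surfaced the dependency in writing.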
If the answer to “who else could maintain this” is “nobody,” that is the diagnosis. The next move is finding that person and bringing them up to speed before extending the AI stack any further.
If you want a second pair of eyes on which AI systems in your firm have become founder-dependent without you noticing, book a conversation.



