You bought the Copilot licences six months ago. Your top operator has become “the AI person” by default. They are enthusiastic, they have built a few things, and last week they sent you a Notion page of “AI experiments.” You cannot tell which of them is delivering value and which is just keeping them busy.
This is the DIY trap, and it is the most common pattern I see in the first year of SME AI adoption. The licences were cheap. The hours your senior people are putting into them are not, and that cost almost never gets counted.
Why does DIY AI look rational when it isn’t?
The visible cost of DIY AI exploration is trivial. ChatGPT Plus is £20 a month. Microsoft Copilot Pro is similar. A small team can run on £600 a year in subscriptions and nobody on the leadership team flinches. That visible cost is the only line in the budget, and so it is the only line that gets discussed when AI adoption comes up at the board.
The hidden cost lives in your senior people’s calendars, where it does not show up as a line item but does show up as a missing deliverable somewhere else.
What the time actually costs at loaded rates
A senior team member spending five hours a week on AI experimentation is committing roughly 260 hours a year of capacity. At loaded rates of £50 to £95 an hour for SME senior staff, that is £13,000 to £24,700 a year of opportunity cost per person, every year, against a subscription cost of £240. The IPSE 2024 UK day-rate survey puts management consultants at £763 a day and software developers at £575, and internal team rates land in similar bands once on-costs are included.
Two or three senior people on the same pattern scales the cost to £26,000 to £74,000 a year of capacity. None of it appears in any AI budget. All of it is real spend, because the team member is not doing the work the business hired them to do while they are experimenting with prompts.
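For readers who want to plug in their own numbers, here is a minimal sketch of the arithmetic above, using the article's assumed figures (five hours a week, £50 to £95 an hour loaded rates, £240 a year in subscriptions):

```python
# Back-of-envelope opportunity cost of unstructured AI experimentation.
# All figures are the article's assumptions; substitute your own rates.
HOURS_PER_WEEK = 5
WEEKS_PER_YEAR = 52
SUBSCRIPTION_PER_YEAR = 240  # e.g. one ChatGPT Plus seat at £20/month

def opportunity_cost(hourly_rate: float, people: int = 1) -> float:
    """Annual capacity cost of DIY experimentation at a loaded hourly rate."""
    return HOURS_PER_WEEK * WEEKS_PER_YEAR * hourly_rate * people

low = opportunity_cost(50)   # £13,000 per person at the low end
high = opportunity_cost(95)  # £24,700 per person at the high end
print(f"Per person: £{low:,.0f} to £{high:,.0f}")
print(f"Hidden cost vs. visible cost: "
      f"{low / SUBSCRIPTION_PER_YEAR:.0f}x to {high / SUBSCRIPTION_PER_YEAR:.0f}x")
```

Even at the bottom of the range, the hidden cost runs at more than fifty times the visible subscription line.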
The trap is not the time itself. Time spent learning AI is potentially valuable. The trap is the lack of structure that turns potential learning into mostly-wasted hours.
The three dynamics that turn DIY into waste
Three dynamics combine to make DIY AI experimentation expensive in capacity and thin in output. Each one is fixable on its own. Together they are the failure pattern.
No scope is the first. The experimentation drifts across whatever caught the team’s attention this week. One week it is auto-generating customer emails. The next week it is summarising meetings. The week after, it is a custom GPT for the operations team. None of it gets finished, because the next idea has already arrived.
No success metric is the second. Without a measure of what good looks like before the work starts, nobody can tell whether anything is working. The senior person experimenting feels productive, the founder asks “is it working?”, the answer is “yes, we are getting some good results”, and nobody can pull the thread further because there is no number to compare against.
No end date is the third. The experimentation runs indefinitely because there is no defined moment when someone reviews the output and decides to scale, kill, or pivot. The time accumulates quietly. Six months in, the senior person has invested several hundred hours and the business has nothing operational to show for it.
Tool sprawl and the security drift
Two further costs sit on top of the time tax. Tool sprawl is the first. Half-deployed tools accumulate as the experimentation moves on. Each one carries a small monthly cost and a small monthly maintenance burden. By month nine an SME running DIY AI typically has six to ten subscriptions on the company card, most of them sitting unused but not cancelled.
Data security drift is the second. DIY experimenters working at speed often paste proprietary inputs into public LLM tiers without realising the data residency implications. Most of the time nothing happens. Occasionally, something does, and the cost of that something dwarfs everything else in this article.
How to convert DIY into delivery
You can keep the cheap subscription and the willing senior person and turn the pattern productive with three small disciplines. Pick a single use case. Give it a success metric. Time-box it.
A single use case means one specific workflow, one team, one before-and-after measure. Not “AI for the operations team” but “drafting follow-up emails after client calls, measured against the time-to-send and the rewrite rate from the recipient.” The use case is the constraint that lets the experimentation produce learning rather than drift.
A success metric means a number you would be willing to put in front of the board. “Reduce time-to-send by 50%” or “achieve 80% acceptance rate from the operations lead before sending.” The metric tells you whether the work is working.
A time-box means four to six weeks of focused effort, then a review. At the review, the experiment either produces a deliverable that goes into operational use, or it produces a written conclusion about why this particular use case did not work. Both outcomes are valuable. Both end the drift.
When DIY is genuinely the right answer
DIY AI is the right answer in three situations. First, when the goal is education rather than capability. Letting your team play with the tools to build literacy is a fine investment, as long as you are calling it that and not pretending it is delivery. Second, when the use case is genuinely small: one workflow, one team, one specific output. Third, when you are scoping a question to take to a consultant later, and the experimentation is the diagnostic that lets you brief the consultant well.
DIY fails when it is being used as a substitute for structured delivery on a problem that needs structured delivery. The clue is usually in the founder’s question: “I am not sure if anything we have done with AI is actually working.” That sentence is the signal that the time has crossed from learning into waste.
The choice is not whether to spend on AI. The choice is whether to spend visibly on structured delivery or invisibly on unstructured time. The visible spend buys results. The invisible spend buys experience for the senior person and not much else.
If you would like to talk about how to convert DIY AI experimentation into structured delivery, book a conversation.