The DIY trap: why £240 of ChatGPT a year quietly costs you £30k

A team member at a kitchen table with a laptop, notebook, and coffee, working through AI experiments
TL;DR

DIY AI adoption looks cheap because the subscription is cheap. The hidden cost is the staff time disappearing into unstructured experimentation, which at senior loaded rates runs £25,000 to £50,000 a year per person.

Key takeaways

- A staff member spending 5 hours a week on AI experimentation costs £13,000 to £24,700 a year in opportunity cost at loaded rates
- The visible spend (£240 to £600 a year of subscriptions) is trivial; the hidden spend is real money
- Three dynamics turn DIY into waste: no scope, no success metric, no end date
- Tool sprawl, fragmented learning, and data security drift compound the cost
- DIY converts to delivery once you scope a single use case, set a success metric, and time-box the experiment

You bought the Copilot licences six months ago. Your top operator has become “the AI person” by default. They are enthusiastic, they have built a few things, and last week they sent you a Notion page of “AI experiments.” You cannot tell which of them is delivering value and which is just keeping them busy.

This is the DIY trap, and it is the most common pattern I see in the first year of SME AI adoption. The subscription was cheap. The licence was cheap. The hours your senior people are putting into it are not, and the cost almost never gets counted.

Why does DIY AI look rational when it isn’t?

The visible cost of DIY AI exploration is trivial. ChatGPT Plus is £20 a month. Microsoft Copilot Pro is similar. A small team can run for £600 a year of subscriptions and nobody on the leadership team flinches. That visible cost is the only line in the budget, and so it is the only line that gets discussed when the question of AI adoption comes up at the board.

The hidden cost lives in your senior people’s calendars, where it does not show up as a line item but does show up as a missing deliverable somewhere else.

What the time actually costs at loaded rates

A senior team member spending five hours a week on AI experimentation is spending roughly 260 hours a year of capacity (five hours across 52 weeks). At loaded rates of £50 to £95 an hour for SME senior staff, that is £13,000 to £24,700 a year of opportunity cost per person, every year, against a subscription cost of £240. The IPSE 2024 UK day-rate survey puts management consultants at £763 a day and software developers at £575, and internal team rates land in similar bands once on-costs are included.

With two or three senior people on the same pattern, the cost scales to £30,000 to £75,000 a year of capacity. None of it appears in any AI budget. All of it is real spend, because the team member is not doing the work the business hired them to do while they are experimenting with prompts.
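The arithmetic behind these figures is simple enough to sketch. A minimal back-of-envelope calculation, using the article's own assumptions (five hours a week of experimentation, £50 to £95 loaded hourly rates, a £240-a-year subscription); the constants are illustrative inputs, not measured data:

```python
# Back-of-envelope opportunity-cost sketch using the article's assumptions.
HOURS_PER_WEEK = 5
WEEKS_PER_YEAR = 52
LOADED_RATE_LOW = 50    # £/hour, SME senior staff, lower bound
LOADED_RATE_HIGH = 95   # £/hour, upper bound
SUBSCRIPTION = 240      # £/year, e.g. ChatGPT Plus at £20/month

hours = HOURS_PER_WEEK * WEEKS_PER_YEAR   # 260 hours of capacity a year
cost_low = hours * LOADED_RATE_LOW        # £13,000
cost_high = hours * LOADED_RATE_HIGH      # £24,700

print(f"Hidden cost per person: £{cost_low:,} to £{cost_high:,} a year")
print(f"Visible subscription:   £{SUBSCRIPTION} a year")
print(f"Hidden-to-visible ratio: {cost_low // SUBSCRIPTION}x to {cost_high // SUBSCRIPTION}x")
```

Change the hours or the rates to match your own team and the shape of the result stays the same: the hidden figure is two orders of magnitude larger than the visible one.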

The trap is not the time itself. Time spent learning AI is potentially valuable. The trap is the lack of structure that turns potential learning into mostly-wasted hours.

The three dynamics that turn DIY into waste

Three dynamics combine to make DIY AI experimentation expensive in capacity and thin in output. Each one is fixable on its own. Together they are the failure pattern.

No scope is the first. The experimentation drifts across whatever caught the team’s attention this week. One week it is auto-generating customer emails. The next week it is summarising meetings. The week after, it is a custom GPT for the operations team. None of it gets finished, because the next idea has already arrived.

No success metric is the second. Without a measure of what good looks like before the work starts, nobody can tell whether anything is working. The senior person experimenting feels productive, the founder asks “is it working?”, the answer is “yes, we are getting some good results”, and nobody can pull the thread further because there is no number to compare against.

No end date is the third. The experimentation runs indefinitely because there is no defined moment when someone reviews the output and decides to scale, kill, or pivot. The time accumulates quietly. Six months in, the senior person has invested several hundred hours and the business has nothing operational to show for it.

Tool sprawl and the security drift

Two further costs sit on top of the time tax. Tool sprawl is the first. Half-deployed tools accumulate as the experimentation moves on. Each one carries a small monthly cost and a small monthly maintenance burden. By month nine an SME running DIY AI typically has six to ten subscriptions on the company card, most of them sitting unused but not cancelled.

Data security drift is the second. DIY experimenters working at speed often paste proprietary inputs into public LLM tiers without realising the data residency implications. Most of the time nothing happens. Occasionally, something does, and the cost of that something dwarfs everything else in this article.

How to convert DIY into delivery

You can keep the cheap subscription and the willing senior person and turn the pattern productive with three small disciplines. Pick a single use case. Give it a success metric. Time-box it.

A single use case means one specific workflow, one team, one before-and-after measure. Not “AI for the operations team” but “drafting follow-up emails after client calls, measured against the time-to-send and the rewrite rate from the recipient.” The use case is the constraint that lets the experimentation produce learning rather than drift.

A success metric means a number you would be willing to put in front of the board. “Reduce time-to-send by 50%” or “achieve 80% acceptance rate from the operations lead before sending.” The metric tells you whether the work is working.

A time-box means four to six weeks of focused effort, then a review. At the review, the experiment either produces a deliverable that goes into operational use, or it produces a written conclusion about why this particular use case did not work. Both outcomes are valuable. Both end the drift.

When DIY is genuinely the right answer

DIY AI is the right answer in three situations. First, when the goal is education rather than capability. Letting your team play with the tools to build literacy is a fine investment, as long as you are calling it that and not pretending it is delivery. Second, when the use case is genuinely small: one workflow, one team, one specific output. Third, when you are scoping a question to take to a consultant later, and the experimentation is the diagnostic that lets you brief the consultant well.

DIY fails when it is being used as a substitute for structured delivery on a problem that needs structured delivery. The clue is usually in the founder’s question: “I am not sure if anything we have done with AI is actually working.” That sentence is the signal that the time has crossed from learning into waste.

The choice is not whether to spend on AI. The choice is whether to spend visibly on structured delivery or invisibly on unstructured time. The visible spend buys results. The invisible spend buys experience for the senior person and not much else.

If you would like to talk about how to convert DIY AI experimentation into structured delivery, book a conversation.

Sources

  • IPSE (2024). UK day-rate survey: management consultants at £763 a day, software developers at £575. Internal team loaded rates land in similar bands once on-costs are included.
  • MIT NANDA (August 2025). 95 per cent of GenAI pilots fail to deliver ROI. Study of 150 interviews and a 350-employee survey; the failure-rate baseline for AI engagement risk.
  • Bain & Company (April 2024). 88 per cent of business transformations fail to achieve their original ambitions. Audit of 24,000 cases; the structural backdrop for honest cost framing.
  • Source Global Research (2025). The UK Consulting Market in 2025. Annual analysis of UK consulting fee benchmarks, day-rates and market sizing across specialist consulting categories including AI and data.
  • McKinsey & Company (2024). From Promise to Impact: How Companies Can Measure and Realise the Full Value of AI. Five-layer measurement framework; the structural backbone for ROI defence.
  • AICPA and CIMA (2026). Executive Insights on AI Opportunities and Risks. Global survey of 1,735 executives identifying operational readiness, talent infrastructure and regulatory preparedness as the principal AI capability barriers.
  • ICAEW. Investment Appraisal: technical guidance for Chartered Accountants. The institutional reference behind opportunity-cost framing and the capital-allocation discipline a CFO will apply to an AI investment.
  • Standish Group (2020). CHAOS Report. 31 per cent of IT projects succeed on contemporary definitions; 50 per cent are challenged; 19 per cent fail. The empirical backdrop for honest engagement-success-rate framing.

Frequently asked questions

What does DIY AI experimentation actually cost an SME?

The subscription cost is small (£240 to £600 a year for ChatGPT Plus or Copilot Pro). The hidden cost is the staff time. A senior person spending five hours a week experimenting at a £50 to £95 loaded hourly rate is £13,000 to £24,700 a year. Across two or three people it scales to £30,000 to £75,000 a year, with no defined deliverable.

Why does DIY AI exploration usually fail to produce results?

Three dynamics combine. No scope, so the experimentation drifts across whatever caught the team's attention this week. No success metric, so nobody can tell whether anything worked. No end date, so the time keeps accumulating. Without a scope, a metric, and an end date, learning compounds slowly and most of the time disappears into the gap between idea and result.

When is DIY actually the right approach?

When the goal is education rather than capability, when the use case is genuinely small (one workflow, one team), or when you are scoping a question to take to a consultant. DIY fails when it is being used as a substitute for structured delivery on a problem that needs structured delivery.

How do I convert DIY experimentation into productive AI work?

Pick one use case with a clear before-and-after measure. Give it a success metric you would actually be willing to put in front of the board. Time-box the experiment to four to six weeks. Assign one person to own it, with the founder reviewing at the end. The discipline turns five hours a week of drift into a deliverable, even if the deliverable is 'this approach does not work and here is why'.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
