A 30-staff services firm I work with signed a £5,000-a-year contract for an AI customer-support tool last spring. By month eight the finance director had a problem. Real year-one spend was sitting at £31,000: licence £5,000; integration build with the firm’s ticketing system £15,000; data clean-up across eighteen months of inconsistently tagged tickets £2,000; training and change management £2,000; ongoing prompt tuning £3,000; and another £4,000 for a forced rebuild when the vendor sunset the model the integration depended on.
She was not angry at the vendor. She was angry at the procurement template that had asked for the licence price and not asked for any of the rest. The conversation that followed was the one this post is about.
What is total cost of ownership for AI?
Total cost of ownership for AI is everything the deployment costs from procurement through retirement, not just the licence on the vendor quote. Eight categories make up the real bill: licence, API tokens, integration, data preparation, prompt tuning, governance, training and change management, and ongoing maintenance including forced model upgrades. Each one is a real cost. None of them appear on the headline quote.
The frame is borrowed from enterprise procurement, where TCO has been a discipline since the ERP rollouts of the 2000s. AI complicates the picture for two reasons. Vendor pricing has shifted from fixed seats to consumption-based tokens, which means cost scales with usage. And the model layer underneath the platform is deprecated and replaced on a twelve-to-eighteen-month cycle, which forces rebuild work older software never imposed.
The Paylocity definition captures it cleanly: TCO is acquisition cost plus operating cost plus indirect cost plus lifecycle cost, summed across the period of use. For AI in 2026, the operating and lifecycle lines are typically larger than the acquisition line, which inverts the intuition many owners bring from their existing software stack.
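The four-line definition can be put into a few lines of arithmetic. This is a minimal sketch, and the mapping of the opening anecdote's figures onto the four lines is my assumption for illustration, not the firm's actual ledger:

```python
# Sketch of the four-line TCO definition: acquisition + operating
# + indirect + lifecycle, summed over the period of use.

def total_cost_of_ownership(acquisition, operating, indirect, lifecycle):
    """Sum the four TCO lines for one period of use."""
    return acquisition + operating + indirect + lifecycle

# Year-one figures from the opening anecdote, mapped onto the four
# lines (the mapping is an illustrative assumption):
acquisition = 5_000 + 15_000   # licence + integration build
operating   = 3_000            # ongoing prompt tuning
indirect    = 2_000 + 2_000    # data clean-up + training and change management
lifecycle   = 4_000            # forced rebuild at model sunset

print(total_cost_of_ownership(acquisition, operating, indirect, lifecycle))
# 31000 -- over six times the £5,000 headline licence
```

The point of writing it down, even this crudely, is that three of the four lines never appear on a vendor quote.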
Why does it matter for your business?
It matters because the gap between quoted price and real cost is consistent enough that Gartner and IDC now price it into their forecasts. IDC predicts organisations will underestimate AI infrastructure costs by thirty percent through 2027. Gartner’s 2026 figure for AI project abandonment is sixty percent, driven by the cost gap rather than technology failure. For a UK SME, getting it wrong depletes the working capital the next quarter needs.
The pattern across SmartDev, Articsledge and Hypersense research in 2026 is consistent. Year-one AI deployments at SME scale run three to five times the quoted licence on average, with integration the single largest variable. A vendor quote at £5,000 with a real year-one cost of £25,000 represents one to two percent of revenue, against the 0.2 percent the procurement template implied. That difference tips a good business case into a politically difficult one halfway through delivery.
The strategic cost is worse than the financial one. Firms that have been burned by an AI overrun become institutionally sceptical of further AI investment, even where the next case is sound. A disciplined TCO conversation at procurement prevents that arc.
Where will you actually meet the eight categories?
You meet them in roughly this order through year one. The licence is the first, typically £3,000 to £15,000 a year for SME-scale tools. API token consumption is the second, £600 to £18,000 a year depending on volume and model choice. Integration is the third, two to four times the licence at £8,000 to £30,000 for any tool connecting to your CRM, ticketing system or document store.
Data preparation is the fourth, often £2,000 to £10,000 for the deduplication, tagging and quality work the AI needs before it produces useful output. Prompt engineering and tuning is the fifth, typically £3,000 to £8,000 a year of internal time once you account for the four-or-so hours a week someone needs to spend keeping the system useful. Governance and compliance is the sixth, £2,000 to £6,000 in regulated sectors for the audit trails, DPIA work and policy that the ICO’s AI guidance increasingly expects.
Training and change management is the seventh, the line owners frequently skimp on. McKinsey recommends fifteen to twenty-five percent of year-one budget for this category, and firms that allocate less typically see adoption flatline. Ongoing maintenance and forced upgrades is the eighth, the rebuild cost when the vendor deprecates the model underneath your integration. GPT-4o and Claude 3 sunset cycles through 2025 gave firms sixty to ninety days to migrate, with rebuild work running £3,000 to £10,000 per cycle.
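The eight categories above can be bracketed into a low and a high year-one total. A sketch, using only the ranges quoted in this post; training and change management is expressed as a share of the final budget per the McKinsey 15-to-25-percent guideline, so the total is grossed up rather than summed directly:

```python
# Year-one bracket from the eight-category model. Seven categories
# carry the (low, high) GBP ranges quoted in the post; training and
# change management is a share of the grossed-up total.

CATEGORY_RANGES = {
    "licence":             (3_000, 15_000),
    "api_tokens":          (600, 18_000),
    "integration":         (8_000, 30_000),
    "data_preparation":    (2_000, 10_000),
    "prompt_tuning":       (3_000, 8_000),
    "governance":          (2_000, 6_000),
    "maintenance_rebuild": (3_000, 10_000),
}

def year_one_total(bound, training_share):
    """Sum the seven priced categories at 'low' or 'high', then gross
    up so training/change management is training_share of the total."""
    idx = 0 if bound == "low" else 1
    seven = sum(r[idx] for r in CATEGORY_RANGES.values())
    return seven / (1 - training_share)

low = year_one_total("low", 0.15)     # conservative end of every range
high = year_one_total("high", 0.25)   # pessimistic end of every range
print(f"£{low:,.0f} to £{high:,.0f}")
```

No real deployment sits at the top or bottom of every range at once, so the bracket is wide by construction; its job is to show which lines move the total, not to predict it.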
When to build a TCO model versus skip it
The decision rule is by deployment size. Below £5,000 quoted licence, a back-of-envelope check is enough: multiply by four and ask whether that fits the budget comfortably. Between £5,000 and £15,000 the same multiplier holds, but a vendor conversation about the integration estimate is worth having before signing. Above £15,000, build a formal model with all eight categories before any contract goes near a board.
Three exceptions push formal TCO down to small-scale deployments. Anything customer-facing, because the cost of getting it wrong includes brand damage, not just budget. Anything in a regulated sector, because the governance line carries audit risk. And anything you would need to defend to a board, because the discipline of building the model is the discipline that survives the conversation.
The opposite exception also matters. For a fixed-budget pilot with a clear three-month decision point, a detailed TCO model is overkill and slows down learning. The right questions for a pilot are the three Gartner suggests: what is the total committed spend, what does minimum viable deployment look like, and how will we measure success. Defer formal TCO to the production business case.
Related concepts
Vendor lock-in is the silent TCO category sitting underneath everything else. A 2026 Zapier survey of 542 executives found ninety percent expected to switch AI vendors easily, but only forty-two percent reported smooth migrations when they tried. Switching cost is a real line in any multi-year TCO model, and vendors with the highest switching costs tend to exercise pricing power once you are committed.
Hybrid AI pricing is the pattern many vendors have moved to in 2026: a fixed platform fee plus consumption-based token charges on top. Hybrid pricing makes TCO forecasting harder because the variable line scales with usage in ways the fixed line does not. Knowing which lines are fixed and which are variable in your contract is the first step to a credible budget.
Per-seat versus usage-based pricing is the underlying choice that drives the shape of the TCO model. Per-seat is predictable but rewards low-intensity users and penalises power users. Usage-based rewards careful prompting but punishes scaling. Many SME deployments end up paying both at once, which is the hybrid pattern above.
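The three pricing shapes behave differently as usage grows, which is what makes the budget line fixed, variable, or both. A sketch with hypothetical placeholder rates, not any vendor's actual prices:

```python
# Comparing the three pricing shapes discussed above. All rates are
# hypothetical placeholders chosen for illustration.

def per_seat(seats, price_per_seat=50):
    """Fixed monthly cost: predictable, indifferent to usage volume."""
    return seats * price_per_seat

def usage_based(tokens_m, price_per_m=8.0):
    """Variable cost per million tokens: rewards careful prompting,
    punishes scaling."""
    return tokens_m * price_per_m

def hybrid(tokens_m, platform_fee=300, price_per_m=5.0):
    """Fixed platform fee plus a variable token line on top."""
    return platform_fee + tokens_m * price_per_m

# Ten seats, with monthly token volume growing as adoption succeeds:
for tokens_m in (10, 50, 200):
    print(tokens_m, per_seat(10), usage_based(tokens_m), hybrid(tokens_m))
```

Run the loop and the per-seat column never moves while the other two climb with volume, which is the forecasting problem in miniature: the contract lines you need to separate are the ones that scale and the ones that do not.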
The 2-to-4x rule for AI consulting engagements is the same arithmetic in a different setting. The visible consulting fee is roughly a third of the real cost of the engagement once tooling, internal time, data preparation, change management and regulatory work are added in. The categories change names; the multiplier holds.
The point of all of this is to give the budget conversation a structure that survives contact with delivery. A formal TCO model is a forecasting tool, not a vendor weapon, and the vendors who engage with it openly are usually the ones worth working with. If you want to talk through where the eight categories sit for a specific tool you are evaluating, book a conversation.