A 40-person professional services firm. The owner has just looked at the SaaS spend report. There are 47 active subscriptions. Three of them do roughly the same thing. Two were renewed automatically and nobody can remember why. The AI platform from last year sits at line 31, still being charged monthly, last used in October. He had been planning to research a second AI consultant. He is now wondering whether the consultant is the bottleneck at all.
He is not the only one with this report. The instinct after a failed AI engagement is to find a better tool, a better consultant, a better engagement shape. The data says the highest-leverage move is something else entirely. Step backward first. Cut the application count and stabilise the stack. Then the second engagement has the foundation it needs.
How big is the tool sprawl problem actually?
The average company runs 89 applications. Mid-market companies run 137. Enterprises run 200 or more. Shadow IT, the unsanctioned tools that staff have signed up for individually, adds another 30 to 50 percent on top of the official portfolio. These figures come from Waymaker OS’s 2026 analysis, drawing on cross-sector SaaS portfolio data. Most SMEs sit between small-company and mid-market scale, which means somewhere between 89 and 137 applications quietly running.
Inside that portfolio, half of what is paid for goes unused. CloudEagle’s research finds that 53 percent of SaaS licences sit idle, with 30 to 35 percent of SaaS spend disappearing into licences nobody touches. In some environments, up to 73 percent of provisioned users never use their assigned software at all. The portfolio is not just larger than people think. It is also less utilised.
The pattern compounds because nobody owns the whole picture. Procurement signs the contracts. IT integrates the tools. Department heads request the additions. Finance approves the renewals. Nobody has the cross-cut view to ask “do we still need this, and if so, why is nobody logging in?”
What does fragmentation cost in real money?
The productivity drag is the larger number. The Waymaker analysis estimates that tool fatigue costs the average knowledge worker 51 minutes per week. Over one in five workers lose two or more hours weekly. Fully loaded at £75 per hour, 51 minutes a week comes to roughly £3,300 per person per year, or about £330,000 in a 100-person company. Even at SME scale, that is £100,000 to £165,000 a year for a 30-to-50-person firm.
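A back-of-envelope version of that arithmetic, as a short Python sketch. The 51-minute figure and £75 rate come from the paragraph above; the headcounts are illustrative, so substitute your own:

```python
# Back-of-envelope cost of tool fatigue, from the figures cited above.
MINUTES_LOST_PER_WEEK = 51   # Waymaker estimate per knowledge worker
HOURLY_RATE_GBP = 75         # fully loaded cost per hour
WEEKS_PER_YEAR = 52

def annual_drag(headcount: int) -> float:
    """Annual productivity drag in GBP for a firm of the given size."""
    hours_lost = MINUTES_LOST_PER_WEEK / 60 * WEEKS_PER_YEAR * headcount
    return hours_lost * HOURLY_RATE_GBP

for size in (30, 50, 100):
    print(f"{size} people: £{annual_drag(size):,.0f} per year")
# 30 people: £99,450 · 50 people: £165,750 · 100 people: £331,500
```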
The licence waste sits on top. Gartner research finds that organisations waste 25 to 30 percent of SaaS spend on unused licences, duplicated functionality across overlapping tools, shelfware that was purchased but never deployed, and premium tiers when basic features would suffice. For an SME spending £500,000 a year on SaaS, that is £125,000 to £150,000 in annual waste before counting the productivity drag.
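The waste range is a one-line calculation. The £500,000 spend is the illustration above, not a benchmark:

```python
# Licence-waste range implied by the Gartner 25-30 percent figure.
annual_saas_spend_gbp = 500_000   # illustrative SME figure from above
low, high = annual_saas_spend_gbp * 0.25, annual_saas_spend_gbp * 0.30
print(f"£{low:,.0f} to £{high:,.0f} wasted per year")   # £125,000 to £150,000
```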
Then the governance overhead, which is invisible until it is not. Access reviews when someone leaves. Compliance documentation across dozens of vendors. Vendor risk assessments. Data retention policies that vary by tool. Exit procedures when one of them gets acquired. Each application is a small contract, a relationship, a set of integrations, and a support channel. Multiply by 137 and the operational drag is substantial.
How does fragmented data limit what AI can do?
This is the technical answer that explains the financial one. AI effectiveness scales with data accessibility. A company with a unified data environment can deploy AI that understands context across all of its work. A company with 89 fragmented applications can only deploy AI that works inside individual silos, because the data connectivity infrastructure does not exist to support cross-silo intelligence.
Waymaker’s analysis estimates AI in a unified environment delivers roughly four times the productivity benefit of AI in a fragmented one.
This is what often went wrong in the first engagement, even when nobody named it. The AI system was asked to make decisions based on data that lived in five different places, had different definitions across those places, and was never reconciled into a consistent model. The technical team spent six months building integrations, only to discover that data quality from some sources made the model unreliable. Or the model worked initially and then went stale because the integration broke when a source system updated its API.
These are not failures of AI. They are failures of data infrastructure. Before deploying another AI system, the underlying fragmentation has to be addressed.
What does a consolidation roadmap actually look like?
Zylo’s framework is the cleanest available: inventory, identify, plan, execute, monitor. Inventory means a complete, centralised view of the SaaS environment, including tools acquired through informal channels. Useful sources are accounts payable records, expense reports, SSO logs, contracts and renewal schedules, current spend, and usage metrics by user and department.
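As a concrete illustration, here is a minimal sketch of what the inventory step produces once those sources are merged. The record fields are assumptions for illustration, not Zylo’s schema, and the three apps are invented examples:

```python
from dataclasses import dataclass

@dataclass
class SaaSApp:
    """One row in the centralised inventory (fields are illustrative)."""
    name: str
    category: str            # e.g. "project management"
    annual_cost_gbp: float   # from accounts payable and contracts
    licences: int            # seats paid for
    active_users_90d: int    # from SSO logs or vendor usage metrics
    renewal_date: str        # from the renewal schedule
    owner: str               # requesting department, where known

inventory = [
    SaaSApp("ToolA", "project management", 12_000, 40, 31, "2026-03-01", "ops"),
    SaaSApp("ToolB", "project management", 9_500, 40, 4, "2026-01-15", "unknown"),
    SaaSApp("ToolC", "ai platform", 18_000, 25, 0, "2026-02-10", "unknown"),
]
```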
Identify means targeting categories with overlapping tools, low adoption, and disproportionate spend. The firm pays for three project management tools but actively uses one. Two document collaboration platforms generate constant friction about which to use for what. A pilot AI tool from last year is still being charged. These are the obvious targets.
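Continuing that sketch, a first pass at the identify step can score each app on adoption and category overlap. The 30 percent adoption threshold is an assumption to tune, not a rule from the framework:

```python
from collections import defaultdict

def flag_targets(inventory, min_adoption=0.3):
    """Flag apps with low adoption or overlapping function (threshold is an assumption)."""
    by_category = defaultdict(list)
    for app in inventory:
        by_category[app.category].append(app)
    targets = []
    for app in inventory:
        adoption = app.active_users_90d / app.licences if app.licences else 0.0
        overlaps = len(by_category[app.category]) > 1
        if adoption < min_adoption or overlaps:
            targets.append((app.name, round(adoption, 2), overlaps, app.annual_cost_gbp))
    return targets

# ToolA and ToolB overlap in one category; ToolC has zero adoption.
# All three surface for review, for different reasons.
print(flag_targets(inventory))
```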
Plan means structured, phased execution with milestones, ownership across IT and business units, communication strategies, compliance considerations for data transfer or contract termination, and risk mitigation covering data migration and user enablement. Start with low-risk, low-usage applications to build momentum, then progressively address higher-impact changes. Four to six months is realistic for a meaningful portfolio reduction at SME scale, less because the work is technically complex and more because staff need time to transition workflows.
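Sequencing can then be as simple as ordering the targets from least used to most used, on the theory that the least-used apps carry the least migration risk. This reuses the inventory sketch above; the phase size is arbitrary:

```python
def sequence_phases(inventory, phase_size=2):
    """Order cuts from lowest usage (lowest migration risk) upward, in phases."""
    ordered = sorted(inventory, key=lambda app: app.active_users_90d)
    return [ordered[i:i + phase_size] for i in range(0, len(ordered), phase_size)]

for n, phase in enumerate(sequence_phases(inventory), start=1):
    print(f"Phase {n}: {[app.name for app in phase]}")
# Phase 1: ToolC and ToolB (near-zero usage, quick wins)
# Phase 2: ToolA (higher usage, needs a real migration plan)
```

In practice the ordering would also weight renewal dates, so that cuts land before the next auto-renewal fires.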
Why does sequencing matter?
A second AI engagement run on a still-fragmented stack inherits the fragmentation. The new vendor builds the same brittle integrations. The same data quality problems surface. The same silos block cross-function intelligence. The second engagement produces the same flat dashboard twelve months later, for the same underlying reasons.
A second engagement run on a consolidated stack benefits from four times the AI leverage, lower governance overhead, and a data environment that the new system can actually use. The consolidation work is not glamorous. It is rationalising what is already there. The financial and operational impact is substantial enough on its own that the work pays for itself even before the next AI engagement starts.
The next post in the cluster covers the trust rebuild that runs alongside the consolidation work, particularly when the failed first engagement has damaged staff confidence in technology decisions. When a diagnostic audit lands honestly, it typically surfaces consolidation as the top priority.
If you would like to walk through how consolidation sequencing might apply to your stack, book a conversation.