Why your team can't agree on the numbers

TL;DR

The hidden cost of tool sprawl is what your team can't do because no two systems agree on the same number. The fix is a single source of truth, written down as a metrics handbook: each core metric defined once, sourced from one system, reported on one cadence. Order matters: do this before AI or analytics, because both stall on fragmented data.

Key takeaways

- The visible cost of tool sprawl is the SaaS bill. The actual cost is the time, decisions, and senior-team trust lost when no two systems agree on the same number. The average employee spends 102 minutes a day searching for information; five working weeks a year are lost to context switching.
- Forty-three percent of C-level executives report finding their information unreliable, against thirty-two percent of junior staff. The people making the largest decisions have the least confidence in the inputs.
- Consolidating tools alone doesn't fix the underlying issue. Tool sprawl is a symptom; the disease is fragmented ownership of the data. Cutting from twenty-three subscriptions to twelve still leaves eight versions of utilisation.
- The fix is a metrics handbook: six to ten core metrics, each with one definition, one canonical source, and one reporting cadence. Single source of truth comes before AI and analytics; AI on top of fragmented data is expensive and produces outputs nobody trusts.

Monday standup, sixty-person services firm. Every week begins with fifteen minutes of arguing about which version of utilisation is the real one. Sales sees one number. Finance sees another. Operations runs a third on a Google Sheet the ops lead built three years ago. The founder asks which is right and gets three plausible answers from three smart people. Nobody is wrong. Everybody is reading from a different source.

The pain feels like tool sprawl. The firm has twenty-three SaaS subscriptions. The bill is small theatre. The actual cost is the fifteen minutes every Monday, multiplied across the year, multiplied by every other meeting where the same argument breaks out about a different number.

Why does the SaaS bill miss the real cost?

The SaaS bill is the visible cost of tool sprawl. The actual cost is what your team can’t do because no two systems agree on the same number. More than half of SMB leaders report frequent data inconsistencies caused by silos. That’s not a fringe problem; it’s the median experience. The bill, in comparison, is a rounding error against what the team is losing in hours, decisions, and trust.

The numbers underneath this are bigger than founders typically imagine. The average employee spends 102 minutes a day searching for information needed to do the job. Five working weeks a year are lost to context switching for a typical knowledge worker. In a forty-person firm, that adds up to at least two full-time staff who exist only to chase information. None of that shows up on the SaaS bill.

Where the trust deteriorates first is at the top. Forty-three percent of C-level executives report finding their information unreliable, against thirty-two percent of more junior staff. That’s the worst possible inversion. The people making the largest decisions have the least confidence in the data underneath them.

The cumulative cost lands as a hesitation tax. Decisions get deferred because the data feels uncertain. Forecasts get hedged. Hiring slows because nobody can confirm the utilisation picture. Pricing reviews get pushed because the margin numbers don’t reconcile. None of that is a tool problem. All of it is what happens when the senior team can’t trust their own data.

A 2024 small business piece on this captured the morning experience precisely: by the time you’ve checked Shopify for sales, QuickBooks for cash flow, HubSpot for leads, the Google Sheet your ops lead built last week, and your inbox for the numbers your accountant sent over, you’ve already lost twenty-five minutes and it’s not yet 8am. That’s a typical founder Monday before the standup even starts.

Why doesn’t consolidating tools fix it?

The default response to feeling overwhelmed by tooling is to consolidate vendors. That misses the actual problem. You can cut from twenty-three subscriptions to twelve and still have eight versions of utilisation. Tool sprawl is the surface symptom. The disease underneath is fragmented ownership of the data. Until somebody owns the question “what does utilisation mean and where does it come from”, new tools just create new silos at lower cost.

The pattern that creates tool sprawl in the first place explains why consolidation doesn’t undo it. Tools get added under pressure to solve immediate crises. Sales needs a CRM. Finance needs a different general ledger. Operations spins up a project tracker. Each is a sensible local decision. None of them is owned at the firm level. The result is twenty-three sensible local decisions that produce one collective mess.

The fix is structural and cheaper than people expect. It starts with a single question: which metrics actually drive decisions in this business. Usually the list is shorter than founders predict. Six to ten core metrics. Utilisation, project margin, pipeline value, cash position, retention, capacity. For each of those, decide three things: what does it mean exactly, where is the canonical number sourced from, and on what cadence does it get reported. Once those three things are written down for each metric, the data has a spine the rest of the business can hang off.
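
To make that concrete, here is a minimal sketch of what those three decisions per metric could look like if the handbook were kept as structured data rather than a document; every metric name, definition, and value below is illustrative, not a prescribed format.

```python
# Illustrative only: a metrics handbook captured as structured data.
# Each metric answers exactly three questions: what it means, where the
# canonical number comes from, and how often it gets reported.
METRICS_HANDBOOK = {
    "utilisation": {
        "definition": "Billable hours / available hours, measured per week",
        "source": "Time-tracking system",
        "cadence": "Weekly, to the senior team",
    },
    "project_margin": {
        "definition": "(Project revenue - delivery cost) / project revenue",
        "source": "Finance system, per project",
        "cadence": "Monthly",
    },
    "pipeline_value": {
        "definition": "Sum of open opportunities weighted by stage probability",
        "source": "CRM",
        "cadence": "Weekly",
    },
    # ...and so on for cash position, retention, and capacity.
}
```

A one-page document works just as well; the point is that each metric appears exactly once, with one definition, one source, and one cadence.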

Tools then become a question of fit. The metrics handbook is the thing that doesn’t move. Tools come and go, but the canonical definition stays put.

What does a single source of truth actually look like?

A single source of truth is a small, deliberate set of agreed definitions, owned data sources, and one canonical pipeline for the metrics that drive decisions. The high-performing version is what some firms call a metrics handbook: each metric defined once, sourced from one system, reported on one cadence, and matched to the decisions it actually informs.

A worked example helps. Take “utilisation” as the canonical first metric. Definition: billable hours divided by available hours, both measured at week granularity. Source of truth: the time-tracking system, with the rule that hours not in the time-tracking system don’t exist for this calculation. Cadence: reported weekly to the senior team in a single place, not three. Decision relevance: utilisation under sixty-five percent triggers a hiring pause; over eighty-five percent triggers a hiring conversation.
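
As a sketch of how that definition and those thresholds fit together, the calculation could look like the snippet below; the function names and the example numbers are hypothetical, and the sixty-five and eighty-five percent triggers are the ones described above.

```python
# Illustrative sketch: weekly utilisation from the canonical time-tracking
# source, mapped to the decision thresholds in the handbook entry.
def utilisation(billable_hours: float, available_hours: float) -> float:
    """Billable hours divided by available hours, for one week."""
    if available_hours <= 0:
        raise ValueError("available_hours must be positive")
    return billable_hours / available_hours

def hiring_signal(util: float) -> str:
    """Map the weekly utilisation figure to the decision it informs."""
    if util < 0.65:
        return "hiring pause"
    if util > 0.85:
        return "open a hiring conversation"
    return "no action"

# Example week: 1,220 billable hours logged against 1,600 available hours.
u = utilisation(1220, 1600)
print(f"{u:.0%} -> {hiring_signal(u)}")  # 76% -> no action
```

The implementation detail matters far less than the rule it encodes: hours not in the time-tracking system don't exist for this calculation, and the thresholds are written down once rather than re-argued every Monday.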

Repeat for project margin, pipeline value, cash position, retention, and capacity. Each gets one canonical definition. Each gets one source. Each gets one cadence. Edge cases get noted, not avoided. When the source data needs cleaning, that’s a known piece of work, not a permanent argument.

Once that handbook exists, the Monday standup changes. Nobody argues about which utilisation number is real. The number is the number. If the team disagrees about what the number means strategically, that’s a useful conversation, and it’s the conversation they should be having instead of the one about which spreadsheet to trust.

The handbook is short. Most firms can write a v1 in two days of focused work. The discipline is in maintaining it as the business changes, which means assigning it an owner. Without an owner, the handbook drifts, the silos return, and the standup goes back to arguing about utilisation.

Why does this have to happen before AI?

AI on top of fragmented data is the most expensive way to discover that the data was the problem all along. Founders who skip the single-source-of-truth work and bolt analytics or AI onto the existing mess end up with one of two outcomes. They spend heavily to clean the data as part of the AI project, often more than the AI itself costs. Or they ship AI outputs that nobody trusts because the inputs disagree.

The order is single source of truth first, AI second. Skipping the order is what makes most AI projects feel expensive without producing decisions. The AI works as advertised. The outputs land in front of the senior team. Nobody acts on them, because the senior team has already learned not to trust the underlying numbers. The investment delivers a technical success and a commercial nothing.

The reverse order produces the opposite shape. With the metrics handbook in place and one canonical pipeline per metric, the AI work has somewhere clean to plug into. A churn model that uses agreed retention numbers gets used. A pricing recommendation that comes from the canonical project margin pipeline gets adopted. A capacity forecast built on the official utilisation source gets included in the hiring conversation.

The same pattern applies to ordinary analytics. Dashboards that draw from the canonical sources get looked at. Dashboards that draw from one of three competing sources get ignored, because the senior team has already learned that numbers they can't trust get defended in the room rather than acted on.

The compounding return on the metrics handbook is what makes it the highest-leverage piece of business systems work most founders aren’t doing. It’s the work that makes everything downstream of it work.

The Monday standup that doesn’t argue about numbers is a small thing in isolation. Across a year it’s hundreds of decision-hours returned, a senior team that trusts the data again, and a foundation that any future AI, analytics, or reporting work can sit on without rework. The order is the foundation first.

If the standup feels familiar, book a conversation. The metrics handbook v1 work usually takes two focused days, and the return shows up the next Monday.

Sources

  • "Data silos in small business", blog.analysisgpt.ai (blog.analysisgpt.ai/data-silos-small-business), pre-8am multi-tool morning experience, more than half of SMB leaders report frequent data inconsistencies.
  • Starmind, Future of Work report (starmind.ai), 102 minutes a day searching for information, 43% C-level vs 32% junior find information unreliable. Source.
  • Harvard Business Review via Conclude.io, 5 working weeks per year lost to context switching, 40% productive time consumed by chronic multitasking (conclude.io/blog/context-switching-is-killing-your-productivity).
  • PCI, "Agency operations: 7 challenges hurting profitability" (pci.us/agency-operations-7-challenges-hurting-profitability), 33% of agency employees say tech stack had no productivity impact.
  • Scoro, misaligned metrics commentary (scoro.com/blog/misaligned-metrics), metrics handbook framing.
  • NinjaOne, "From tool sprawl to unified tech stack" (ninjaone.com/blog/from-tool-sprawl-to-unified-tech-stack), tool evaluation trap framework.
  • McKinsey & Company (2025). The State of AI Global Survey. 88 per cent of organisations now use AI in at least one function but only 39 per cent report enterprise-level EBIT impact. Source.
  • Boston Consulting Group (2025). Are You Generating Value from AI, The Widening Gap. Five per cent of future-built firms achieve five times the revenue gains and three times the cost reductions of peers. Source.

Frequently asked questions

My SaaS bill is the visible problem. Why isn't cutting subscriptions the fix?

Because the underlying issue is that no two systems agree on the same number, not that there are too many systems. Cutting from twenty-three to twelve still leaves eight competing versions of utilisation. The fix is to define each metric once, source it from one system, and report it on one cadence. Tool count comes second.

What's a metrics handbook and how do I build one?

A short document, one to two pages, listing the six to ten metrics that actually drive decisions in your business. For each metric: definition, canonical source, reporting cadence, and which decisions it informs. Most firms can write a v1 in two focused days. The discipline is assigning an owner who maintains it as the business changes.

Should I do AI before or after building a single source of truth?

After, every time. AI on fragmented data either gets paused while you spend heavily to clean the inputs, or ships outputs that nobody trusts because the underlying numbers disagree. With a metrics handbook in place, AI plugs into clean canonical sources and the outputs get used.

How long does this take to land?

V1 of the handbook takes two days. The behaviour change in the senior team takes a few weeks of consistent reference back to it. Within a quarter the Monday standup stops arguing about which number is real, which is the visible signal that it's working.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
