Measuring data and knowledge readiness, four questions to revisit each quarter

TL;DR

Owners who measure data and knowledge readiness in a structured way ship better AI outcomes than owners who treat it as a feeling. The measurement is four questions, answered honestly by the owner and one or two senior people every quarter. Can the team find the latest version of any key document in under sixty seconds? Are glossary disagreements being resolved, or have they come back? What proportion of AI output does the team have to correct before using it? What new data or knowledge debt has accumulated in the last ninety days?

Key takeaways

- Data decay is silent and accumulates without obvious signal. Six months after a clean-up, discipline slips and nobody notices until a tool starts disappointing again. Measurement is the forcing function that catches the drift early.
- The measurement is not a dashboard. It is four questions, answered honestly by the owner and one or two senior people, every quarter, in a thirty-minute meeting on a fixed calendar slot.
- Question one is findability. Can a typical team member locate the latest version of a few business-critical documents in under sixty seconds on average? If retrieval time is drifting up, the document discipline is slipping.
- Question two is glossary health. Are live disagreements on customer state, deal stage and work status being resolved, or have they quietly come back?
- Question three is the AI output correction rate, the share of AI output the team has to fix before using it.
- Question four is the new debt that has accumulated in the last ninety days. Triage it at the review, assign owners, decide what to fix, what to monitor and what to escalate. The discipline is the difference between sustained improvement and silent regression.

She finished the ninety-day clean-up in March. The CRM was tidied, the shared drive was reorganised, the glossary was written down, the first AI tools started returning answers the team trusted. By September she could feel the discipline slipping. Files were drifting back to people’s desktops. A new joiner had started using the old word for what the team now called something else. The forecasting tool’s outputs had started looking off again, and nobody was sure when that began.

She wanted a structured way to catch the drift before it undid the work. Not a dashboard, not a scorecard. Something she could run in thirty minutes with two senior colleagues, every quarter, that told her honestly whether the readiness she had paid for in spring was holding.

The answer is four questions. Each one cuts through a different layer of data and knowledge decay. None of them needs specialist tooling, and all of them are answerable by the owner and one or two senior people in a fixed quarterly slot.

Why does data and knowledge readiness need measurement more than other operational areas?

Because the decay is silent. A cash crunch shows up in the bank account, hiring missteps create immediate friction, churn surfaces in the pipeline. Data decay does none of these things. It accumulates quietly while everything else looks fine, and by the time it is visible it is also expensive. Salesforce found fewer than one in five organisations rate their data as ready for AI, and the trend is getting worse, not better.

The proportional cost at SME scale is real. A Doubletrack analysis of 8.36 million US businesses puts the cost of poor data quality at roughly $4,912 per employee per year, which translates to a $49,000 to $98,000 annual hit on a ten to twenty-person team. Forty-two per cent of companies scrapped a majority of their AI initiatives in 2025, up from 17 per cent the year before, with poor data quality named as the primary reason. Acceldata reports that the share of data professionals citing poor data quality as their top challenge rose from 41 to 57 per cent between 2022 and 2024. The decay is accelerating.

Question one, can the team find the latest version in under sixty seconds?

Pick three to five documents the business genuinely depends on this quarter. A current pricing list, a contract template, a statement of works, a recent client engagement report. Ask a team member who did not organise the system to find each one, and time them. If they average over sixty seconds the discipline is slipping, and if they cannot find the latest version at all, the firm has a data availability problem already costing time.
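If you want more than a feeling on record, the audit reduces to a handful of timed trials and an average. A minimal sketch of the arithmetic, assuming you jot down each observed retrieval time in seconds; the document names and times here are illustrative, not prescribed:

```python
# Findability audit: average retrieval time across a few key documents.
# Times are what you observed when a team member searched, in seconds.
retrieval_times = {
    "pricing_list": 25,         # found quickly
    "contract_template": 70,    # two wrong folders first
    "statement_of_works": 140,  # latest version unclear
}

average = sum(retrieval_times.values()) / len(retrieval_times)
print(f"Average retrieval time: {average:.0f}s")

if average > 60:
    print("Findability is slipping: review folder structure and naming.")

worst = [doc for doc, t in retrieval_times.items() if t > 120]
if worst:
    print(f"Worst offenders worth fixing first: {', '.join(worst)}")
```

A spreadsheet does the same job; the point is that the number is written down, so next quarter's number has something to drift against.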

Findability is a decision-making issue, not a filing issue. Paperwise reports that employees in knowledge-intensive work often spend up to 30 per cent of their time searching for information; at the top of that range, a fifteen-person team is losing the equivalent of four to five full-time people to search. The risk runs beyond lost time and into using the wrong version. Compliance documents, pricing sheets and scope statements that exist in multiple versions produce client commitments made against the wrong terms, invoices priced incorrectly and scope creep that nobody catches until billing.

The fix is rarely a new platform. It is a small intervention by whoever owns the question this quarter: simplifying a folder structure, retraining the team on the naming convention, archiving old versions properly. If retrieval has drifted from thirty seconds to ninety, that is a signal worth a fortnight of attention. If it is holding, leave it alone.

Question two, are glossary disagreements actually being resolved?

In the past ninety days, have disagreements over the definitions of core business terms surfaced? Customer state, deal stage, project status, invoice status. Were they resolved, or did they go quiet without being fixed? How many are still open? If the leadership answer is “not many” or “they resolve themselves”, the glossary governance is probably drifting and a backlog is quietly forming. If the answer is “yes, and we have a process”, the culture is holding.

The reason this matters is that definition misalignment silently corrupts every metric that depends on the definition. If sales reads a deal at “proposal” and operations reads it as “in scoping”, the forecast is wrong before anyone notices. OvalEdge documents the case-study pattern: when departments define the same metric differently, dashboards conflict and audit exposure rises; when alignment is enforced through an actively reconciled glossary, reporting accuracy improves and KPI disputes drop. The same dynamic plays out at SME scale, just with smaller numbers and fewer dashboards.

The owner-scale version is a list of five to ten core definitions, one named owner who maintains the glossary, and a quarterly answer to the question above. If new conflicts keep surfacing, the answer is rarely a more complex governance model; it is a clearer one. Someone owns the glossary, someone maintains it, and someone has authority to resolve conflicts when they emerge.
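The register itself can live in a spreadsheet; the shape matters more than the tool. A minimal sketch of that shape, with illustrative terms, owners and conflict counts:

```python
from dataclasses import dataclass

@dataclass
class GlossaryTerm:
    term: str
    definition: str
    owner: str            # the one person with authority to resolve conflicts
    open_conflicts: int   # disagreements surfaced this quarter, not yet resolved

glossary = [
    GlossaryTerm("active customer", "Signed contract and invoiced in the last 90 days", "Priya", 0),
    GlossaryTerm("deal stage: proposal", "Written proposal sent, awaiting client response", "Tom", 1),
]

# The quarterly question: are conflicts being resolved or quietly accumulating?
backlog = sum(t.open_conflicts for t in glossary)
print(f"Open definition conflicts this quarter: {backlog}")
```

If the backlog number rises two quarters running, the governance is drifting regardless of how the meeting feels.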

Question three, what proportion of AI output does the team have to correct before using it?

Pick one AI tool the team genuinely uses. Across the people using it, track for a month how often the output went out as-is and how often it needed editing before use. Below 20 per cent correction, the tool is earning its keep and adoption should expand. Above 50 per cent, it is creating rework disguised as automation, and the right move is to pause, fix the data feeding it, or replace it.
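The month of tracking reduces to two tallies and a percentage, scored against the 20 and 50 per cent thresholds. A minimal sketch; the counts here are illustrative:

```python
# One month of tracked outputs from a single AI tool.
used_as_is = 34      # outputs the team sent or used without edits
needed_editing = 16  # outputs corrected before use

total = used_as_is + needed_editing
correction_rate = needed_editing / total

print(f"Correction rate: {correction_rate:.0%}")
if correction_rate < 0.20:
    print("Tool is earning its keep: expand adoption.")
elif correction_rate <= 0.50:
    print("Working tool with a data quality problem: investigate the inputs.")
else:
    print("Rework disguised as automation: pause, fix the data, or replace the tool.")
```

Record the percentage each quarter; the trend matters more than any single reading.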

Workday research found that approximately 40 per cent of the time saved through AI tools is offset by the extra work created fixing AI-generated output, and the rework typically falls to the most experienced people in the team. Fourdots puts the annual global cost of AI hallucinations at $67.4 billion, with financial-task hallucination rates running at 15 to 25 per cent without proper guardrails. The cost compounds: the most engaged people end up doing the correction, and trust in the tool degrades.

The fix is rarely a better AI tool. It is usually better data and governance feeding the existing one. If a deal-forecasting model is fed historical deals with inconsistent stages, the forecast will be inconsistent. The quarterly question is whether the correction rate is trending up or down across the year, which tells the owner whether the underlying data quality is improving or degrading.

Question four, what new data or knowledge debt has accumulated in the last ninety days?

What looks like new debt? What is not quite broken yet but is trending toward broken? What new silos are forming? What definitions are drifting again? What integrations are creating unexpected data patterns? Each quarter the leadership team triages. Some items will be trivial and can be ignored, some will need immediate action, and the bulk will sit in the middle, worth monitoring and worth a named owner.
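The triage output is nothing more than a short register with a severity call and a named owner per item. A minimal sketch of that structure, with illustrative entries:

```python
from dataclasses import dataclass

# Triage buckets from the quarterly review.
IGNORE, MONITOR, ACT = "ignore", "monitor", "act"

@dataclass
class DebtItem:
    description: str
    bucket: str   # ignore, monitor, or act
    owner: str    # named owner; even monitored items need one

debt_register = [
    DebtItem("New CRM field duplicating 'deal source'", ACT, "Priya"),
    DebtItem("Project files drifting to personal desktops", MONITOR, "Tom"),
    DebtItem("Legacy supplier codes in archived invoices", IGNORE, "n/a"),
]

for item in (d for d in debt_register if d.bucket != IGNORE):
    print(f"[{item.bucket}] {item.description} -> {item.owner}")
```

Anything still in the register untouched two quarters later has earned an escalation, not another round of monitoring.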

This question matters because debt compounds like financial debt. The longer it sits unprioritised, the more costly it becomes, and ERP Today’s World Economic Forum framing makes the human cost explicit: every hour a senior person spends deciphering ungoverned legacy data is an hour not spent on higher-value work. A well-run quarterly cycle catches new debt early, when it is still cheap. A poorly-run one waits until a tool starts disappointing and then runs a forensic investigation that takes weeks.

The discipline is the schedule. A thirty-minute meeting, same date every quarter, owner plus two senior colleagues plus whoever runs the systems holding critical data. The four questions are shared in advance. Each answer feeds three to five action items with named owners and deadlines, reviewed at the next quarter. That structure turns “everyone should care about data” into “this person is tracking this metric, this quarter, for this team”, which is the difference between sustained improvement and silent regression. The Gainsight quarterly business review framework is a reasonable starting point.

This is the measurement that pays. It will not sell you a platform and will not look impressive on a slide. What it does do is run on the schedule owner-operated firms can actually keep, and catch drift before it costs money. Want a hand designing the quarterly rhythm for your firm? Book a conversation.

Sources

- Funnel.io (2024). Bad data quality is silently bleeding your business. Cited for the silent-decay framing and the IBM $3.1 trillion annual cost of poor data quality in the US economy. https://funnel.io/blog/bad-data-quality-is-silently-bleeding-your-business
- Doubletrack (2025). The hidden cost of dirty data. Cited for the Gartner large-enterprise $12.9 million annual figure, the proportional $4,912 per employee per year SME estimate, and the 42 per cent of companies scrapping AI initiatives in 2025 statistic. https://www.doubletrack.com/post/hidden-cost-dirty-data
- Salesforce (2024). Measure your data readiness. Cited for the finding that fewer than one in five organisations have a high level of data readiness and only 9 per cent are fully prepared for AI data integration. https://www.salesforce.com/blog/measure-your-data-readiness/
- Acceldata (2025). How AI data quality reporting cuts errors and drives growth. Cited for the 57 per cent of data professionals citing poor data quality as their top challenge in 2024, up from 41 per cent in 2022. https://www.acceldata.io/blog/how-ai-data-quality-reporting-cuts-errors-and-drives-growth
- Paperwise (2024). Version control, stop losing time searching for the right document. Cited for the up to 30 per cent of time knowledge workers spend searching for information, and the scope-creep and compliance risk framing of findability. https://www.paperwise.com/version-control-stop-losing-time-searching-for-the-right-document/
- OvalEdge (2024). Enterprise business glossary alignment. Cited for the case-study evidence on KPI dispute reduction, audit exposure drop and reporting accuracy improvements when shared glossaries are actively reconciled. https://www.ovaledge.com/blog/enterprise-business-glossary-alignment
- CIO.com (2025). 40 per cent of AI productivity gains lost to rework for errors. Cited for the Workday research on AI output rework and the finding that engaged employees disproportionately carry the correction load. https://www.cio.com/article/4157471/40-of-ai-productivity-gains-lost-to-rework-for-errors.html
- Fourdots (2024). Business impact of AI hallucinations, rates and ranks. Cited for the $67.4 billion annual global cost of hallucinations and the 15 to 25 per cent financial-task hallucination rate without guardrails. https://fourdots.com/business-impact-of-ai-hallucinations-rates-and-ranks
- ERP Today (2024). The hidden cost of AI, why data debt is actually a human problem. Cited for the World Economic Forum IT-leadership framing of data debt as a compounding human cost, not a purely technical one. https://erp.today/the-hidden-cost-of-ai-why-data-debt-is-actually-a-human-problem/
- Gainsight. The essential guide to quarterly business reviews (QBRs). Cited as the operating-rhythm reference for embedding a fixed quarterly readiness review into the leadership calendar with named owners and tracked actions. https://www.gainsight.com/essential-guide/quarterly-business-reviews-qbrs/

Frequently asked questions

Why four questions and not a proper data readiness dashboard?

Because dashboards for data readiness in a 5-to-50-person business almost always go unmaintained within two quarters. The four questions are answerable in thirty minutes, by the owner and one or two senior people, with no specialist tooling. They generate the action that matters. A dashboard generates the meeting that schedules the action that matters. For owner-operated firms the dashboard is the wrong unit cost. The discipline is the schedule, not the visualisation.

What is a reasonable target for the AI output correction rate?

Below 20 per cent is a tool genuinely earning its keep and adoption should expand. Between 20 and 50 per cent is a working tool with a data quality problem worth investigating. Above 50 per cent and the tool is creating rework disguised as automation, and the right move is to pause adoption, fix the data feeding it, or replace the tool. Workday research finds that roughly 40 per cent of time saved by AI tools is offset by the work created fixing AI-generated output, so this measurement is not academic.

Who should be in the quarterly review meeting?

The owner, one or two senior team members who use the systems daily, and whoever has informal responsibility for the systems holding business-critical data. Three to five people, no more. Print or share the four questions a day or two before so people can think about them rather than answering off the cuff. The meeting is thirty minutes, ends with three to five action items each with a named owner and a deadline, and is fixed on the same calendar date each quarter.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
