Why your knowledge base goes stale six months in

[Image: a senior practice manager at a desk, reviewing a printed list with a red pen, laptop showing a Notion page]
TL;DR

Knowledge base AI delivers 1 to 2 hours of recovered time per employee per week when the underlying knowledge is well maintained, and produces hallucinated, misleading results when it is not. The technology has matured; the governance has not. The firm-side prerequisite is a named maintainer with quarterly review responsibility, not a better platform. Without it, the knowledge base goes stale at six to twelve months and the AI on top makes the staleness worse, not better.

Key takeaways

- Knowledge bases without an assigned maintainer become unreliable at predictable intervals: questionable at 6 months, untrustworthy at 12. Staff stop using them once trust collapses.
- AI-on-stale-data is the failure pattern. AI search returns the most relevant document regardless of whether it is current. A confident, plausibly wrong answer is worse than no answer.
- Governance prerequisite: a named maintainer (typically a senior manager or practice lead), quarterly review responsibility, and an explicit "what belongs here" scope. Budget 2 to 4 hours per quarter for a 5 to 15 person firm.
- Focus on the 20 percent of knowledge used 80 percent of the time. Capture-everything approaches create noise rather than signal.
- Relevance gap: AI search on well-maintained knowledge achieves 85 to 95 percent relevance; on poorly maintained knowledge, 40 to 60 percent. The drop is not gradual; it accelerates once staleness passes a threshold.
- SME tool sweet spot: Notion AI at £10 per user per month bundled; Glean at £10 to £50 per user per month for cross-system semantic search. Total firm cost is typically £50 to £500 per month.

A 12-person consulting firm built a Notion knowledge base with all their playbooks, case studies, and process documents. Notion AI on top, semantic search, the works. Eight months in, a junior consultant queried "our approach to lease accounting" and got a confident answer based on a 2023 document the firm had since superseded. The junior used it on a client deliverable. The senior caught it on review. The knowledge base did not lie; the maintainer who would have removed the old document did not exist.

This is the knowledge base failure pattern that almost every owner-led services firm hits between month six and month twelve. The technology is mature. The platform works. The AI search layer is fast and impressive. None of it survives without a person responsible for keeping the underlying content current, and that person rarely exists in firms under fifty people.

Why does the knowledge base go stale at six months?

Most SME firms invest in a knowledge management platform during a moment of operational ambition: onboarding a new practice manager, formalising a process, prepping for growth. Content gets uploaded in a burst, and the platform looks alive for the first few months. Around month six, new content slows. Around month nine, existing content starts to fall out of date. Around month twelve, staff stop using the platform because they cannot trust what they find.

The reason this happens is straightforward. Nobody was assigned to maintain the knowledge base. New procedures get written but never uploaded. Old procedures get superseded but never removed. The platform has no mechanism to know what is current and what is stale, and the owner never set up a review cadence that would have caught the drift.

The 6 to 12 month staleness curve is predictable enough that it should be a planning input. If the firm is not willing to assign a maintainer, the knowledge base will not survive. Better to know that before the platform fee is spent.

What does AI-on-stale-data look like?

AI search returns the most relevant document, regardless of whether it is current. The relevance ranking is based on semantic match, not freshness. So a 2023 document on lease accounting that no longer reflects the firm's current approach can outrank a 2026 procedural memo if the older document has more matching keywords or longer prose.
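To make the freshness-blindness concrete, here is a toy sketch, not any vendor's actual ranking algorithm: keyword overlap stands in for embedding similarity, and the date-decay re-rank at the end is the kind of weighting a platform may or may not apply.

```python
from datetime import date

# Toy ranking demo, not any platform's actual algorithm. Keyword overlap
# stands in for semantic similarity; the point is that pure relevance
# scoring has no notion of freshness.
docs = [
    {"title": "Lease accounting approach (2023, superseded)",
     "text": "lease accounting approach lease accounting recognition lease examples",
     "updated": date(2023, 3, 1)},
    {"title": "Lease accounting memo (current)",
     "text": "lease accounting current procedure",
     "updated": date(2026, 1, 15)},
]

query = set("our approach to lease accounting".split())

def relevance(doc):
    words = doc["text"].split()
    return sum(w in query for w in words) / len(words)  # crude match ratio

# Pure semantic-style ranking: the keyword-dense 2023 document wins.
print("relevance only:", max(docs, key=relevance)["title"])

def freshness(doc, half_life_days=365):
    age_days = (date.today() - doc["updated"]).days
    return 0.5 ** (age_days / half_life_days)  # halve the score per year of age

# Freshness-weighted re-rank: the current memo wins.
print("relevance x freshness:",
      max(docs, key=lambda d: relevance(d) * freshness(d))["title"])
```

Without that second step, the 2023 document comes back first, and nothing in the answer tells the user it is stale.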

The failure mode is invisible to the user. The AI delivers a confident answer with what looks like the right level of detail. The user trusts the answer. The error compounds when the user acts on it. By the time the senior reviewer catches the mistake (if they catch it at all), the answer has been used in a client deliverable or a strategic decision.

This is worse than no knowledge base. With no knowledge base, the user knows they have to ask a colleague or check the source. With a stale knowledge base plus AI search, the user thinks they have the answer and stops looking. Confidence in a wrong answer beats uncertainty in the right one.

What does the named-maintainer rule require?

A senior manager or practice lead is assigned ownership of the knowledge base: they review submissions, resolve conflicts, and remove outdated information on a quarterly cycle. For a 5 to 15 person firm, the workload is 2 to 4 hours per quarter, plus ad hoc updates when major procedural changes happen.

The maintainer's responsibility is curatorial, not encyclopaedic. They decide what belongs, what does not, and what needs updating, without writing all the content themselves. They have the authority to retire old documents. They have a calendar reminder to do the quarterly review. The owner has signed off on the time.
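The quarterly review itself can start from a generated list rather than memory. Below is a minimal sketch assuming a Notion-backed knowledge base: the NOTION_TOKEN and NOTION_DATABASE_ID environment variables and the 90-day cutoff are illustrative choices, while the endpoint, headers, and pagination fields follow Notion's documented database-query API.

```python
import os
from datetime import datetime, timedelta, timezone

import requests

# Minimal staleness audit against the Notion API. Token, database ID and
# the 90-day window are illustrative assumptions; endpoint, headers and
# pagination fields are per Notion's public API (version 2022-06-28).
headers = {
    "Authorization": f"Bearer {os.environ['NOTION_TOKEN']}",
    "Notion-Version": "2022-06-28",
    "Content-Type": "application/json",
}
url = f"https://api.notion.com/v1/databases/{os.environ['NOTION_DATABASE_ID']}/query"
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

stale, payload = [], {}
while True:
    resp = requests.post(url, headers=headers, json=payload)
    resp.raise_for_status()
    data = resp.json()
    for page in data["results"]:
        # Pages carry an ISO 8601 last_edited_time; flag anything older
        # than one quarter.
        edited = datetime.fromisoformat(page["last_edited_time"].replace("Z", "+00:00"))
        if edited < cutoff:
            stale.append((edited.date(), page["url"]))
    if not data.get("has_more"):
        break
    payload = {"start_cursor": data["next_cursor"]}  # next page of results

for edited, page_url in sorted(stale):
    print(f"{edited}  {page_url}")
```

The output is the maintainer's starting agenda: everything on the list gets updated, re-confirmed, or retired.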

Without this rule, the knowledge base defaults to whoever was most enthusiastic at the start; that person eventually has other priorities, and the platform decays. With the rule, the platform stays useful indefinitely, because the curatorial discipline matches the rate of organisational change.

What is the 80/20 scope rule?

Capture the 20 percent of knowledge used 80 percent of the time, not everything that exists. A knowledge base that attempts to capture everything (every email, every document, every chat) becomes noise rather than signal. Staff cannot find what they need because too much irrelevant content surrounds it.

For most professional services firms, the 20 percent includes:

- standard operating procedures and processes (onboarding, project delivery, compliance)
- case law or sector precedents relevant to the firm's practice
- client information templates and project playbooks
- training materials and resource guides

All of it specific to the work the firm actually does, not generic.

Scoping the knowledge base this way solves two problems. The maintainer's quarterly review becomes feasible. The AI search layer has cleaner content to work with. Both effects compound: cleaner scope means cleaner search, which means more usage, which means more value, which justifies the maintenance time.

Which tool fits a small services firm?

Notion AI at £10 per user per month is the easy choice if the firm is already using Notion for project management or documentation. The AI features are bundled with the existing platform, so there is no additional integration. For a 5 to 15 person firm already on Notion, this is a £50 to £150 monthly add-on.

Glean at £10 to £50 per user per month sits one tier up. It searches across all the productivity systems the firm uses (email, file shares, CRM, Slack, web links) rather than a single platform. The advantage is comprehensive search; the disadvantage is integration complexity and a higher monthly cost. Worth it when the firm has knowledge spread across many systems and a maintainer to keep it current.

Mem and similar platforms occupy the middle ground at similar pricing. Confluence and Coda are the more traditional document repositories with AI add-ons. The choice between them is mostly about which one the team will actually open. A more powerful platform that nobody uses delivers less than a simpler platform that everyone uses.

What does the relevance accuracy gap look like?

AI knowledge systems working over well-maintained content achieve 85 to 95 percent relevance on natural-language queries: the user gets the right document or paragraph first time in nine cases out of ten. A lookup that used to take 15 to 45 minutes takes 2 to 5 when the underlying data is current and well organised, and that time saving is most of the operational benefit.
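As a back-of-envelope check on the recovered-time figure in the TL;DR, take the midpoints of those search-time ranges and assume four lookups per employee per week (the lookup frequency is an assumption for illustration, not a measured benchmark):

```python
# Midpoints of the search-time ranges quoted above, in minutes.
before = (15 + 45) / 2   # without a usable knowledge base: 30 min per lookup
after = (2 + 5) / 2      # current, well-maintained knowledge base: 3.5 min

lookups_per_week = 4     # assumed frequency per employee; adjust to taste

saved_hours = lookups_per_week * (before - after) / 60
print(f"{saved_hours:.1f} hours recovered per employee per week")  # -> 1.8
```

At four lookups a week the saving lands at roughly 1.8 hours, inside the 1 to 2 hour range; halve the lookup frequency and the claim halves with it.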

If the underlying knowledge is poorly maintained or incomplete, relevance drops to 40 to 60 percent. The user gets a misleading or outdated answer in four to six cases out of ten. The drop is not gradual. There is a threshold around 30 percent stale content where relevance collapses, because the AI's confidence ranking is misled by the volume of stale-but-relevant matches.

This is why the maintainer rule is non-negotiable. Without it, the firm pays for an AI search layer whose answers are a coin flip within a year. With it, the firm pays for a search layer that delivers usable results indefinitely. The technology is the same. The governance is the difference.

Where do compliance gates apply?

If the knowledge base contains sensitive client information, regulatory guidance, or compliance procedures, access must be controlled by role and permission. The firm must ensure only authorised staff can access partner-level strategic information or client-confidential content. Most major platforms (Notion, Glean) support role-based access; the firm has to configure it correctly.

If the knowledge base is used to train or guide staff on regulatory compliance (AML, GDPR, data protection), the firm must ensure the content is accurate and current. If outdated guidance is in the knowledge base and staff follow it, the firm is liable for non-compliance regardless of whether AI was involved. The maintainer's quarterly review covers this risk.

For knowledge bases that include AI-generated or AI-summarised content, a human must review the material before it enters the production knowledge base. An AI summary of a complex regulatory guidance document can miss important nuances or condense it inaccurately, so the review gate is not optional.

If you are deploying a knowledge base or trying to work out why your existing one stopped being useful, the answer is rarely about the platform. It is about who owns the curation. Book a conversation.

Sources

  • Glean, "What is an internal knowledge base and how to set one up".
  • Glean, "Best practices for implementing AI in knowledge management systems".
  • Tess Group, "AI compliance: UK businesses 2026 guide".
  • Law Society of Ireland, "Generative AI guidance".
  • Brynjolfsson, E., Li, D. and Raymond, L. (2023), "Generative AI at Work", NBER Working Paper 31161. Empirical productivity study showing a 14 per cent average gain, with 34 per cent for low-skilled workers; the basis for sector-specific AI productivity claims.
  • McKinsey & Company (2024), "From Promise to Impact: How Companies Can Measure and Realise the Full Value of AI". Five-layer measurement framework for evaluating sector AI deployments.
  • Goldman Sachs (2023), "Generative AI could raise global GDP by 7 per cent". Cross-sector productivity-paradox research; the macroeconomic context for sector-level AI ROI claims.
  • Boston Consulting Group (2026), "When Using AI Leads to Brain Fry". Study of 1,488 US workers across large companies on AI oversight load, error rates, decision overload and intent to quit.

Frequently asked questions

Why do knowledge bases go stale?

Because nobody is responsible for keeping them current. Most SME firms invest in a knowledge management platform but assign no maintainer. After 6 to 12 months, content gets out of date, staff cannot trust what they find, and usage drops. The technology is rarely the problem. The governance is.

What does the named-maintainer rule require?

A senior manager or practice lead is assigned ownership of the knowledge base: they review submissions, resolve conflicts, and remove outdated information on a quarterly cycle. For a 5 to 15 person firm, this is 2 to 4 hours per quarter, plus ad hoc updates as content changes. The rule is the discipline that protects the AI search layer's reliability.

Which tool fits a small services firm?

Notion AI at £10 per user per month is the easy choice if the firm is already using Notion for project management. Glean at £10 to £50 per user per month is more powerful for cross-system semantic search across email, file shares, and CRM. For larger firms with sophisticated integration needs, Glean wins. For most 5 to 15 person firms, Notion AI is enough.

What does AI add over a normal knowledge base?

Semantic search across all stored content. A junior staff member queries "our approach to lease accounting" and gets relevant excerpts from the right documents, without needing to know which file contains the answer. A lookup that took 15 to 45 minutes takes 2 to 5 when the knowledge is well maintained. When it is not, the AI confidently returns outdated content.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
