A 12-person consulting firm built a Notion knowledge base with all their playbooks, case studies, and process documents. Notion AI on top, semantic search, the works. Eight months in, a junior consultant queried "our approach to lease accounting" and got a confident answer based on a 2023 document the firm had since superseded. The junior used it on a client deliverable. The senior caught it on review. The knowledge base did not lie; the maintainer who would have removed the old document did not exist.
This is the knowledge base failure pattern that almost every owner-led services firm hits between month six and month twelve. The technology is mature. The platform works. The AI search layer is fast and impressive. None of it survives without a person responsible for keeping the underlying content current, and that person rarely exists in firms under fifty people.
Why does the knowledge base go stale at six months?
Most SME firms invest in a knowledge management platform during a moment of operational ambition. Onboarding a new practice manager, formalising a process, prepping for growth. Content gets uploaded in a burst. The platform looks alive for the first few months. Around month six, new content slows. Around month nine, existing content starts going out of date. Around month twelve, staff stop using the platform because they cannot trust what they find.
The reason this happens is straightforward. Nobody was assigned to maintain the knowledge base. New procedures get written but never uploaded. Old procedures get superseded but never removed. The platform has no mechanism to know what is current and what is stale, and the owner never set up a review cadence that would have caught the drift.
The 6 to 12 month staleness curve is predictable enough that it should be a planning input. If the firm is not willing to assign a maintainer, the knowledge base will not survive. Better to know that before the platform fee is spent.
What does AI-on-stale-data look like?
AI search returns the most relevant document, regardless of whether it is current. The relevance ranking is based on semantic match, not freshness. So a 2023 document on lease accounting that no longer reflects the firm's current approach can outrank a 2026 procedural memo if the older document has more matching keywords or longer prose.
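If a platform exposes its raw relevance scores, one mitigation is to decay them by document age, so a fresher document can outrank a semantically closer stale one. A minimal sketch of the idea; the scores, half-life, filenames, and dates are illustrative assumptions, not any platform's actual API:

```python
import math
from datetime import date

HALF_LIFE_DAYS = 365  # illustrative assumption: effective relevance halves per year unreviewed

def freshness_weighted(semantic_score: float, last_reviewed: date,
                       today: date = date(2026, 6, 1)) -> float:
    """Decay a semantic relevance score by how long ago the document was last reviewed."""
    age_days = (today - last_reviewed).days
    return semantic_score * 0.5 ** (age_days / HALF_LIFE_DAYS)

# Hypothetical results for the query "our approach to lease accounting":
docs = [
    ("lease-accounting-2023.pdf", 0.92, date(2023, 4, 1)),       # stale but keyword-rich
    ("lease-accounting-memo-2026.pdf", 0.81, date(2026, 2, 1)),  # current
]
ranked = sorted(docs, key=lambda d: freshness_weighted(d[1], d[2]), reverse=True)
print(ranked[0][0])  # the 2026 memo now ranks first
```

The decay does not replace the maintainer; it only reduces how badly a stale document can outrank a current one when both match the query.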
The failure mode is invisible to the user. The AI delivers a confident answer with what looks like the right level of detail. The user trusts the answer. The error compounds when the user acts on it. By the time the senior reviewer catches the mistake (if they catch it at all), the answer has been used in a client deliverable or a strategic decision.
This is worse than no knowledge base. With no knowledge base, the user knows they have to ask a colleague or check the source. With a stale knowledge base plus AI search, the user thinks they have the answer and stops looking. Confidence in a wrong answer beats uncertainty in the right one.
What does the named-maintainer rule require?
A senior manager or practice lead is assigned responsibility for the knowledge base. The maintainer reviews submissions, resolves conflicts, and removes outdated information on a quarterly cycle. For a 5 to 15 person firm, the workload is 2 to 4 hours per quarter, plus ad hoc updates when major procedural changes happen.
The maintainer's responsibility is curatorial, not encyclopaedic. They decide what belongs, what does not, and what needs updating, without writing all the content themselves. They have the authority to retire old documents. They have a calendar reminder to do the quarterly review. The owner has signed off on the time.
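The quarterly pass can be as mechanical as flagging anything past its review date. A sketch under the assumption that each document carries a last-reviewed date (the field names and example documents are hypothetical):

```python
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)  # the quarterly cadence from the maintainer rule

def overdue(documents: list[dict], today: date) -> list[str]:
    """Return titles of documents whose last review predates the quarterly cadence."""
    return [d["title"] for d in documents
            if today - d["last_reviewed"] > REVIEW_INTERVAL]

docs = [
    {"title": "Client onboarding SOP", "last_reviewed": date(2026, 5, 10)},
    {"title": "Lease accounting approach", "last_reviewed": date(2023, 4, 1)},
]
print(overdue(docs, today=date(2026, 6, 1)))  # ['Lease accounting approach']
```

Most platforms can export a page list with last-edited dates, which is usually a good enough proxy to seed this kind of audit.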
Without this rule, the knowledge base reverts to whoever was most enthusiastic at the start, who eventually has other priorities, and the platform decays. With it, the platform stays useful indefinitely, because the curatorial discipline matches the rate of organisational change.
What is the 80/20 scope rule?
Capture the 20 percent of knowledge used 80 percent of the time, not everything that exists. A knowledge base that attempts to capture everything (every email, every document, every chat) becomes noise rather than signal. Staff cannot find what they need because too much irrelevant content surrounds it.
The 20 percent for most professional services firms includes standard operating procedures and processes (onboarding, project delivery, compliance), case law or sector precedents relevant to the firm's practice, client information templates, project playbooks, training materials and resource guides. Specific to the work the firm actually does, not generic.
Scoping the knowledge base this way solves two problems. The maintainer's quarterly review becomes feasible. The AI search layer has cleaner content to work with. Both effects compound: cleaner scope means cleaner search, which means more usage, which means more value, which justifies the maintenance time.
Which tool fits a small services firm?
Notion AI at £10 per user per month is the easy choice if the firm is already using Notion for project management or documentation. The AI features are bundled with the existing platform, so there is no additional integration. For a 5 to 15 person firm already on Notion, this is a £50 to £150 monthly add-on.
Glean at £10 to £50 per user per month sits one tier up. It searches across all the productivity systems the firm uses (email, file shares, CRM, Slack, web links) rather than a single platform. The advantage is comprehensive search; the disadvantage is integration complexity and a higher monthly cost. Worth it when the firm has knowledge spread across many systems and a maintainer to keep it current.
Mem and similar platforms occupy the middle ground at similar pricing. Confluence and Coda are the more traditional document repositories with AI add-ons. The choice between them is mostly about which one the team will actually open. A more powerful platform that nobody uses delivers less than a simpler platform that everyone uses.
What does the relevance accuracy gap look like?
AI knowledge systems trained on well-maintained content achieve 85 to 95 percent relevance when returning results to natural-language queries. The user gets the right document or paragraph the first time in nine cases out of ten. Search time drops from 15 to 45 minutes to 2 to 5 minutes when the underlying data is current and well-organised, which is most of the operational benefit.
If the underlying knowledge is poorly maintained or incomplete, relevance drops to 40 to 60 percent. The user gets a misleading or outdated answer in four to six cases out of ten. The drop is not gradual. There is a threshold around 30 percent stale content where relevance collapses, because the AI's confidence ranking is misled by the volume of stale-but-relevant matches.
This is why the maintainer rule is non-negotiable. Without it, the firm pays for an AI search layer that delivers worse-than-random results within a year. With it, the firm pays for a search layer that delivers usable results indefinitely. The technology is the same. The governance is the difference.
Where do compliance gates apply?
If the knowledge base contains sensitive client information, regulatory guidance, or compliance procedures, access must be controlled by role and permission. The firm must ensure only authorised staff can access partner-level strategic information or client-confidential content. Most major platforms (Notion, Glean) support role-based access; the firm has to configure it correctly.
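Whatever the platform, role-based access reduces to a check like the following before a result is shown. A toy sketch; the role names and sensitivity labels are illustrative assumptions, not Notion's or Glean's configuration model:

```python
# Hypothetical mapping of roles to the sensitivity labels they may view.
ACCESS = {
    "partner": {"public", "internal", "client-confidential", "partner-strategy"},
    "senior":  {"public", "internal", "client-confidential"},
    "junior":  {"public", "internal"},
}

def can_view(role: str, label: str) -> bool:
    """Allow access only if the role's permitted labels include the document's label."""
    return label in ACCESS.get(role, set())

print(can_view("junior", "partner-strategy"))  # False
```

The important configuration decision is that unknown roles and unlabelled documents default to denied, not allowed; the sketch above denies any role it does not recognise.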
If the knowledge base is used to train or guide staff on regulatory compliance (AML, GDPR, data protection), the firm must ensure the content is accurate and current. If outdated guidance is in the knowledge base and staff follow it, the firm is liable for non-compliance regardless of whether AI was involved. The maintainer's quarterly review covers this risk.
For knowledge bases that include AI-generated or AI-summarised content, a human must review the summaries before they enter the production knowledge base. An AI summary of a complex regulatory guidance document can miss important nuances or produce an inaccurate condensation.
If you are deploying a knowledge base or trying to work out why your existing one stopped being useful, the answer is rarely about the platform. It is about who owns the curation. Book a conversation.