The owner of a 12-person consultancy opens her largest client’s annual procurement questionnaire on a Tuesday morning. Question 14 asks her to describe the firm’s AI governance and audit trail. She has neither, the contract is worth a third of her revenue, and the response is due in thirty days. Her instinct is to search for an enterprise AI governance template, and the first four results recommend continuous logging platforms costing upwards of twenty thousand pounds a year. None of them is designed for a firm her size.
This is where many owner-led SMEs land when AI governance first becomes a procurement question rather than a theoretical one. The published material is calibrated for the wrong end of the market. The proportionate audit trail for a 5-to-50-person firm is much smaller than the enterprise content suggests, and once you see the shape of it you can have the documentation drafted in a working week.
What is a proportionate AI audit trail?
A proportionate AI audit trail for a small firm is the documentation that lets you answer five questions in writing. Which AI tools the business uses. What categories of data go into them, and who is authorised. What the written policy says. How and when the firm reviews its AI systems. What happened, and what changed, when something went wrong. Two pages of standing documentation plus a per-incident note is the working shape.
The proportionate trail deliberately leaves out continuous logs of every prompt and response, full transcript archives and real-time monitoring dashboards. Those belong to a 1,000-person firm running dozens of AI systems with regulatory obligations a smaller firm does not carry. The questions a regulator, insurer or major client would actually ask are answered by the standing pack, and the trail stops there.
Why does it matter for your business?
It matters because the procurement question is now standard, the regulator’s expectation is documented governance rather than logged interactions, and the wrong response is more damaging than no response. UK public sector procurement, NHS supplier vetting and financial services due diligence have all moved AI governance from optional to expected over the last eighteen months. “We do not have formal AI governance” is the answer that loses contracts.
The Information Commissioner’s Office (ICO) expects a documented, embedded approach to AI accountability, and small firms are inside that expectation whenever AI touches personal data. Investing in enterprise GRC software on a multi-year contract eats cash a small firm needs for revenue work, produces a data store no one reviews, and creates a false sense that governance is happening. The proportionate trail puts the firm in a defensible position without the overhead.
Where will you actually meet it?
You will meet it in three places. A procurement questionnaire that asks for the documents directly and scores the answer. A regulator query, rare for an SME but non-negotiable when it happens, where the ICO’s first question is whether the firm has thought through its accountability. And an incident, a hallucinated client output or a paste of restricted data into a chatbot, where the question becomes what the firm’s documented position was on the day.
The standing documentation set covers the first two. A tools register lists every AI tool in use, its purpose, the data category it touches, who is authorised, where it is hosted, the retention period, and the last review date. A written policy of two to four pages sets out acceptable use, data handling, oversight, incident response, confidentiality and training expectations, referencing the register by name. A quarterly review record, completed four times a year, captures what was reviewed, what changed since last quarter, any incidents or concerns, and the recommended actions. The per-incident layer is the brief written record produced when something goes wrong: half a page to two pages, contemporaneous, signed and dated.
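None of this needs software beyond a spreadsheet. As a minimal sketch of the register’s shape, the snippet below writes one as a CSV, assuming Python is to hand; the column names follow the fields just described, and the file name and example row are illustrative rather than a prescribed schema.

```python
import csv
from datetime import date

# Columns follow the register fields described above; names are illustrative.
COLUMNS = [
    "tool", "purpose", "data_category", "authorised_users",
    "hosting", "retention", "last_review",
]

# One hypothetical row: swap in the tools your firm actually uses.
rows = [
    {
        "tool": "ChatGPT (Team plan)",
        "purpose": "Drafting and summarising client-facing text",
        "data_category": "Anonymised client material only",
        "authorised_users": "All consultants after induction",
        "hosting": "Vendor-hosted (assumed; check your plan)",
        "retention": "Vendor default; no transcript archive kept",
        "last_review": date.today().isoformat(),
    },
]

with open("ai_tools_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

A shared spreadsheet with these seven columns, reviewed quarterly, does the same job; the point is the fields, not the tooling.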
The frameworks worth referencing without adopting wholesale are the NIST AI RMF, ISO/IEC 42001 and the ICO accountability framework. NIST’s four functions (govern, map, measure, manage) give the spine. ISO/IEC 42001 covers the management-system shape. The ICO sets the UK baseline the procurement form is implicitly testing against. Cyber Essentials certification, if you hold it, covers the access-control layer that sits underneath the policy.
When to ask vs when to ignore
Ask for the standing documentation if you do not have it, and aim to have it drafted in a working week. Monday for the tools register, Tuesday for the data classification pass, Wednesday for the policy draft, Thursday for the quarterly review template and the first completed review, Friday for packaging the pack and a one-paragraph response to the procurement form. Quarterly maintenance from then on is roughly two hours.
Ignore the enterprise vendor narrative that you need continuous logging, full conversation transcripts, real-time monitoring and a granular change log for every configuration adjustment. At SME scale these are operationally counterproductive. A 14-person firm that logs every ChatGPT conversation has not improved its governance; it has built a data lake no one will look at. Ignore indefinite retention. One year rolling for active systems plus one year after an incident is the proportionate default. And ignore the urge to invent an AI Ethics Committee or a Model Governance Board on paper. Those roles do not exist in your firm, and naming them in a policy the firm cannot execute is worse than having no policy at all.
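To make that retention default concrete, here is a minimal sketch of the rule as a check you could run alongside the quarterly review; the function name and example dates are illustrative, and the one-year windows come straight from the default above.

```python
from datetime import date, timedelta

ONE_YEAR = timedelta(days=365)

def past_retention(record_date: date, incident_date: date | None = None,
                   today: date | None = None) -> bool:
    """One year rolling for active systems, extended to one year
    after any incident the record relates to."""
    today = today or date.today()
    cutoff = record_date + ONE_YEAR
    if incident_date is not None:
        # An incident resets the clock: keep for a year after it.
        cutoff = max(cutoff, incident_date + ONE_YEAR)
    return today > cutoff

# A record from fourteen months back, no incident attached: safe to delete.
print(past_retention(date(2025, 1, 1), today=date(2026, 3, 1)))  # True
```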
The one narrow exception to the ignore list is high-risk AI use as the EU AI Act defines it: AI affecting clinical decisions, employment outcomes, credit decisions, education access, law enforcement or critical infrastructure. There the logging requirement is specific, and the vendor typically carries most of it. If you are uncertain whether your use falls inside that definition, document the reasoning, for example “we have assessed our AI systems and determined that none fall within the EU AI Act’s high-risk definitions”, and keep the assessment with the rest of the pack.
Related concepts
Three close cousins sit alongside the audit trail. The minimum viable AI policy is the five-section, two-page policy the audit trail references. The proportionate AI risk register is the one-page spreadsheet of live use cases with risk, likelihood, impact and a named owner, sketched below. The monthly AI governance cadence is the rhythm that turns the documents into actual oversight. A separate conceptual primer on AI audit trails covers what gets recorded technically and where vendor logs end.
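As with the tools register, the risk register needs nothing beyond a small table. A minimal sketch in the same CSV style, with hypothetical column names and a hypothetical entry mirroring the fields just described:

```python
import csv

# Column names mirror the risk register fields described above.
COLUMNS = ["use_case", "risk", "likelihood", "impact", "owner"]

rows = [
    # Hypothetical entry; score likelihood and impact however your firm prefers.
    {
        "use_case": "AI-drafted client deliverables",
        "risk": "Hallucinated content reaching a client",
        "likelihood": "Medium",
        "impact": "High",
        "owner": "Practice lead",
    },
]

with open("ai_risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```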
Together with the audit trail, the three produce more practical AI governance than many enterprise frameworks deliver on paper. The discipline is the same in each case. The policy stays short. The register stays live. The audit trail stays honest. The cadence stays in the calendar.
If you are facing a procurement form with no documentation and a thirty-day clock, or you want a peer to sense-check the pack before you send it, book a conversation.



