The audit trail an SME actually needs, and the one it does not

TL;DR

The audit trail a 5 to 50 person SME actually needs is two pages of standing documentation, a tools register, a written policy, a quarterly review record, plus a per-incident note when something goes wrong. It deliberately does not include prompt-by-prompt logging, full conversation transcripts or real-time monitoring. That smaller shape answers the questions a procurement officer, the ICO or an insurer would actually ask, and is defensible under NIST, ISO 42001 and the ICO accountability framework.

Key takeaways

- The proportionate audit trail is two pages of standing documentation (tools register, policy, quarterly review record) plus a short per-incident note when something goes wrong. Not prompt-by-prompt logging.
- Five questions the trail must answer: which tools we use, what data goes into them, who is authorised, what the policy says, when we review. Anything that does not serve one of those five questions is overhead and gets cut.
- Enterprise GRC platforms (OneTrust, Drata, Vanta, Credo, watsonx.governance) are designed for 1,000-plus-staff firms with dozens of AI systems. At 5 to 50 staff their continuous logging is operationally catastrophic and adds no governance value the quarterly review does not deliver.
- Proportionality is explicitly written into the ICO accountability framework, NIST AI RMF profiles and the EU AI Act's SME measures. A documented policy plus quarterly review plus incident capture meets the regulator's expectation when the firm is not running high-risk systems.
- The whole set can be drafted in about a working week, then maintained at roughly two hours per quarter. That is the bar a procurement questionnaire actually clears.

The owner of a 12-person consultancy opens her largest client’s annual procurement questionnaire on a Tuesday morning. Question 14 asks her to describe the firm’s AI governance and audit trail. She has neither, the contract is worth a third of her revenue, and the response is due in thirty days. Her instinct is to search for an enterprise AI governance template, and the first four results recommend continuous logging platforms costing upwards of twenty thousand pounds a year. None of them is designed for a firm her size.

This is where many owner-led SMEs land when AI governance first becomes a procurement question rather than a theoretical one. The published material is calibrated for the wrong end of the market. The proportionate audit trail for a 5 to 50 person firm is much smaller than the enterprise content suggests, and once you see the shape of it you can have the documentation drafted in a working week.

What is a proportionate AI audit trail?

A proportionate AI audit trail for a small firm is the documentation that lets you answer five questions in writing. Which AI tools does the business use? What categories of data go into them, and who is authorised? What does the written policy say? How and when does the firm review its AI systems? What happened, and what changed, when something went wrong? Two pages of standing documentation plus a per-incident note is the working shape.

The proportionate trail deliberately leaves out continuous logs of every prompt and response, full transcript archives and real-time monitoring dashboards. Those belong to a 1,000-person firm running dozens of AI systems with regulatory obligations a smaller firm does not carry. The questions a regulator, insurer or major client would actually ask are answered by the standing pack, and the trail stops there.

Why does it matter for your business?

It matters because the procurement question is now standard, the regulator’s expectation is documented governance rather than logged interactions, and the wrong response is more damaging than no response. UK public sector procurement, NHS supplier vetting and financial services due diligence have all moved AI governance from optional to expected over the last eighteen months. “We do not have formal AI governance” is the answer that loses contracts.

The Information Commissioner’s Office expects a documented, embedded approach to AI accountability, and small firms are inside that expectation when AI touches personal data. Investing in enterprise GRC software on a multi-year contract eats cash a small firm needs for revenue work, produces a data store no one reviews, and creates a false sense that governance is happening. The proportionate trail puts the firm in a defensible position without the overhead.

Where will you actually meet it?

You will meet it in three places. A procurement questionnaire that asks for the documents directly and scores the answer. A regulator query, rare for an SME but serious when it happens, where the ICO’s first question is whether the firm has thought through its accountability. And an incident, a hallucinated client output or a paste of restricted data into a chatbot, where the question becomes what the firm’s documented position was on the day.

The standing documentation set covers the first two. A tools register lists every AI tool in use, its purpose, the data category it touches, who is authorised, where it is hosted, the data retention, and the last review date. A written policy of two to four pages sets out acceptable use, data handling, oversight, incident response, confidentiality and training expectations, referencing the register by name. A quarterly review record, completed four times a year, captures what was reviewed, what changed since last quarter, any incidents or concerns, and the recommended actions. The per-incident layer is the brief written record produced when something goes wrong, half a page to two pages, contemporaneous, signed and dated.
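If it helps to see the register's shape concretely, the fields above can be sketched as a small script. The field names, example tools and the roughly-quarterly review window here are illustrative assumptions, not a prescribed format; a spreadsheet with the same columns does the same job.

```python
from datetime import date, timedelta

# Roughly quarterly; pick whatever interval your policy commits to.
REVIEW_INTERVAL = timedelta(days=92)

# Illustrative tools-register entries; one row per AI tool in use.
register = [
    {"tool": "ChatGPT Team", "purpose": "Drafting and research",
     "data_category": "No client personal data", "authorised": "All staff",
     "hosted": "OpenAI (US)", "retention": "30 days",
     "last_review": date(2025, 1, 6)},
    {"tool": "Otter.ai", "purpose": "Meeting transcription",
     "data_category": "Client names and voices", "authorised": "Partners only",
     "hosted": "Otter (US)", "retention": "1 year rolling",
     "last_review": date(2024, 9, 2)},
]

def overdue(entries, today):
    """Return the tools whose last review is older than the interval."""
    return [e["tool"] for e in entries
            if today - e["last_review"] > REVIEW_INTERVAL]

print(overdue(register, date(2025, 3, 1)))  # ['Otter.ai']
```

The point of the check is the quarterly cadence, not the tooling: whatever holds the register, each row needs a review date someone can compare against today's.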

The frameworks worth referencing without adopting wholesale are NIST AI RMF, ISO 42001 and the ICO accountability framework. NIST’s four functions (govern, map, measure, manage) give the spine. ISO 42001 covers the management-system shape. The ICO sets the UK baseline the procurement form is implicitly testing against. The Cyber Essentials certification, if you hold it, covers the access-control layer that sits underneath the policy.

When to ask vs when to ignore

Ask for the standing documentation if you do not have it, and aim to have it drafted in a working week. Monday for the tools register, Tuesday for the data classification pass, Wednesday for the policy draft, Thursday for the quarterly review template and the first completed review, Friday for packaging the pack and a one-paragraph response to the procurement form. Quarterly maintenance from then on is roughly two hours.

Ignore the enterprise vendor narrative that you need continuous logging, full conversation transcripts, real-time monitoring and a granular change log on every configuration adjustment. At SME scale these are operationally counterproductive. A 14-person firm that logs every ChatGPT conversation has not improved its governance, it has built a data lake no one will look at. Ignore indefinite retention. One year rolling for active systems plus one year after an incident is the proportionate default. And ignore the urge to invent an AI Ethics Committee or a Model Governance Board on paper. The roles do not exist in your firm, and naming them in a policy the firm cannot execute is worse than not having the policy at all.

The narrow exception is high-risk AI use as the EU AI Act defines it: AI affecting clinical decisions, employment outcomes, credit decisions, education access, law enforcement or critical infrastructure. There the logging requirement is specific, and the vendor typically carries most of it. If you are uncertain whether your use falls inside that definition, document the reasoning (“we have assessed our AI systems and determined that none fall within the EU AI Act’s high-risk definitions”) and keep the assessment with the rest of the pack.

Four close cousins sit alongside the audit trail. The minimum viable AI policy is the five-section, two-page policy the audit trail references. The proportionate AI risk register is the one-page spreadsheet of live use cases with risk, likelihood, impact and named owner. The monthly AI governance cadence is the rhythm that turns the documents into actual oversight. And the conceptual primer on AI audit trails covers what gets recorded technically and where vendor logs end.

The four together produce more practical AI governance than many enterprise frameworks deliver on paper. The discipline is the same in each case. The policy stays short. The register stays live. The audit trail stays honest. The cadence stays in the calendar.

If you are facing a procurement form with no documentation and a thirty-day clock, or you want a peer to sense-check the pack before you send it, book a conversation.

Sources

- UK Government (2020). Guidelines for AI procurement. Buyer expectations for AI suppliers in public sector contracts. https://www.gov.uk/government/publications/guidelines-for-ai-procurement
- NHS England (2024). Artificial intelligence information governance guidance. NHS procurement expectations on AI testing, documentation and oversight. https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligence/
- Information Commissioner's Office. Governance and accountability in AI. The UK regulator's published expectation for documented governance, DPO oversight and periodic audit. https://ico.org.uk/for-organisations/advice-and-services/audits/data-protection-audit-framework/toolkits/artificial-intelligence/governance-and-accountability-in-ai/
- Information Commissioner's Office. Personal data breach guide, the 72-hour notification window referenced in the retention discussion. https://ico.org.uk/for-organisations/report-a-breach/personal-data-breach/personal-data-breaches-a-guide/
- National Institute of Standards and Technology (2023). AI Risk Management Framework 1.0 and Generative AI Profile (2024). Four-function structure (govern, map, measure, manage) used as the spine of SME documentation. https://www.nist.gov/itl/ai-risk-management-framework
- International Organization for Standardization (2023). ISO/IEC 42001 Information technology, Artificial intelligence, Management system. International standard the SME approach maps to without adopting wholesale. https://www.iso.org/standard/42001
- European Union (2024). EU AI Act, Article 12 record-keeping and SME proportionality measures. https://artificialintelligenceact.eu/article/12/
- National Cyber Security Centre. Cyber Essentials overview. UK baseline standard for the access-control layer that wraps an SME AI policy. https://www.ncsc.gov.uk/cyberessentials/overview
- McKinsey & Company (2025). The state of AI. Most firms are still standing up AI governance, so an SME with even basic documented governance is ahead of many peers. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- International Association of Privacy Professionals (2024). IAPP AI governance survey. Confirms formal AI governance functions are still uncommon outside very large firms. https://iapp.org/news/a/the-2024-iapp-governance-survey-what-the-data-can-show-on-ai

Frequently asked questions

A client procurement form has just asked us to describe our AI governance and audit trail. We have nothing written down. What do we send?

You can assemble a defensible governance pack in about a working week. A one-page tools register listing the AI tools in use, what they do and what data goes in. A two to three page written policy covering acceptable use, data handling, oversight and incident response. A one-page quarterly review template with the first review completed and dated. Send those three documents with a paragraph explaining your governance is proportionate to a 5 to 50 person firm. That is more than most SMEs send back, and it is defensible.

Do we really not need to log every ChatGPT prompt?

For commercial off-the-shelf tools used for drafting, analysis and routine work, no. Prompt-by-prompt logging at SME scale creates a data store no one reviews and adds no governance value the quarterly review does not deliver. Where logging does matter is high-risk use, AI affecting clinical decisions, employment outcomes, credit decisions or anything inside the EU AI Act's high-risk definitions. In that narrow case the logging requirement is specific, not blanket, and the vendor usually handles it.

How long should we keep our audit documentation?

Long enough to support a breach investigation, which under UK GDPR runs on a 72-hour notification window plus proportionate retention afterwards. For an SME, one year rolling for active systems plus one year after an incident is a reasonable default. Indefinite retention of every log is operationally pointless. Get the retention rule written into your policy, and the question stops being a quarterly worry.
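That retention rule is simple enough to state as arithmetic. As an illustrative sketch only (the dates and function name are made up for the example): a record's deletion date is one year after it was created, extended to one year after any incident it relates to, whichever is later.

```python
from datetime import date, timedelta

ONE_YEAR = timedelta(days=365)

def deletion_date(record_date, incident_date=None):
    """One year rolling for active systems; an incident extends
    retention to one year after the incident date."""
    baseline = record_date + ONE_YEAR
    if incident_date is not None:
        return max(baseline, incident_date + ONE_YEAR)
    return baseline

# Routine record: delete a year on.
print(deletion_date(date(2025, 1, 10)))                    # 2026-01-10
# Record tied to a later incident: kept a year past the incident.
print(deletion_date(date(2025, 1, 10), date(2025, 6, 1)))  # 2026-06-01
```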

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
