An owner of a 28-person services firm rings her accountant about something else and gets a parting comment: “You should probably have an AI risk register by now.” She comes off the call, opens a browser tab, and starts reading. Forty minutes later she has skimmed three enterprise GRC templates that read like they were written for HSBC, a NIST framework with four functions and several hundred sub-controls, and a consultancy whitepaper recommending a dedicated AI governance officer. She closes the tab. Her firm has 28 people. She does not have a governance officer. She has a head of operations who already does five jobs.
This is the moment many small business owners reach the AI governance question. The gap between what the enterprise-scale templates ask for and what a 5-to-50-person firm can actually maintain is wide enough to swallow the project. A proportionate register, six to ten lines on a single page, sized for the leadership that actually exists, is the answer that closes that gap.
What is a proportionate AI risk register?
A proportionate AI risk register is a one-page document listing six to ten specific AI risks the business actually carries, each with a named owner, the current mitigation, and an observable early warning sign. It is reviewed quarterly inside the normal business review meeting. It is a working tool, not a compliance artefact, and it lives in a shared file the leadership team can edit.
The format matters less than the discipline. A Google Sheet, an Excel tab, or a Notion page will all do. What carries the weight is the five-column shape: the risk in one line, the owner by name, the current mitigation in two sentences, the early warning sign that would tell you the risk is moving, and a trigger that would force an out-of-cycle review. ISO 31000 and the NIST AI Risk Management Framework both describe this shape at length; the proportionate register is the SME-sized implementation.
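For owners who find it easier to see a structure than read a description of one, the five-column shape can be sketched as a single record. A minimal Python illustration, where the field names and the example values are assumptions for the sketch, not a prescribed template:

```python
from dataclasses import dataclass

@dataclass
class RegisterLine:
    """One line of a proportionate AI risk register (the five-column shape)."""
    risk: str            # the risk, stated in one line
    owner: str           # a named person, not a committee or a role
    mitigation: str      # current mitigation, two sentences at most
    early_warning: str   # observable sign that the risk is moving
    review_trigger: str  # event that forces an out-of-cycle review

# One example line; every value here is illustrative only.
line = RegisterLine(
    risk="Data leak via free-tier AI tools",
    owner="Head of Operations",
    mitigation="Paid team accounts only; client data is never pasted into free tools.",
    early_warning="Free-tool domains appearing in the firm's network logs",
    review_trigger="Any confirmed paste of client data into an unapproved tool",
)
```

In practice the record is simply one row of the spreadsheet; the sketch only makes explicit that a line is not finished until all five cells are filled in.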
Why does it matter for your business?
The regulator has made clear that your firm remains accountable for what your AI does, regardless of who built the tool. The Information Commissioner’s Office, in its 2024 AI and data protection guidance, states that organisations deploying third-party AI services remain the data controller for any personal data those services touch. If a staff member pastes a client record into a free chatbot, the firm carries the breach.
The same logic plays out across other risk surfaces. Air Canada argued in 2024 that its chatbot’s incorrect statement about bereavement fares was the chatbot’s problem, not the airline’s. A Canadian tribunal, British Columbia’s Civil Resolution Tribunal, disagreed and ordered the airline to compensate the customer. The Mata v Avianca sanctions in 2023 established the same principle in US federal court for AI-generated legal citations. A proportionate register is the document that turns “we are accountable for what our AI does” from an abstract regulatory statement into six lines a working management team can actually monitor.
What belongs on the page?
Six risks recur in nearly every well-run SME register. Data leak via free-tier AI tools, the Samsung 2023 pattern. Intellectual property exposure on AI-generated client work. Hallucination in customer-facing output, the Air Canada and Mata v Avianca pattern. Shadow AI use by staff. Regulatory misalignment when serving regulated jurisdictions. And operational dependency on a single AI vendor whose pricing or terms could change overnight.
Each of these is grounded in a real incident, not theory. Samsung confirmed in 2023 that employees had transferred source code and design diagrams to ChatGPT; under the tool’s default settings at the time, that data could be retained for model training. The Mata v Avianca sanctions established hallucination liability in US federal court the same year. Italy’s Data Protection Authority showed in 2023 that an individual EU member state can take enforcement action against an AI service used by deployers. Cybereason’s 2024 survey reported that around 73 per cent of organisations were aware of shadow AI use among their staff, with limited visibility into which tools were actually in play. Each of those incidents now reads as a register line with an owner, a mitigation, and an early warning sign that the firm watches for.
What does not belong on the page?
Existential AGI scenarios, AI sentience debates, and theoretical model bias in contexts far from your actual use. These are real intellectual questions, and they belong in policy journals, not on a working document for a 25-person firm. Including them diverts attention from the risks the business actually carries and makes the register read as performative. The page should cover risks that are material, plausible, and distinct from each other.
Supply-chain AI risks created by your suppliers’ use of AI, where you are not the deployer, are also better handled through vendor contracts and service-level agreements than through your own register. Your register addresses risks your firm creates through its own decisions. The line is not always perfectly clean, but the principle holds: a single page can only carry a small number of items, and the ones that earn space are the ones a named owner inside the firm can actually act on.
Theoretical model performance risks belong in the same category. A speculative scenario that the next generation of a language model will hallucinate in some new way is not a useful register entry. The register assumes the tools and capabilities that exist now, with the known failure modes, and updates as those change. If a vendor announces a material model upgrade, that is an out-of-cycle review trigger. If the academic literature reports a new vulnerability class that maps onto a tool the firm uses, the operations owner adds a line. Until either of those happens, the page stays focused on the risks the firm is actually carrying this quarter.
How do you keep it alive?
A 45-minute meeting every quarter, with the same three or four people, in the same calendar slot, with the same agenda. Review what was committed last quarter and whether it actually happened. Walk each line top to bottom: has the risk moved, has any early warning sign been observed, is the current mitigation still in place? Add any new risk. Retire any the firm has closed out.
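The walk itself is mechanical enough to sketch. A minimal Python illustration of the line-by-line pass, where the field names and the example entries are assumptions for the sketch, not a prescribed schema:

```python
# A quarter's walk over the register, sketched as a filter over its lines.
# Each line carries the answers to the two questions asked in the meeting.
register = [
    {"risk": "Data leak via free-tier AI tools",
     "warning_observed": False, "mitigation_in_place": True},
    {"risk": "Hallucination in customer-facing output",
     "warning_observed": True, "mitigation_in_place": True},
    {"risk": "Operational dependency on a single AI vendor",
     "warning_observed": False, "mitigation_in_place": False},
]

def review_walk(register):
    """Walk each line top to bottom; return the risks needing action this quarter."""
    return [line["risk"] for line in register
            if line["warning_observed"] or not line["mitigation_in_place"]]
```

In a real firm this is three people reading a spreadsheet aloud, not a script; the point the sketch makes is that every line must yield a yes-or-no answer to each question, and a line that cannot is not yet a working register entry.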
The discipline that keeps the register alive, rather than letting it become shelfware within two quarters, is bolting it onto the normal business review cycle. If quarterly business reviews already happen, the AI register can take fifteen minutes inside the existing meeting. It does not need its own calendar invite, its own preparation pack, or its own steering group. The owner-manager who wrote the register in the first quarter is the same owner-manager reviewing it in the fourth, and the document gradually becomes part of how the firm runs rather than a thing the firm has to maintain.
If you are sitting with a blank page and would like to talk through what your firm’s first six lines should say, book a conversation.