The proportionate AI risk register for a 5 to 50 person business

TL;DR

A proportionate AI risk register at SME scale is a one-page document with six to ten lines, each carrying a specific risk, a named owner, the current mitigation, and an early warning sign. It is reviewed in a 45 minute quarterly meeting alongside the normal business review, and six risks recur on nearly every well-run version of the page.

Key takeaways

- A proportionate SME register is one page, six to ten lines, each with a specific risk, a named owner, current mitigation, and an early warning sign.
- Six risks recur in nearly every well-run SME register: free-tier data leak, IP exposure on AI client work, hallucination in customer-facing output, shadow AI use, regulatory misalignment in served jurisdictions, vendor dependency.
- Existential AGI scenarios, AI sentience debates, and theoretical bias far from your actual use case do not belong on an SME register. They dilute focus on the real, current risks.
- Each line names an owner who already runs that functional area: the head of operations for tool risk, the head of delivery for client-facing accuracy, the managing director for regulatory and policy.
- The quarterly review is a 45 minute meeting inside the normal business review cycle, not a separate compliance ritual. Without that rhythm the register becomes shelfware within two quarters.

An owner of a 28 person services firm rings her accountant about something else and gets a parting comment: “You should probably have an AI risk register by now.” She comes off the call, opens a browser tab, and starts reading. Forty minutes later she has skimmed three enterprise GRC templates that read like they were written for HSBC, a NIST framework with four functions and several hundred sub-controls, and a consultancy whitepaper recommending a dedicated AI governance officer. She closes the tab. Her firm has 28 people. She does not have a governance officer. She has a head of operations who already does five jobs.

This is the moment at which many small business owners first meet the AI governance question. The gap between what the enterprise-scale templates ask for and what a 5 to 50 person firm can actually maintain is wide enough to swallow the project. A proportionate register, six to ten lines on a single page, sized for the leadership that actually exists, is the answer that closes that gap.

What is a proportionate AI risk register?

A proportionate AI risk register is a one-page document listing six to ten specific AI risks the business actually carries, each with a named owner, the current mitigation, and an observable early warning sign. It is reviewed quarterly inside the normal business review meeting. It is a working tool, not a compliance artefact, and it lives in a shared file the leadership team can edit.

The format matters less than the discipline. A Google Sheet, an Excel tab, a Notion page, all fine. What carries the weight is the five-column shape: the risk in one line, the owner by name, the current mitigation in two sentences, the early warning sign that would tell you the risk is moving, and a trigger that would force an out-of-cycle review. ISO 31000 and the NIST AI Risk Management Framework both describe this shape at length; the proportionate register is the SME-sized implementation.
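As an illustration only, with every detail hypothetical, a single line in that five-column shape might look like this in the shared sheet:

```
Risk:               Client data pasted into free-tier AI tools
Owner:              Head of operations
Current mitigation: Paid team accounts with data retention switched off; annual staff briefing
Early warning sign: Free-tier AI tools appearing in expense claims or browser audits
Review trigger:     Any suspected paste of client data, or a vendor changing its data terms
```

Each cell stays deliberately short. If a mitigation needs more than two sentences, it probably belongs in a procedure document the register links to, not on the page itself.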

Why does it matter for your business?

The regulator has made clear that your firm remains accountable for what your AI does, regardless of who built the tool. The Information Commissioner’s Office, in its 2024 AI and data protection guidance, states that organisations deploying third-party AI services remain the data controller for any personal data those services touch. If a staff member pastes a client record into a free chatbot, the firm carries the breach.

The same logic plays out across other risk surfaces. Air Canada argued in 2024 that its chatbot's incorrect statement about bereavement fares was the chatbot's problem, not the airline's. British Columbia's Civil Resolution Tribunal disagreed and ordered the airline to compensate the passenger. The Mata v Avianca sanctions in 2023 established the same principle in US federal court for AI-generated legal citations. A proportionate register is the document that turns "we are accountable for what our AI does" from an abstract regulatory statement into six lines a working management team can actually monitor.

What belongs on the page?

Six risks recur in nearly every well-run SME register. Data leak via free-tier AI tools, the Samsung 2023 pattern. Intellectual property exposure on AI-generated client work. Hallucination in customer-facing output, the Air Canada and Mata v Avianca pattern. Shadow AI use by staff. Regulatory misalignment when serving regulated jurisdictions. And operational dependency on a single AI vendor whose pricing or terms could change overnight.

Each of these is grounded in a real incident, not theory. Samsung confirmed in 2023 that employees had pasted source code and internal material into ChatGPT, putting confidential data beyond the company's control and at risk of being retained for model training. The Mata v Avianca sanctions established hallucination liability in US federal court the same year. Italy's Data Protection Authority showed in 2023 that an individual EU member state can take enforcement action against an AI service used by deployers. Cybereason's 2024 survey reported that around 73 per cent of organisations were aware of shadow AI use among their staff, with limited visibility into which tools were actually in play. Each of those incidents now reads as a register line with an owner, a mitigation, and an early warning sign that the firm watches for.

What does not belong on the page?

Existential AGI scenarios, AI sentience debates, and theoretical model bias in contexts far from your actual use. These are real intellectual questions and they belong in policy journals, not on a working document for a 25 person firm. Including them dilutes attention on the risks the business actually carries and makes the register read as performative. The page should cover risks that are material, plausible, and distinct from each other.

Supply-chain AI risks created by your suppliers’ use of AI, where you are not the deployer, are also better handled through vendor contracts and service-level agreements than through your own register. Your register addresses risks your firm creates through its own decisions. The line is not always perfectly clean, but the principle holds: a single page can only carry a small number of items, and the ones that earn space are the ones a named owner inside the firm can actually act on.

Theoretical model performance risks belong in the same category. A speculative scenario that the next generation of a language model will hallucinate in some new way is not a useful register entry. The register assumes the tools and capabilities that exist now, with the known failure modes, and updates as those change. If a vendor announces a material model upgrade, that is an out-of-cycle review trigger. If the academic literature reports a new vulnerability class that maps onto a tool the firm uses, the operations owner adds a line. Until either of those happens, the page stays focused on the risks the firm is actually carrying this quarter.

How do you keep it alive?

A 45 minute meeting every quarter, with the same three or four people, in the same calendar slot, with the same agenda. Review what was committed last quarter and whether it actually happened. Walk each line top to bottom: has the risk moved, has any early warning sign been observed, is the current mitigation still in place? Add any new risk. Retire any the firm has closed out.

The discipline that keeps the register alive, rather than letting it become shelfware within two quarters, is bolting it onto the normal business review cycle. If quarterly business reviews already happen, the AI register gets its 45 minutes inside the existing meeting. It does not need its own calendar invite, its own preparation pack, or its own steering group. The owner-manager who wrote the register the first quarter is the same owner-manager reviewing it the fourth quarter, and the document gradually becomes part of how the firm runs rather than a thing the firm has to maintain.

If you are sitting with a blank page and would like to talk through what your firm’s first six lines should say, book a conversation.

Sources

- Information Commissioner's Office (2024). Guidance on AI and data protection. Establishes that organisations deploying third-party AI tools remain the data controller and bear GDPR responsibility regardless of who built the tool. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
- Information Commissioner's Office (2023). Data protection impact assessments (DPIAs). DPIA expectations for higher-risk AI uses, including recruitment, performance evaluation, and customer-facing automated decisions. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/accountability-and-governance/data-protection-impact-assessments-dpias/
- National Cyber Security Centre (2024). Guidelines for secure AI system development. UK national guidance on AI security risks, supply chain exposure, and data exfiltration through AI systems. https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development
- EU AI Act (2024). Article 9, risk management system requirements for high-risk AI systems. Relevant to UK firms serving EU customers in scope. https://artificialintelligenceact.eu/article/9/
- National Institute of Standards and Technology (2023). AI Risk Management Framework (AI RMF 1.0). The Govern, Map, Measure, Manage structure that a one-page SME register implements in proportionate form. https://www.nist.gov/itl/ai-risk-management-framework
- International Organization for Standardization (2018). ISO 31000:2018 risk management principles. The international standard for proportionate, organisation-fit risk management that the SME register operationalises at small scale. https://www.iso.org/standard/65694.html
- Reuters (2023). Samsung bans use of generative AI tools after employee data leak. The reference incident behind the free-tier data leak line item: confidential code uploaded to ChatGPT by staff. https://www.reuters.com/technology/samsung-bans-use-generative-ai-tools-such-chatgpt-after-data-leak-2023-05-02/
- Courthouse News (2023). Lawyer sanctioned for citing fake ChatGPT cases, Mata v Avianca. Federal court sanctions for filing AI-generated citations to non-existent cases, the canonical hallucination liability precedent for any firm putting AI output in front of customers. https://www.courthousenews.com/lawyer-sanctioned-citing-fake-chatgpt-cases-in-court/
- BBC News (2024). Air Canada must pay refund after chatbot's mistake. Canadian tribunal finding that the airline remained liable for inaccurate chatbot statements about its bereavement fare policy. https://www.bbc.co.uk/news/business-68285307
- Garante per la protezione dei dati personali (2023). Italian DPA temporary ChatGPT ban decision. EU member-state enforcement of GDPR transparency and lawful basis against an AI service used by deployers, the canonical regulatory misalignment precedent. https://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9870832

Frequently asked questions

How many lines should a small business AI risk register have?

Six to ten. Fewer than six and you are probably missing material risks like vendor dependency or regulatory misalignment. More than twelve and the register dilutes into a watchlist nobody reads. Six is a reasonable starting point for a 5 to 25 person firm; eight to ten suits a firm closer to 50 staff with multiple service lines or regulated customer segments.

Who owns the register in a firm without a compliance team?

The managing director signs the register. Each line item gets a named owner from the existing leadership, the head of operations for tool and data risk, the head of delivery for customer-facing accuracy and IP, the MD for regulatory and policy. Naming an owner per line, rather than a single risk lead, is the load-bearing move. Each owner already runs that functional area; the register just makes the AI dimension of their existing role explicit.

How often should the register be reviewed?

Quarterly, in a 45 minute meeting bolted onto the normal business review, with an out-of-cycle review triggered by any incident, client complaint, regulatory enquiry, or material change in tool or vendor terms. Quarterly is frequent enough to keep the document live without becoming bureaucratic overhead. Monthly reviews tend to drift into theatre; anything less frequent than quarterly and the register goes stale before the AI landscape does.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
