The one-page AI risk register that fits an SME

[Image: an MD at a desk, laptop showing a blank document, a notepad with handwritten draft categories beside it, papers and a coffee cup on the desk]
TL;DR

The AI risk register at SME scale is the one-page document that forces the firm to systematically consider what can go wrong and how it would respond. Six to eight categories cover the ground: data leakage, hallucination, bias, IP, vendor lock-in, regulatory non-compliance, reputational, and security. Each row gets likelihood, impact, mitigation, owner, review date. The MD owns the document; the operations lead maintains it. Monthly informal review for high-likelihood/high-impact items, quarterly formal review for the rest.

Key takeaways

- Six to eight risk categories cover the AI ground at SME scale: data leakage, hallucination/misinformation, bias and discrimination, IP and copyright, vendor lock-in, regulatory non-compliance, reputational, and security (prompt injection / model misuse).
- One-page format: Risk (one-line description), Likelihood (Low/Medium/High), Impact (Low/Medium/High), Mitigation, Owner, Review Date.
- The MD owns the register and signs off. The operations lead maintains it day-to-day.
- Cadence: monthly informal review for high-likelihood/high-impact rows, quarterly formal review for the full register, annual comprehensive audit. Total 30-45 minutes per quarter for the formal session.
- Escalation triggers are named in the register: if three incidents occur on the same tool in a quarter, the tool is paused. If a vendor announces a breach, exposure is audited within 48 hours.
- The register is an active management tool, not a defensive document for after-the-fact litigation.

The MD of a 35-person consultancy at his Friday-afternoon planning hour. He has heard the phrase “AI risk register” three times this month: from his accountant, from the firm’s insurance broker, and from a client who runs a regulated business. He has not built one. He sits down to start. He opens a blank Word document and types “AI Risk Register”. He freezes. He does not know what categories should be on the page. He has read enough enterprise GRC content to know that the enterprise route is overkill for a firm his size. The blank page sits between him and the first category.

This is the practical SME governance moment most owners reach without anyone naming it for them. The register itself is straightforward: six to eight categories, one row each, a likelihood, an impact, a mitigation, an owner, a review date. The work is filling in the rows specifically enough to be useful, sparingly enough to be maintained.

What categories should be on the register?

Six to eight cover the AI ground at SME scale. Data leakage (personal data, client confidential information, or proprietary information exposed via AI). Hallucination and misinformation (AI generating wrong content that gets relied on). Bias and discrimination (AI decisions systematically disadvantaging protected groups). Intellectual property and copyright (training data claims, output overlap with proprietary content). Vendor lock-in (data and process dependence on a single vendor). Regulatory non-compliance (UK GDPR, sector regulators, professional standards).

Two more categories complete the typical SME set. Reputational (customer or market reaction to AI use or misuse, particularly around undisclosed AI-generated content). Security (prompt injection attacks, unauthorised model use, data poisoning). For most 10-50 person SMEs, these eight categories are the complete picture. Adding more rows tends to dilute the register; the firm spends time on speculative risks and misses the active ones.

What does the one-page format look like?

Six columns. Risk (a one-line description). Likelihood (Low, Medium, or High). Impact (Low, Medium, or High). Mitigation (the control in place to reduce the risk). Owner (who is responsible for managing this risk). Review Date (when the row gets revisited).

The whole document fits on one page or one spreadsheet tab. A worked row, fully populated:

Risk: Employee inputs client data into free ChatGPT, exposing confidential information
Likelihood: High
Impact: High
Mitigation: Policy forbids client data in free tools; paid commercial tier provided; monthly team check-in on AI use; data classification reference table circulated
Owner: MD
Review: Monthly

Six to eight rows like this make up the whole document. Each row should be short enough that the reader can take it in at a glance. If a row needs more than 30 words to describe the risk and its mitigation, the row is doing too much work and probably wants splitting.
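For the blank-page moment itself, the starting skeleton is nothing more than the column headers plus the eight category names, one per row, with every other cell left empty until the first review session fills it in. An illustrative layout, ready to paste into a spreadsheet tab:

Risk | Likelihood | Impact | Mitigation | Owner | Review Date
Data leakage | | | | |
Hallucination and misinformation | | | | |
Bias and discrimination | | | | |
IP and copyright | | | | |
Vendor lock-in | | | | |
Regulatory non-compliance | | | | |
Reputational | | | | |
Security (prompt injection / model misuse) | | | | |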

Who owns the register and what is the rhythm?

The MD owns overall AI governance responsibility and signs the register. The operations lead (or equivalent) maintains it day-to-day, tracks incidents, and surfaces issues for the monthly check-in. If the firm has a designated compliance or IT lead, they can fill the operations-lead role; in most 10-50 person firms, that role does not exist as a separate function.

The cadence is three-tiered. Monthly informal review for high-likelihood/high-impact rows: 15-30 minutes, the operations lead surfaces anything new, the MD makes any decisions needed. Quarterly formal review of the full register: 30-45 minutes, MD plus operations lead, every row walked through and updated. Annual comprehensive audit: 1-2 hours, the policy and register reassessed against the firm’s actual AI use over the year.

What about escalation triggers?

The register names rules for when to stop using a tool or escalate to professional advice. Without named triggers, the firm ends up making case-by-case decisions under pressure when something has gone wrong, which is the worst time to be deciding policy.

Two examples of useful triggers. “If three incidents occur within a quarter on the same AI tool, the tool is paused pending review.” This converts a pattern of small incidents into a clear stopping rule. “If a vendor announces a security breach, our exposure is audited within 48 hours and the MD is informed within 4 hours of the announcement.” This converts an external event into a defined internal action. Each trigger names a threshold and a response.

How does the register actually get used?

Two worked examples show the register operating. A contract-review tool hallucinated a case citation in a document that left the firm before anyone caught it. The register was updated to add mandatory partner review on AI-generated legal output, the hallucination row’s mitigation was strengthened, and the review cadence on that tool moved from quarterly to monthly.

A separate incident involved an accountant who drafted a client memo in free ChatGPT and then accidentally CC’d it to the wrong recipient. The register was updated to forbid free tools for any client information, a paid ChatGPT subscription was funded, and the firm-wide training brief was refreshed. The pattern is the same in both cases: the register turns each incident into a documented mitigation, so six months later the firm can show what was learned and what changed.

What does the register stay clear of?

Three confusions are worth heading off. Enterprise GRC programmes belong to enterprises with dedicated governance staff. Quarterly board packs belong to firms with a board to read them. Defensive documents filed against future litigation belong to firms expecting litigation. The SME register sits in a different category: an active management tool for the next 12 months of AI operating decisions, sized for the staff and structure that actually exist.

The register changes as the firm’s AI use changes. Rows are added when new tools enter the firm. Rows are retired when tools leave. Mitigations are updated when something works better than expected, and tightened when an incident has shown the existing mitigation was insufficient.

If you are sitting at a Friday-afternoon desk with the blank page open and you would like to talk through what the first eight rows should say for the firm you actually run, book a conversation.

Frequently asked questions

What categories should be on an SME AI risk register?

Six to eight: data leakage, hallucination and misinformation, bias and discrimination, intellectual property and copyright, vendor lock-in, regulatory non-compliance, reputational, and security (prompt injection, model misuse). Each row gets a one-line risk description, likelihood, impact, mitigation, owner, review date. The whole register fits on a single page or one spreadsheet tab.

Who owns the AI risk register at SME scale?

The MD owns overall AI governance and signs the register. The operations lead maintains it day-to-day, tracks incidents, and surfaces issues for the monthly check-in. If the firm has a designated compliance or IT lead, that person can fill the operations-lead role. The key point is named ownership: someone updates the register, someone reviews it on cadence.

How often should the register be reviewed?

Three cadences. Monthly informal: 15-30 minutes, surface any incidents or new tools, update any high-likelihood high-impact rows. Quarterly formal: 30-45 minutes, MD plus operations lead, walk the full register top to bottom. Annual: full audit, the policy and register reassessed against the year's actual AI use.

What does an escalation trigger look like in the register?

A specific decision rule. Example: if three incidents occur on the same AI tool within a quarter, the tool is paused pending review. Example: if a vendor announces a security breach, the firm audits its exposure within 48 hours. Clear triggers prevent ad-hoc decision-making and reduce the chance a problem festers between formal reviews.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
