The MD of a 14-person professional services firm has just downloaded a 71-page AI governance playbook from one of the Big Four. She has read the first 10 pages. The playbook references an “AI ethics committee”, a “model governance board”, a “Chief Risk Officer”, and a “Data Steward”. She has none of these. The firm has her, the operations lead, and 12 other people who do client work. She closes the PDF and opens her email. The governance question does not go away.
This is the position most owner-led SMEs are sitting in right now. The published frameworks are credible and serious. They were also written for organisations 100 to 1,000 times the size of the firm reading them. The owner who reads them ends up in one of two places: convinced governance is impossible at this scale, or copying enterprise templates the firm cannot execute. Both responses leave the firm exposed.
Why does enterprise AI governance fail at this size?
Enterprise governance frameworks presume infrastructure that the 5-50 person business does not have. NIST AI Risk Management Framework 1.0 prescribes four functions (Govern, Map, Measure, Manage) with role specialisation behind each. ISO/IEC 42001 describes a documented AI management system with auditing cadence. Big Four playbooks reference an AI Ethics Committee, a Model Governance Board, a Chief Risk Officer. Each of those roles assumes a person who exists at that company.
At a 14-person firm, the MD is usually all of those roles simultaneously. The operations lead picks up whatever is left. There is no committee to convene. Forcing the firm's reality into the enterprise role structure produces a policy that looks professional but cannot be executed. The first time something goes wrong, the gap shows.
The frameworks are right for the businesses they were written for. The structural error is reading them as a template for a business they were not written for.
What about sole-trader guidance instead?
Sole-trader guidance lives at the other end of the spectrum. It assumes one person, informal control, and "use common sense" as the operating model. That works for a one-person consultancy where the owner is also the data controller. It breaks at a 25-person firm with five paralegals on free ChatGPT, two solicitors on Claude, and a partner with no current view of what the team has adopted.
The 5-50 person business is caught in the middle. Too large for “use common sense” to count as governance. Too small for the enterprise infrastructure. The size class deserves its own playbook, and very few exist.
What is the actual shape that works?
One MD, one operations lead, a 2-3 page policy, a one-page risk register, a 30-minute monthly review, an annual audit. Six to eight risk categories on the register, six or seven sections on the policy, three calendar invites a year for the formal cadence. Total staff time across the year: 16-32 hours. Total cost in staff time: £800-2,000.
The MD owns governance and signs the policy. The operations lead maintains the register, tracks incidents, and surfaces issues for the monthly check-in. No committee, no board pack, no specialised AI governance software. The infrastructure is whatever the firm uses for its existing operations: shared spreadsheet, shared document folder, the team’s communication channel.
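As an illustration of what that shared spreadsheet might hold, here is one possible shape for the one-page register. The categories, ratings, and owners below are hypothetical examples, not a prescribed list; the right six to eight rows depend on the firm's sector and tools.

```
Risk                                   Likelihood  Impact  Owner     Mitigation                          Last reviewed
Client data pasted into free-tier AI   High        High    Ops lead  Approved-tools list; paid tiers     2026-01
AI-drafted output sent unreviewed      Medium      High    MD        Human review before anything leaves 2026-01
Unknown tools adopted by staff         High        Medium  Ops lead  Quarterly tool inventory            2026-01
Vendor changes data-retention terms    Low         High    Ops lead  Annual vendor terms check           2025-11
Regulatory query on AI use             Low         High    MD        Policy on file; incident log        2025-11
Over-reliance on AI for client advice  Medium      Medium  MD        Spot-checks in monthly review       2026-01
```

Each row is one line in the monthly 30-minute check-in: the operations lead reads down the list, flags anything that has changed, and the MD decides whether the mitigation still holds.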
The framework underneath this is borrowed pragmatically from three sources. The data protection rules come from the ICO’s AI guidance and risk toolkit, which is written for a UK audience and grounded in actual legal obligations. The conceptual structure comes from NIST AI RMF’s four functions, used as a thinking-prompt rather than a documentation requirement. The security taxonomy comes from OWASP LLM Top 10 if the firm uses large language models, which by 2026 is most firms.
ISO/IEC 42001 certification is skipped. Certification costs £5,000-25,000 with annual recertification. No regulator mandates it. The selling point is reputational, mainly relevant for organisations whose customers require certified AI governance. For most owner-led SMEs, the cost-benefit is not there.
Why does copying enterprise templates make things worse?
Because it produces a policy that looks like the firm has governance when the firm cannot actually execute it. The “AI Ethics Committee” line in the policy never gets convened because the firm does not have an Ethics Committee. The “annual independent audit” line is not actioned because the firm does not engage external auditors. The policy gets signed and filed and never opened again.
When something goes wrong (a data leak, an AI hallucination that left the building, a regulatory query), the policy turns out to describe a governance function that does not exist. The exposure is worse than no policy because the firm is now in the position of having an unenforced compliance document on file.
The remedy is not less governance. The remedy is governance the firm can actually execute, written for the staff and structure that actually exist.
Where do you start?
Five questions and ten minutes with a notebook. Which regulators apply to us specifically? Who in this room owns AI governance? What AI tools are actually being used in the firm right now? Where is the personal data, and where could it leak? What would we do if a tool leaked tomorrow? Those five answers, written down, are the start of usable governance.
The first answer is usually ICO at minimum, plus FCA, SRA, GMC, or ICAEW depending on sector. The second is almost always the MD, with the operations lead day-to-day. The third often surprises the MD: more tools than they thought, and free-tier ChatGPT in places it should not be. The fourth maps to customer records, employee files, and client matter information. The fifth question is the one most firms have not answered, and it is the one regulators ask first when something has gone wrong.
These five answers are not the whole of governance, but they are the right starting point for a 14-person firm: calibrated to the staff and structure that actually exist.
If you are sitting with a downloaded enterprise framework that does not match the firm you actually run, and you want to talk about what governance looks like at your specific scale, book a conversation.