The owner of a fourteen-person consultancy has been meaning to write an AI policy for nine months. She has bookmarked four templates from professional bodies. One is twenty-six pages. Two assume the firm has a Head of Information Security. One opens with a section on the firm’s AI Ethics Committee. None of them are anywhere near what her firm needs, and she knows it. So the bookmarks sit there, the team carries on using ChatGPT and Claude every day, and the policy folder stays empty.
This is where many owner-led SMEs land. The published AI policy material is written for organisations with HR teams, IT functions and legal counsel. The version a 5 to 50 person business actually needs is two pages, plain language, five sections, drafted in a single ninety-minute workshop. Shorter than the typical owner expects when they go looking for templates, and longer than the typical SME currently has written down.
What is a minimum viable AI policy?
A minimum viable AI policy is the shortest written document that still does the job of policy. For a 5 to 50 person business that comes out at two pages, five sections, plain language, signed by the owner. It says what is permitted, what data must never go into AI tools, when use is disclosed to clients, who owns the document, and what happens when something goes wrong.
Anything beyond those five questions belongs in operational practice, not in the policy. The job of the document is to be short enough for the team to actually read, clear enough to be defensible to the ICO and to an enterprise client procurement contact, and stable enough that it does not need rewriting every quarter. Below that threshold there is no governance. Above it there is paperwork no one opens.
Why does it matter for your business?
UK GDPR applies regardless of headcount, the Information Commissioner’s Office expects written accountability for how personal data is processed, and enterprise clients now ask for the document in procurement. Without one, every employee makes private judgements about whether a piece of client data is safe to paste into a chatbot, the answer to a client question is improvised, and the regulator’s view of the firm rests on whatever the owner remembers on the day.
A written policy fixes the worst of that for a small fraction of the time enterprise templates demand. It also surfaces the conversations the team has been quietly avoiding, about disclosure to clients, about which tier of which tool is in use, and about who decides when a new AI tool gets adopted. Those conversations are usually short. The relief of having had them is large.
Where will you actually meet it?
You will meet the policy in three places. When a client requests it in a supplier questionnaire or contract amendment, now routine in enterprise procurement. When a staff member asks whether they can use a new AI tool for client work and there is no agreed answer. And when something goes wrong and the question becomes what the firm’s stated position was.
The five sections cover acceptable use (which tools, which tasks), data handling (which categories of data can and cannot go into AI tools), client disclosure (when and how the client is told), named owner and review cadence, and escalation when something goes wrong. Tool-specific instructions, prompt libraries and review checklists sit outside the policy in operational practice, where they change without needing a policy revision. That split is the discipline that keeps the policy short and the team reading it.
The data handling section does more work than the other four put together. Three data categories usually suffice. Public or firm-created data can go into commercial AI tools without restriction. Client data can go in only with written client authorisation. Restricted data, the catch-all for personal data, financial information, IP that does not belong to the firm and anything covered by an NDA, never goes in, with no employee discretion. That single rule prevents most of the data-leakage exposure an SME faces, and it is the rule the ICO will check first.
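The three-tier rule is strict enough that it can be written down as a decision table. The sketch below is purely illustrative, not part of any real policy document: the category names and the helper function are hypothetical, chosen only to show that the rule leaves no room for case-by-case discretion.

```python
# Illustrative sketch of the three-tier data handling rule.
# Category names and this helper are hypothetical examples, not a standard.

def may_enter_ai_tool(category: str, client_authorised: bool = False) -> bool:
    """Return True if data in this category may go into a commercial AI tool."""
    if category == "public_or_firm_created":
        return True               # no restriction
    if category == "client":
        return client_authorised  # only with written client authorisation
    if category == "restricted":
        # Personal data, financial information, third-party IP, NDA material:
        # never, with no employee discretion.
        return False
    raise ValueError(f"Unknown data category: {category}")
```

The point of writing it this way is that the only input with any judgement in it is whether written client authorisation exists, which is a fact on file, not an opinion formed at the moment of pasting.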
When to ask vs when to ignore
Ask “is this a policy question or an operational question?” every time something new lands. Adopting a new tool is operational if the tool fits within the existing acceptable use and data handling categories. It becomes a policy question only when it does not fit, for example when the team starts using AI for video editing and the policy covers only text and image work. A new prompt template, a review checklist, a tool configuration guide: all operational.
Ignore the urge to put tool names and prompt examples in the policy. They date quickly, they make the document longer than the team will read, and they pull the owner into approving operational changes that should not need her attention. Ignore the urge to publish the policy as a downloadable template for clients. The value is in your firm having one, not in productising it. And ignore enterprise frameworks that try to add an AI Ethics Committee, a Model Governance Board or a Chief AI Officer to a fourteen-person firm. The roles do not exist, and pretending they do produces a policy the firm cannot execute.
The drafting process worth running is one ninety-minute workshop with the owner and one operations lead, plus a second pass after staff input on the draft. The workshop walks through each of the five sections by asking direct questions. What tools is the team currently using and which would you stop. What is the most sensitive data in the business and how would you feel if it landed in a third-party AI tool. Do your clients know AI is being used in their work and should they. Who in the firm owns this document and when do you review it. What is the protocol when something goes wrong. Ninety minutes of those questions, with someone taking notes, is enough to draft the policy. The owner and operations lead then redline once, the team gets one round of input, and the document is signed.
Related concepts
Three close cousins sit alongside the policy. The proportionate AI risk register is a one-page spreadsheet listing the firm’s actual AI use cases with risk, likelihood, impact and named owner per row. The audit trail an SME actually needs is the incident log plus the quarterly review note. The conversation with staff about AI use is what turns the policy from a signed document into actual behaviour on the ground.
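A register of that shape fits in any spreadsheet. As an illustrative sketch only, with hypothetical column names and a hypothetical example row, here is the one-page structure written out as a CSV:

```python
# Illustrative sketch of a one-page AI risk register as a CSV.
# Column names and the example row are hypothetical, not a prescribed format.
import csv
import io

COLUMNS = ["use_case", "risk", "likelihood", "impact", "owner"]

rows = [
    {"use_case": "Drafting client reports with a commercial AI tool",
     "risk": "Client data pasted in without written authorisation",
     "likelihood": "medium", "impact": "high", "owner": "Operations lead"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

One row per actual use case, one named owner per row: if the register grows past a page, it has stopped listing what the firm actually does and started listing what someone imagines it might do.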
What ties the four together is the same discipline. The policy stays short, the register stays live, the log stays honest, the conversation stays open. A 5 to 50 person firm that does those four things has more practical AI governance than many enterprise frameworks deliver on paper, and the owner has her evenings back.
If you want a peer to sense-check what you have written, or to facilitate the ninety-minute workshop with you and your operations lead, book a conversation.