The owner of a 22-person consulting firm forwards a newly bought 14-page AI policy template to his operations lead. “Can you adapt this for us?” The operations lead opens the file. Page 3 references an “AI Ethics Committee”. Page 5 references a “Chief Risk Officer”. Page 7 references an “Annual Independent Audit Programme”. The firm has none of these. The operations lead writes back. “I do not think this template is built for us.” The owner replies. “Then what should we have?”
The honest answer is shorter than the bought template. Two to three pages. Seven sections. One MD to sign it, one operations lead to maintain it, an annual review. That is the document the firm needs. The sections of the bought template that do not map to the firm’s structure should be left out entirely. Faking them produces a worse document than omitting them.
What are the seven sections that need to be on the page?
Scope, business purpose, responsibility, allowed and forbidden uses, data handling, human oversight, incident reporting. Each gets one or two paragraphs. The whole document is two to three pages, signed off by the MD and reviewed annually. If the policy runs to five pages, it is either too prescriptive for the firm to execute or it duplicates guidance that already lives in the firm’s data protection policy or employee handbook.
The seven sections are the same regardless of sector; the specifics inside each section change with the regulators that apply to the firm.
What does scope actually look like?
A simple statement: this policy applies to the use of AI systems and tools by all employees, contractors, and agents of the company in business activities. AI systems include large language models such as ChatGPT, generative AI image tools, and machine learning models embedded in third-party software. Broad enough to cover what is actually in use. Narrow enough to exclude physical systems like manufacturing equipment that fall under different governance.
Right after scope comes business purpose. Two sentences. The company is adopting AI to improve productivity, reduce costs, and support better decision-making. All AI use must comply with data protection law, sector-specific regulations, and this policy.
Who owns the policy?
Two named people. The MD approves new tools and signs the policy. The operations lead maintains the AI tool register, runs the quarterly check-in, and triages any incidents. If the firm has a designated compliance or IT lead, they can fill the operations-lead role; most firms of 10 to 50 people do not have one.
The “all employees are responsible for using approved tools only and reporting misuse to the operations lead immediately” line goes in too. Three sentences in total cover responsibility. Anything more starts to look like an enterprise document.
How specific should the allowed and forbidden uses be?
Specific enough that an employee with a document open and an AI tool tab open can answer the question for themselves. Vague allowed-use guidance is the reason employees end up improvising the rules and getting it wrong. Each example needs to be concrete enough to be a recognisable situation. Two paragraphs, four or five examples each, is the right density.
Allowed examples: drafting and summarising routine content with human review before external use; brainstorming and idea generation with no personal or confidential data input; code review and software testing with qualified developer review; routine customer service via a clearly labelled AI chatbot with a path to a human.
Forbidden examples: feeding client confidential data into free public AI tools; using AI as the sole basis for hiring, lending, or eligibility decisions about people; AI-generated medical advice without practitioner sign-off; undisclosed AI-generated content in contexts where disclosure is required (EU customer-facing chatbots, marketing imagery in regulated markets).
The allowed list and the forbidden list together do most of the policy’s work. Employees who read these two sections should be able to make 90 percent of their day-to-day decisions without escalating.
What is the data handling rule the policy hinges on?
Personal data and confidential information only go into tools that have a Data Processing Agreement, have committed contractually not to use data for training, and are UK GDPR compliant. In practice that means the paid commercial tier of major LLMs (ChatGPT Enterprise, Claude for Work, Gemini Business, Microsoft Copilot for Microsoft 365), not the free tier.
This single rule prevents most of the data-leakage exposure SMEs face from AI use. The Samsung 2023 incident, where employees used free ChatGPT to handle confidential semiconductor design work, illustrates exactly the failure mode this rule blocks. The fix is paid commercial subscriptions for the people who actually need them, typically £15-30 per employee per month.
A separate data classification reference (Public, Internal, Confidential, Restricted) tells employees which data tier is allowed in which tool. The classification table lives alongside the policy as a one-page reference.
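As a sketch of what that reference might say, assuming the tool split described above (the tier rules here are illustrative, not a template):

Public: any approved tool, free or paid.
Internal: approved paid-tier tools with a DPA only; never free public tools.
Confidential: approved paid-tier tools with a DPA, and only where the task requires it; reviewer sign-off before anything leaves the firm.
Restricted: no AI tools; handled under the firm’s existing data protection controls.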
What does human oversight require?
Material decisions about people (hiring, lending, customer suitability, clinical recommendations) cannot be made solely on the basis of AI analysis. Human review and judgment apply. UK GDPR Article 22 makes this a legal obligation, with narrow exceptions that still require human safeguards. For routine AI output that affects external parties, the rule is review-before-external-use: someone with subject-matter expertise reads the material and signs off before it leaves the firm.
The reviewer takes responsibility for the output. The AI tool’s role is decision support; the named human is the decision maker. This separation matters most when the AI produces something that looks plausible and is wrong (a hallucinated case citation, an outdated tax rate, a wrong customer name).
The incident-reporting paragraph closes the document. Anyone who notices a leak, a wrong AI output that left the building, or a policy breach reports to the operations lead immediately. The operations lead investigates and escalates material incidents to the MD within 24 hours. The incident log is maintained and reviewed monthly.
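For illustration, the log itself needs no more than a row per incident: date, who reported it, the tool involved, what happened, what data was affected, the action taken, and whether it was escalated. A shared spreadsheet is usually enough at this scale.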
What are the three failure modes to avoid?
Three policy failures account for most of what goes wrong at SME scale: the enterprise-template copy, the default-ban policy, and the restrictions-only policy. Each looks professional from the outside, and each creates a different kind of exposure for the firm. The pattern across all three is that the policy describes a governance posture the firm cannot actually execute, which is worse than no policy because it creates the false sense that governance is in place.
The enterprise-template copy. The firm downloads a 30-page playbook from a Big Four website, swaps “AI Ethics Committee” for “leadership team”, signs it, files it. The references to roles the firm does not have stay on the page. When something goes wrong, the gap between the document and the firm shows immediately.
The default-ban policy. The firm tells employees not to use AI tools without approval and provides no approved tools. Employees use AI anyway because the work demands it. Leadership has no visibility. Shadow AI accumulates and the policy becomes the camouflage that hides it.
The restrictions-only policy. The firm tells employees what they cannot do and gives no guidance on what good use looks like. Employees who could be using AI productively for drafting or summarising improvise the rules themselves. The policy creates fear and confusion where it should create clear capability.
What is the right length, ownership, and review cadence?
Two to three pages of policy, signed by the MD, maintained by the operations lead, reviewed annually with a quarterly informal check-in to surface emerging issues. Acknowledged by all employees on adoption and on any material update. The policy lives alongside a one-page data classification reference (Public, Internal, Confidential, Restricted) and a one-page risk register. Three documents, total length under five pages, all maintained by the same two people.
The policy is a working document, not a compliance artefact. If it is signed once and never opened, it has not done its job. The quarterly check-in is what keeps it alive: 15 minutes, the team surfaces any new tools or incidents, the operations lead notes anything that needs follow-up. Annual review is the deeper pass where the policy itself is reassessed against the firm’s actual AI use.
If you have a bought template that does not fit the firm you actually run, and you want to talk about what governance looks like at your specific scale, book a conversation.



