The two-page AI policy that fits a 10-person business

[Image: An operations lead at a desk marking up a printed template with a red pen, a laptop open beside her with a blank document]
TL;DR

A working AI policy for a 10-person business has seven sections (scope, business purpose, responsibility, allowed and forbidden uses, data handling, human oversight, incident reporting), fits on 2-3 pages, gets signed by the MD, and is reviewed annually. The most common SME mistakes are copying enterprise templates the firm cannot execute, writing default-ban policies employees ignore, and writing only restrictions without practical guidance.

Key takeaways

- The seven sections every SME AI policy needs: scope, business purpose, responsibility, allowed and forbidden uses, data handling, human oversight, incident reporting.
- Total length: 2-3 pages. If it exceeds five pages, it is either too prescriptive or duplicating other policies.
- Two named owners: the MD signs and approves; the operations lead maintains the register and reviews it monthly. No committee.
- The data handling rule that drives everything else: personal and confidential data only goes into tools with a Data Processing Agreement, training-disabled options, and UK GDPR compliance. In practice this is the paid commercial tier of major LLMs.
- Three failure modes to avoid: the enterprise-template copy (looks professional, references roles the firm does not have); the default-ban policy (gets quietly circumvented by shadow AI); the restrictions-only policy (no practical guidance on what good use looks like).

The owner of a 22-person consulting firm forwards a newly bought 14-page AI policy template to his operations lead. “Can you adapt this for us?” The operations lead opens the file. Page 3 references an “AI Ethics Committee”. Page 5 references a “Chief Risk Officer”. Page 7 references an “Annual Independent Audit Programme”. The firm has none of these. The operations lead writes back. “I do not think this template is built for us.” The owner replies. “Then what should we have?”

The honest answer is shorter than the bought template. Two to three pages. Seven sections. One MD to sign it, one operations lead to maintain it, an annual review. That is the document the firm needs. The sections of the bought template that do not map to the firm’s structure should be left out entirely. Faking them produces a worse document than omitting them.

What are the seven sections that need to be on the page?

Scope, business purpose, responsibility, allowed and forbidden uses, data handling, human oversight, incident reporting. Each gets one or two paragraphs. The whole document is 2-3 pages, signed off by the MD and reviewed annually. If the policy runs past five pages, it is either over-prescriptive for the firm to execute or duplicating guidance that already lives in the firm’s data protection policy or employee handbook.

The seven sections are the same regardless of sector; the specifics inside each section change with the regulators that apply to the firm.

What does scope actually look like?

A simple statement: this policy applies to the use of AI systems and tools by all employees, contractors, and agents of the company in business activities. AI systems include large language models such as ChatGPT, generative AI image tools, and machine learning models embedded in third-party software. Broad enough to cover what is actually in use. Narrow enough to exclude physical systems like manufacturing equipment that fall under different governance.

Right after scope comes business purpose. Two sentences. The company is adopting AI to improve productivity, reduce costs, and improve decision-making. All AI use must comply with data protection law, sector-specific regulations, and this policy.

Who owns the policy?

Two named people. The MD approves new tools and signs the policy. The operations lead maintains the AI tool register, runs the monthly check-in, and triages any incidents. If the firm has a designated compliance or IT lead, they can fill the operations-lead role; most 10-50 person firms do not have one.

The “all employees are responsible for using approved tools only and reporting misuse to the operations lead immediately” line goes in too. Three sentences in total cover responsibility. Anything more starts to look like an enterprise document.

How specific should the allowed and forbidden uses be?

Specific enough that an employee with a document open and an AI tool tab open can answer the question for themselves. Vague allowed-use guidance is the reason employees end up improvising the rules and getting it wrong. Each example needs to be concrete enough to be a recognisable situation. Two paragraphs, four or five examples each, is the right density.

Allowed examples: drafting and summarising routine content with human review before external use; brainstorming and idea generation with no personal or confidential data input; code review and software testing with qualified developer review; routine customer service via a clearly-labelled AI chatbot with a path to a human.

Forbidden examples: feeding client confidential data into free public AI tools; using AI as the sole basis for hiring, lending, or eligibility decisions about people; AI-generated medical advice without practitioner sign-off; undisclosed AI-generated content in contexts where disclosure is required (EU customer-facing chatbots, marketing imagery in regulated markets).

The allowed list and the forbidden list together do most of the policy’s work. Employees who read these two sections should be able to make 90 percent of their day-to-day decisions without escalating.

What is the data handling rule the policy hinges on?

Personal data and confidential information only go into tools that have a Data Processing Agreement, have committed contractually not to use data for training, and are UK GDPR compliant. In practice that means the paid commercial tier of major LLMs (ChatGPT Enterprise, Claude for Work, Gemini Business, Microsoft Copilot for Microsoft 365), not the free tier.

This single rule prevents most of the data-leakage exposure SMEs face from AI use. The Samsung 2023 incident, where employees used free ChatGPT to handle confidential semiconductor design work, illustrates exactly the failure mode this rule blocks. The fix is paid commercial subscriptions for the people who actually need them, typically £15-30 per employee per month.

A separate data classification reference (Public, Internal, Confidential, Restricted) tells employees which data tier is allowed in which tool. The classification table lives alongside the policy as a one-page reference.
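If it helps to see the classification reference in checkable form, the tier-to-tool rule can be sketched as a simple lookup. This is an illustration only: the four tier names come from the policy above, but the tool categories and their ceilings are hypothetical examples a firm would replace with its own approved list.

```python
# Illustrative sketch of the one-page data classification reference.
# Tier names match the policy; tool entries are hypothetical examples.

TIERS = ["Public", "Internal", "Confidential", "Restricted"]

# Highest data tier each tool category may receive. Free public tools
# stop at Public; tools with a DPA, training disabled, and UK GDPR
# compliance can go higher.
TOOL_CEILING = {
    "free_public_llm": "Public",
    "paid_llm_with_dpa": "Confidential",
    "approved_internal_system": "Restricted",
}

def is_allowed(tool: str, data_tier: str) -> bool:
    """True if data of this tier may go into this tool."""
    ceiling = TOOL_CEILING.get(tool)
    if ceiling is None:
        return False  # unapproved tool: forbidden by default
    return TIERS.index(data_tier) <= TIERS.index(ceiling)

print(is_allowed("free_public_llm", "Confidential"))  # False
print(is_allowed("paid_llm_with_dpa", "Internal"))    # True
```

The point of the sketch is the default: an unlisted tool is forbidden, which is the same posture the policy takes on paper.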

What does human oversight require?

Material decisions about people (hiring, lending, customer suitability, clinical recommendations) cannot be made solely on the basis of AI analysis. Human review and judgment apply. UK GDPR Article 22 makes this a legal obligation, with narrow exceptions that still require human safeguards. For routine AI output that affects external parties, the rule is review-before-external-use: someone with subject-matter expertise reads the material and signs off before it leaves the firm.

The reviewer takes responsibility for the output. The AI tool’s role is decision support; the named human is the decision maker. This separation matters most when the AI produces something that looks plausible and is wrong (a hallucinated case citation, an outdated tax rate, a wrong customer name).

The incident-reporting paragraph closes the document. Anyone who notices a leak, a wrong AI output that left the building, or a policy breach reports to the operations lead immediately. The operations lead investigates and escalates material incidents to the MD within 24 hours. The incident log is maintained and reviewed monthly.

What are the three failure modes to avoid?

Three policy failures account for most of what goes wrong at SME scale: the enterprise-template copy, the default-ban policy, and the restrictions-only policy. Each looks professional from the outside, and each creates a different kind of exposure for the firm. The pattern across all three is that the policy describes a governance posture the firm cannot actually execute, which is worse than no policy because it creates the false sense that governance is in place.

The enterprise-template copy. The firm downloads a 30-page playbook from a Big Four website, swaps “AI Ethics Committee” for “leadership team”, signs it, files it. The references to roles the firm does not have stay on the page. When something goes wrong, the gap between the document and the firm shows immediately.

The default-ban policy. The firm tells employees not to use AI tools without approval and provides no approved tools. Employees use AI anyway because the work demands it. Leadership has no visibility. Shadow AI accumulates and the policy becomes the camouflage that hides it.

The restrictions-only policy. The firm tells employees what they cannot do and gives no guidance on what good use looks like. Employees who could be using AI productively for drafting or summarising improvise the rules themselves. The policy creates fear and confusion where it should create clear capability.

What is the right length, ownership, and review cadence?

Two to three pages of policy, signed by the MD, maintained by the operations lead, reviewed annually with a monthly informal check-in to surface emerging issues. Acknowledged by all employees on adoption and on any material update. The policy lives alongside a one-page data classification reference (Public, Internal, Confidential, Restricted) and a one-page risk register. Three documents, no more than five pages in total, all maintained by the same two people.

The policy is a working document, not a compliance artefact. If it is signed once and never opened, it has not done its job. The monthly check-in is what keeps it alive: 15 minutes, the team surfaces any new tools or incidents, the operations lead notes anything that needs follow-up. The annual review is the deeper pass where the policy itself is reassessed against the firm’s actual AI use.

If you have a bought template that does not fit the firm you actually run, and you want to talk about what governance looks like at your specific scale, book a conversation.

Sources

  • ICO AI risk toolkit and guidance on AI and personal data.
  • UK Government Algorithmic Transparency Recording Standard.
  • Institute of Chartered Accountants in England and Wales (ICAEW) AI guidance.
  • Solicitors Regulation Authority guidance on AI in legal practice.
  • ICO guidance on contracts and Data Processing Agreements.
  • National Institute of Standards and Technology (2023). AI Risk Management Framework (AI RMF 1.0). Establishes measurement rigour and uncertainty quantification as core governance practice.
  • National Association of Corporate Directors (2025). AI Friend and Foe, Director's Handbook on AI Oversight. Foundational governance principles for board-level AI oversight, transparency, risk frameworks and stakeholder communication.
  • Chartered Governance Institute UK (2024). Artificial Intelligence and the Governance Professional. UK governance perspective on lawful, ethical and responsible AI use embedded within risk management frameworks.

Frequently asked questions

How long should an SME AI policy actually be?

Two to three pages. If it exceeds five pages, it is either too prescriptive for the firm to execute or it is duplicating guidance that already exists in the firm's data protection policy. The seven required sections each need one or two paragraphs, not a chapter. Reviewed annually, signed by the MD, acknowledged by all employees on adoption.

Who should own the policy at SME scale?

Two people. The MD signs it, approves new tools, and makes the formal decisions. The operations lead maintains the AI tool register, runs the monthly check-in, and triages incidents. No committee, no board pack. If the firm has a designated compliance or IT lead, that person can fill the operations-lead role. Most 10-50 person firms do not have one.

What should I copy from enterprise AI policy templates?

The conceptual structure (scope, allowed/forbidden, data handling, oversight, incident reporting) is portable. The specific role references (Chief Risk Officer, AI Ethics Committee, Model Governance Board) are not. If you copy an enterprise template wholesale and change “AI Ethics Committee” to “leadership team”, you produce a policy the firm cannot execute. Use the structure, write the content for your firm's actual size.

What is the single most important paragraph in the policy?

The data handling clause. It says: personal data and confidential information only go into tools with a Data Processing Agreement, training-disabled options, and UK GDPR compliance. In practice that means paid commercial LLM tiers, not free ones. This single rule prevents most of the data-leakage exposure SMEs face from AI use.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
