Your minimum viable AI policy as a small business

TL;DR

A minimum viable AI policy for a 5 to 50 person business is two pages, five sections, drafted in a single ninety-minute workshop with the owner and one operations lead. The five sections are acceptable use, data handling, client work disclosure, named owner and review cadence, and escalation when something goes wrong. Tool-specific instructions, prompt libraries and review checklists stay out of the policy and live in operational practice instead.

Key takeaways

- The minimum viable AI policy has five sections: acceptable use, data handling, client disclosure, named owner and review cadence, and escalation. Two pages, principle-based, signed by the owner.
- Tool-specific instructions, prompt libraries, review checklists and project carve-outs stay out of the policy and sit in operational practice, which can change without a policy revision.
- The drafting process that actually finishes is ninety minutes with the owner and one operations lead, plus one round of staff input on the draft.
- Annual review tied to a fixed calendar event, plus interim updates triggered by new tool adoption, regulatory change or an incident that revealed a gap. In a typical year the policy needs only minor tweaks.
- The data handling section is where the SME version earns its keep. Restricted data (client personal data, regulated data, IP not belonging to the firm) never goes into commercial AI tools, no exceptions.

The owner of a fourteen-person consultancy has been meaning to write an AI policy for nine months. She has bookmarked four templates from professional bodies. One is twenty-six pages. Two assume the firm has a Head of Information Security. One opens with a section on the firm’s AI Ethics Committee. None of them are anywhere near what her firm needs, and she knows it. So the bookmarks sit there, the team carries on using ChatGPT and Claude every day, and the policy folder stays empty.

This is where many owner-led SMEs land. The published AI policy material is written for organisations with HR teams, IT functions and legal counsel. The version a 5 to 50 person business actually needs is two pages, plain language, five sections, drafted in a single ninety-minute workshop. Shorter than the typical owner expects when they go looking for templates, and longer than the typical SME currently has written down.

What is a minimum viable AI policy?

A minimum viable AI policy is the shortest written document that still does the job of policy. For a 5 to 50 person business that comes out at two pages, five sections, plain language, signed by the owner. It says what is permitted, what data must never go into AI tools, when use is disclosed to clients, who owns the document, and what happens when something goes wrong.

Anything beyond those five questions belongs in operational practice, not in the policy. The job of the document is to be short enough for the team to actually read, clear enough to be defensible to the ICO and to an enterprise client procurement contact, and stable enough that it does not need rewriting every quarter. Below that threshold there is no governance. Above it there is paperwork no one opens.

Why does it matter for your business?

UK GDPR applies regardless of headcount, the Information Commissioner’s Office expects written accountability for how personal data is processed, and enterprise clients now ask for the document in procurement. Without one, every employee makes private judgements about whether a piece of client data is safe to paste into a chatbot, the answer to a client question is improvised, and the regulator’s view of the firm rests on whatever the owner remembers on the day.

A written policy fixes the worst of that for a small fraction of the time enterprise templates demand. It also surfaces the conversations the team has been quietly avoiding, about disclosure to clients, about which tier of which tool is in use, and about who decides when a new AI tool gets adopted. Those conversations are usually short. The relief of having had them is large.

Where will you actually meet it?

You will meet the policy in three places. When a client asks for it in a supplier questionnaire or contract amendment, which is now a routine ask in enterprise procurement. When a staff member asks whether they can use a new AI tool for client work and there is no agreed answer. When something goes wrong and the question becomes what the firm’s stated position was.

The five sections cover acceptable use (which tools, which tasks), data handling (which categories of data can and cannot go into AI tools), client disclosure (when and how the client is told), named owner and review cadence, and escalation when something goes wrong. Tool-specific instructions, prompt libraries and review checklists sit outside the policy in operational practice, where they change without needing a policy revision. That split is the discipline that keeps the policy short and the team reading it.

The data handling section earns more of its keep than the other four put together. Three data categories usually suffice. Public or firm-created data can go into commercial AI tools without restriction. Client data with written client authorisation can go in. Restricted data, which is the catch-all for personal data, financial information, IP that does not belong to the firm and anything covered by an NDA, never goes in, with no employee discretion. That single rule prevents most of the data-leakage exposure an SME faces, and it is the rule the ICO will check first.

When to ask vs when to ignore

Ask “is this a policy question or an operational question?” every time something new lands. New tool adoption is operational if the tool fits within the existing acceptable use and data handling categories. It becomes a policy question only when it does not fit, for example the team starts using AI for video editing when the policy only covers text and image. A new prompt template, a review checklist, a tool configuration guide, all operational.

Ignore the urge to put tool names and prompt examples in the policy. They date quickly, they make the document longer than the team will read, and they pull the owner into approving operational changes that should not need her attention. Ignore the urge to publish the policy as a downloadable template for clients. The value is in your firm having one, not in productising it. And ignore enterprise frameworks that try to add an AI Ethics Committee, a Model Governance Board or a Chief AI Officer to a 14-person firm. The roles do not exist and pretending they do produces a policy the firm cannot execute.

The drafting process worth running is one ninety-minute workshop with the owner and one operations lead, plus a second pass after staff input on the draft. The workshop walks through each of the five sections by asking direct questions. What tools is the team currently using and which would you stop. What is the most sensitive data in the business and how would you feel if it landed in a third-party AI tool. Do your clients know AI is being used in their work and should they. Who in the firm owns this document and when do you review it. What is the protocol when something goes wrong. Ninety minutes of those questions, with someone taking notes, is enough to draft the policy. The owner and operations lead then redline once, the team gets one round of input, and the document is signed.

Three close cousins sit alongside the policy. The proportionate AI risk register is a one-page spreadsheet listing the firm’s actual AI use cases with risk, likelihood, impact and named owner per row. The audit trail an SME actually needs is the incident log plus the quarterly review note. The conversation with staff about AI use is what turns the policy from a signed document into actual behaviour on the ground.

What ties the four together is the same discipline. The policy stays short, the register stays live, the log stays honest, the conversation stays open. A 5 to 50 person firm that does those four things has more practical AI governance than many enterprise frameworks deliver on paper, and the owner has her evenings back.

If you want a peer to sense-check what you have written, or to facilitate the ninety-minute workshop with you and your operations lead, book a conversation.

Sources

- Information Commissioner's Office. Guidance on AI and data protection. The UK accountability framework SMEs operate under when AI processes personal data. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/
- Information Commissioner's Office. How to use AI and personal data appropriately and lawfully. Practical ICO guide referenced by the policy data handling section. https://ico.org.uk/media2/migrated/4022261/how-to-use-ai-and-personal-data.pdf
- Acas. AI at work guidance. The UK workplace authority on AI implementation, employee engagement and transparency that shapes the disclosure section. https://www.acas.org.uk/advice/ai-at-work
- CIPD. Preparing your organisation for AI use. UK research and guidance on AI adoption in smaller organisations and where written policy currently lags actual tool use. https://www.cipd.org/en/knowledge/guides/preparing-organisation-AI-use/
- National Institute of Standards and Technology. AI Risk Management Framework 1.0 (2023). The four-function structure (govern, map, measure, manage) the SME policy compresses into five readable sections. https://www.nist.gov/itl/ai-risk-management-framework
- ISO/IEC 42001:2023 Information technology, Artificial intelligence, Management system. International standard on integrating AI governance into existing management processes rather than running it as a separate function. https://www.iso.org/standard/81230.html
- Lewis Silkin. Artificial intelligence and employment law guidance. UK employment-law specialist input on AI policy obligations for smaller employers. https://www.lewissilkin.com/insights/2024/05/artificial-intelligence-and-employment-law
- UK Government. A pro-innovation approach to AI regulation. The UK sectoral framework that determines which regulator (ICO, FCA, SRA, etc.) shapes the policy's data handling and disclosure language. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach
- International Association of Privacy Professionals. AI governance and risk management resource hub. Practical privacy framing referenced by the data handling section's restricted-data category. https://iapp.org/resources/article/ai-governance-and-risk-management/
- McKinsey. The state of AI report. Annual industry research documenting the gap between SME AI tool adoption and formal policy development. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai

Frequently asked questions

How is this different from the seven-section AI policy template you have already written about?

The earlier post named the seven sections that need to be on the page. This one sharpens the line between what belongs in the policy and what belongs in operational practice, and describes the ninety-minute drafting workshop that produces a policy the team will actually read. The seven-section structure still works. What changes here is the discipline of keeping the policy short by moving tool-specific detail out of it.

Do I really need a written AI policy if I am a 12-person firm?

Yes. UK GDPR applies regardless of headcount, the ICO expects accountability documentation, and enterprise clients now ask for it in procurement. Without a written policy, staff make individual judgements about client data and AI tools and you have no way of knowing what those judgements are. Two pages, signed by you, takes care of the obligation without creating a compliance team you do not have.

Can I just download a template and adapt it?

You can use a template as a checklist, not as a finished document. Generic templates assume an IT function, an HR department and a multi-level approval chain that a 5 to 50 person firm does not have. Adopting one wholesale produces a policy the team treats as imposed bureaucracy. The ninety-minute workshop process produces a shorter and more workable document because it reflects what your firm actually does.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
