The operations lead of a 28-person marketing agency is at her standing desk on a Tuesday morning. A senior strategist messages her. “Quick one, can I use ChatGPT to summarise the 60-page market research deck the new client just sent? It would save me three hours.” The deck is the client’s, marked confidential in the engagement letter, and contains supplier names and pricing that are not public. The operations lead types: “no”. The strategist replies: “OK. Why?” The operations lead realises she can give a one-line answer or a 10-minute answer, and the firm has no shared rule. Today she has to invent one.
The conversation is the canonical SME AI governance moment. An employee with a document open and a tool tab open. A live work problem. A 30-second decision needed. Without a shared rule, the answer becomes a personal judgment call by whoever is asked, which is the slowest and least defensible way to run AI governance. The fix is a four-tier classification scheme that gives the strategist a one-line answer next time, before he has to ask.
What are the four tiers?
Public data has no confidentiality or privacy restrictions: published marketing content, public research, customer-facing descriptions. Internal data is used inside the firm but not for external distribution: internal emails, meeting notes, strategic plans, financial forecasts. Confidential data is provided by clients or third parties under contractual confidentiality: client matter notes, customer personal data, supplier contracts. Restricted data is special category personal data under UK GDPR Article 9, together with criminal offence data, which Article 10 protects to a comparable standard.
These four tiers cover the full range of data an SME handles, from public marketing copy at one end to patient health records or criminal-conviction data at the other.
Which tier of tool can process which tier of data?
The mapping is the rule. Public data goes anywhere, including free public LLMs. Internal data goes only to paid commercial tools with Data Processing Agreements and training-disabled options. Confidential data requires the third party’s explicit consent and a DPA, or stays on specialised sector tooling or on-premise systems. Restricted data typically does not leave the firm at all.
A single-page reference table operationalises this. Tool names down the side, the four data tiers across the top, a yes-or-no-or-conditional in each cell. The table sits in the firm’s shared drive alongside the AI policy. An example:
| Tool | Public | Internal | Confidential | Restricted |
|---|---|---|---|---|
| Free ChatGPT or Gemini | Yes | No | No | No |
| ChatGPT Plus or Claude Pro (paid) | Yes | Yes | No | No |
| ChatGPT Enterprise or Claude for Work | Yes | Yes | Conditional | No |
| On-premise / locally-run model | Yes | Yes | Conditional | Conditional |
| Sector vendor with DPA + client consent | Yes | Yes | Yes | No |
| Healthcare AI with HIPAA + patient consent | Yes | Yes | Yes | Conditional |
Conditional means the firm has done the specific work to make the cell a yes for that data class with that tool: client consent obtained, DPA signed, technical safeguards verified, and the use case approved by the MD.
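For a firm that wants the table machine-checkable, say as a pre-submission prompt in an internal tool, the mapping can be encoded as a plain lookup. This is an illustrative sketch only: the tool keys, tier names, and the `may_use` function are invented for the example, and the policy values simply mirror the reference table above.

```python
# Sketch of the tool-vs-data-tier reference table as a lookup.
# Tool keys and tier names are illustrative; the yes/no/conditional
# values mirror the example table in the article.

DATA_TIERS = {"public", "internal", "confidential", "restricted"}

POLICY = {
    "free_chatgpt_or_gemini":     {"public": "yes", "internal": "no",  "confidential": "no",          "restricted": "no"},
    "chatgpt_plus_or_claude_pro": {"public": "yes", "internal": "yes", "confidential": "no",          "restricted": "no"},
    "chatgpt_enterprise":         {"public": "yes", "internal": "yes", "confidential": "conditional", "restricted": "no"},
    "on_premise_model":           {"public": "yes", "internal": "yes", "confidential": "conditional", "restricted": "conditional"},
    "sector_vendor_with_dpa":     {"public": "yes", "internal": "yes", "confidential": "yes",         "restricted": "no"},
    "healthcare_ai_with_consent": {"public": "yes", "internal": "yes", "confidential": "yes",         "restricted": "conditional"},
}

def may_use(tool: str, tier: str) -> str:
    """Return 'yes', 'no', or 'conditional' for a tool and data tier.

    Unknown tools or tiers default to 'no' — the safe answer
    until someone has done the work to make the cell a yes.
    """
    if tier not in DATA_TIERS:
        return "no"
    return POLICY.get(tool, {}).get(tier, "no")
```

The strategist's question from the opening scene, "can I put this client deck into free ChatGPT?", becomes `may_use("free_chatgpt_or_gemini", "confidential")`, which returns `"no"` in 30 seconds, no operations lead required.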
Why does the Article 9 overlay matter?
UK GDPR Article 9 identifies special category personal data: health, biometric, genetic, racial or ethnic origin, political opinions, religious or philosophical beliefs, sex life and sexual orientation, and trade union membership. Criminal conviction and offence data sits under Article 10 rather than Article 9, but carries comparable restrictions. Article 9 requires a specific condition for processing this data, on top of the general lawful basis under Article 6. The threshold is materially higher than for ordinary personal data.
For SME purposes, the rule is simple. Article 9 data does not go into external AI tools unless there is a specific cleared use case with explicit consent and exceptional safeguards. Healthcare practices that handle patient health data are the most common SME context where Article 9 governs every day. HR functions handling employee health records and diversity data are the second most common. Both contexts deserve their own paragraph in the firm’s AI policy.
How does this work for a legal firm?
A solicitor receives a client matter involving a complex contract dispute. The engagement letter permits the firm to use technology tools for efficiency. The matter includes hundreds of pages of emails, contracts, and meeting notes. All of it is Confidential data under the four-tier scheme. The solicitor wants to use AI to extract key contract terms.
The reference table tells the solicitor: free ChatGPT is forbidden. ChatGPT Plus is forbidden. ChatGPT Enterprise might be permitted if the firm has obtained explicit client consent or the engagement letter covers AI use, and has a signed DPA with OpenAI covering the Enterprise deployment. A specialist legal-tech vendor with a DPA and built-in client-confidentiality features would be the cleaner choice. The decision takes 30 seconds with the table, where it would have taken a phone call without it.
How does this work for a healthcare practice?
A GP practice is exploring an AI tool that drafts clinical notes from clinician dictation. Patient health data is Restricted under Article 9. The reference table tells the practice manager: external cloud AI tools are forbidden by default for this data class. The exceptions involve specialist healthcare AI products with HIPAA compliance (or UK GDPR equivalent), explicit patient consent, strong technical safeguards, and clinical sign-off on the deployment.
The practice manager runs the four-question due-diligence pass on the candidate vendor: medical device classification, clinical safety evidence, patient consent process, and contractual terms. The decision takes a week of homework. The reference table is what tells the practice the homework needs doing.
How does this work for a marketing agency or e-commerce firm?
A marketing agency uses AI to generate social media graphics and copy for client campaigns. Client briefs and strategy documents are Confidential. The reference table tells the agency: paid commercial AI tools with DPAs are the floor. AI-generated graphics that go into client campaigns are labelled or disclosed if the campaign context requires it. Client confidential information stays out of free public tools.
An e-commerce business uses AI to personalise product recommendations from customer purchase history. Customer data is Confidential under the scheme but stays outside Article 9. The privacy notice has been updated to disclose AI processing, the lawful basis is documented (consent or legitimate interest), and the AI vendor has a DPA. The reference table is what tells the firm to complete these three steps before deployment rather than discovering the gap afterwards.
If the firm you run is currently answering “can I put this into ChatGPT?” with a 10-minute conversation each time, and you want to talk about getting that to a 30-second reference table, book a conversation.