The owner of a twelve-person firm pulled me aside at the end of a workshop last month with a question she had been carrying. Her team had quietly gone from three people using ChatGPT once a week to a dozen people using it daily, across sales drafts, recruitment notes, supplier emails, financial commentary, and the occasional bit of client work. Nobody had decided anything. There was no rule about what was OK to paste and what was not. She knew, in the way owners know, that something was probably going wrong somewhere in there. She did not know what to do about it on Monday morning.
The answer that worked for her firm is the same one that works for many owner-led businesses at this scale. Not a policy document. Not an enterprise governance framework. A one-page written list of categories of information that nobody on the team should paste into a general AI tool, ever. Specific, named, signed off, ninety minutes to draft, pinned where the team actually works. It is the highest-yield AI governance artefact a small business can produce, and a lot of them have not produced one.
What is a never-paste-this list?
A never-paste-this list is a one-page written document naming the categories of information that nobody on the team is allowed to enter into a general AI tool, ever. Specific named categories, not abstract principles. Customer personal data. Client-confidential strategy. Financial figures before publication. Employee HR records. Login credentials of any kind. Each category sits with a one-line reason and a worked example of the safe alternative.
The list lives in the employee handbook, the onboarding pack, and somewhere the team sees it at the moment they are about to use an AI tool. It is signed off by the owner or a named senior person, reviewed quarterly, and grown as new tools and use cases appear. The discipline is in the specificity. “Don’t paste confidential stuff” is not a list. “Customer first and last names, customer email addresses, customer dates of birth, customer reference numbers” is.
Why does it matter for your business?
It matters because the alternative is what many SMEs have now: staff making case-by-case judgements under time pressure, with no shared reference to fall back on. Cyberhaven’s 2026 AI Adoption and Risk Report, based on real data flows across 222 companies, found that 39.7% of AI interactions involve sensitive data, and more than half of all Claude usage happens through personal accounts that bypass company controls and central logging.
The Samsung incident in April 2023 is the headline version of where this leads. Three separate exposures of confidential semiconductor code within twenty days of allowing staff ChatGPT use, with no written guidance on what could be pasted. The everyday SME version is quieter and more frequent. A customer email gets pasted to draft a reply. A draft contract gets pasted to summarise the terms. None of it shows up in any log the firm can see.
The other reason it matters is the accountability principle. UK GDPR Article 5(2) requires that you be able to demonstrate compliance, and the ICO is explicit that accountability is not a box-ticking exercise. If the regulator asks what measures you had in place to prevent customer data being pasted into a third-party AI tool, “we told the team verbally” is not a defensible answer. A written, signed, dated list is.
What belongs on the list?
Five categories belong on every SME version regardless of sector. Customer personal data, meaning names, email addresses, phone numbers, addresses, dates of birth, payment information, and reference numbers linked to a named individual. Client-confidential strategy, meaning contracts, pricing, supplier negotiations, M&A activity, and anything you are contractually required to keep confidential. Financial figures before publication, meaning revenue forecasts, management accounts, payroll figures, banking details, supplier payment terms, and investment round details.
The fourth is employee personal information, anything that ties a named employee to performance, disciplinary, health, or diversity data. The fifth is login credentials of any kind, including passwords, API keys, authentication tokens, security question answers, and MFA codes. For each category, the written list carries a one-line reason and a worked example of the safe alternative. Customer personal data is the commonest raw material for identity theft, so the alternative is to anonymise before pasting: “Customer A asked about feature X” rather than the named version with email and date of birth attached. Login credentials never go near an AI tool because an API key in a chat log is functionally a published key.
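The anonymise-before-pasting step can even be scripted. A minimal sketch, assuming three illustrative patterns; a real version would be tuned to your firm’s own reference-number formats, and personal names still need a human eye before anything is pasted:

```python
import re

# Hypothetical patterns for illustration only. Note what this does NOT
# catch: personal names, addresses, and dates of birth still need a
# human check before pasting.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),      # email addresses
    (re.compile(r"\b(?:\+44|0)\d{9,10}\b"), "[PHONE]"),       # UK-style phone numbers
    (re.compile(r"\bCUST-\d{4,}\b"), "[CUSTOMER-REF]"),       # assumed customer ref format
]

def redact(text: str) -> str:
    """Replace obvious personal identifiers with placeholders before pasting."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Reply to jane.doe@example.com, ref CUST-00417, called on 07700900123."))
# → Reply to [EMAIL], ref [CUSTOMER-REF], called on [PHONE].
```

The point is not the script itself but the habit it encodes: strip the identifiers first, then paste the anonymised version.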
Underneath the five universal categories sits the sector-specific section. If you hold NHS patient data, the DSP Toolkit applies and patient information must never be pasted into public AI tools. If you serve FCA-regulated clients, FCA-classified information has redistribution rules of its own. If you are a law firm, pasting privileged client communications into a public AI platform risks waiving privilege and may breach your professional conduct rules. The four-tier data classification model is the conceptual underpinning, and the data-flow piece covers what actually happens when the paste button gets pressed.
Where do you put the list so people actually see it?
You put it in three or four places, and at least one of them has to interrupt the moment of risk. The employee handbook is the official home, a numbered section under data protection and AI tool use, acknowledged by new hires during onboarding. The onboarding pack carries a one-page summary, read before day one. The team-facing version is a pinned Slack post or a wiki page in Notion with version history.
The placement that does the heavy lifting is the moment-of-risk interrupt. If you deploy a basic Data Loss Prevention tool such as Microsoft Purview, you can configure a browser pop-up that fires when staff visit known public AI tools. The pop-up reminds them of the list and asks them to acknowledge before continuing. That single piece of friction at the moment of decision changes behaviour more than any number of training videos, because it lands in the second when the paste is about to happen rather than three weeks earlier in a meeting room. For a firm that cannot deploy DLP yet, a laminated card next to each desk and a fortnightly calendar reminder in the team meeting does some of the same work.
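A firm without DLP can still script a crude version of that pop-up for its most dangerous category. A hypothetical sketch, assuming a few credential patterns drawn from the never-paste list; the patterns are illustrative, not exhaustive, and no substitute for a real DLP deployment:

```python
import re

# Credential-shaped strings that should trigger a warning before any
# text leaves the firm. Illustrative only: real API keys and tokens
# come in many more formats than these three.
CREDENTIAL_PATTERNS = {
    "password assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
    "API key": re.compile(r"(?i)\bapi[_-]?key\s*[:=]\s*\S+"),
    "bearer token": re.compile(r"(?i)\bbearer\s+[A-Za-z0-9._-]{20,}"),
}

def credential_warnings(text: str) -> list[str]:
    """Return the names of any credential patterns found in the text."""
    return [name for name, pattern in CREDENTIAL_PATTERNS.items()
            if pattern.search(text)]

print(credential_warnings("Here is the config: api_key = sk-abc123"))
# → ['API key']
```

Wired into whatever the team already uses, even a warning this simple puts the list in front of someone at the moment the paste is about to happen.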
What changes when the list grows over time?
The list grows because the tools the firm uses change, the use cases evolve, and the regulatory landscape moves. A quarterly thirty-minute review is the proportionate cadence for a 5 to 50 person firm. Three questions in that meeting: what new AI tools came into the firm, what near-misses have we noticed, and what regulatory or sector updates change anything in the existing list. Write down the answers and date the update.
A worked picture across four quarters. The Q1 list has the five universal categories and one sector addition. In Q2, someone asks whether ChatGPT can draft a prospect proposal that includes the prospect’s name, you clarify that prospect contact details count as customer personal data, and the list gets a worked example. In Q3, the firm adopts a new code-completion tool whose vendor cannot provide a Data Processing Agreement, so you add a category for source code and a rule about checking DPA availability before any new tool. In Q4, the operations lead wants to analyse recruitment patterns with AI, so you tighten the employee personal information section with explicit guidance on workforce demographic data. By Q1 the following year the list has grown from six categories to seven, and the firm has four documented quarterly reviews evidencing ongoing accountability.
This post sits inside a wider AI risk and governance cluster for SMEs. For the conceptual underpinning, read the four-tier data classification model. For why blanket bans on AI tools fail, see why default-ban on AI tools backfires. For the minimum policy the never-paste list operationalises, see the minimum viable AI policy for a small business. The never-paste list is the operational sharp end of governance at SME scale, cheap to produce, fast to circulate, and visible at the moment it has to work.
If your firm is where my workshop participant was, with AI use spreading faster than the rules to govern it, and you want to talk through what the proportionate version looks like for your firm, book a conversation.



