Picture the MD of a 30-person agency on a Monday morning. The agency adopted a “no AI tools without approval” policy three months ago. Today, in a casual hallway conversation, a project manager mentions she used ChatGPT to draft a tricky client email last week. “It’s fine, I didn’t put any client info in.” The MD pauses. The policy says no AI without approval. The project manager hadn’t asked. The MD now has to decide whether the policy was followed, broken, or simply irrelevant.
The conversation tells the MD something the policy hadn’t. The policy described a posture the firm could not actually enforce, and the PM was operating in the gap where the policy had failed to give her practical guidance. She was solving a work problem with the tool that was nearest to hand.
Why does default-ban feel like the right answer?
The default-ban instinct is sensible at first glance. The MD has read enterprise security guidance, has heard about the Samsung ChatGPT leak, and concludes that the safe posture is to disallow what has not been vetted. At a Fortune 500 with a procurement team, an IT department, and a CISO, this works because there are people whose job is to enforce the rule. The framework was designed for that context.
At 30 staff with no IT lead, the same posture has no enforcement layer. The policy is a sentence in an employee handbook that nobody returns to. Employees who run into a work problem an AI tool would solve do not have an approved alternative, do not have a process to ask for one, and do not have time to wait for an answer. They use the tool.
What does the data on shadow AI actually say?
Microsoft’s Work Trend Index, Salesforce’s Generative AI Snapshot, ISACA’s State of Cybersecurity research, Cyberhaven’s user behaviour data, and Gartner’s surveys of IT and business leaders all reach the same finding: employees in knowledge-work roles adopt AI tools regardless of organisational policy when those tools improve their output. SMEs typically have higher rates of unmanaged AI adoption than enterprises, because SME policy enforcement is lighter and the demands of the work more immediate.
The pattern is not malicious. The PM in the hallway was not trying to circumvent the firm’s policy. She was solving a work problem with the tool that was available, the same way she might have asked a colleague for a draft if the tool had not existed. The policy did not give her a usable alternative.
Where does default-ban actually backfire?
Three failures show up at SME scale. First, employees using AI under a ban do not tell anyone, so leadership has no visibility into which tools the firm’s data is going into. Second, the ban becomes the reason employees feel they cannot ask for an approved tool, which kills productive adoption. Third, when something eventually leaks, the firm is left holding an unenforced compliance document, which is worse than no policy at all because it created a false sense that governance was in place.
The Samsung 2023 leak, often cited as a cautionary tale, illustrates this exact failure mode at enterprise scale. The fix Samsung implemented after the incident was a paid commercial tier with data privacy commitments plus training. The remedy that worked was an enforceable allowed-use position with the right tools provided, rather than a tighter version of the ban that had failed.
What does default-allow with guardrails look like?
The starting position is that employees may use AI tools for the categories named in the policy, with the data restrictions named in the policy. The categories typically include drafting, summarising, brainstorming, code review, and routine customer service via clearly labelled chatbots. Each category carries a named guardrail (review before external use, no personal data input, qualified-developer review of code, etc.).
A small number of clearly named forbidden categories sit alongside the allowed list. Feeding client confidential data into free public AI tools is the canonical example. AI as the sole basis for hiring, lending, or eligibility decisions about people is another. Undisclosed AI-generated content in contexts where disclosure is legally required is a third. Three or four forbidden categories cover most of the regulatory and reputational exposure SMEs face.
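The allowed-with-guardrails plus named-forbidden structure described above can be sketched as data rather than prose. This is a minimal illustration in Python; the category names and guardrails are hypothetical placeholders, not a legal template, and a real policy lives in the handbook, not in code:

```python
# Illustrative sketch of a default-allow policy structure.
# Category names and guardrail wording are assumptions for this example.

ALLOWED = {
    "drafting": "human review before anything leaves the firm",
    "summarising": "no personal data in the prompt",
    "brainstorming": "outputs treated as suggestions, not decisions",
    "code_review": "a qualified developer reviews before merge",
    "customer_service": "chatbot clearly labelled as AI",
}

FORBIDDEN = {
    "client_confidential_into_free_tools",
    "sole_basis_for_hiring_lending_eligibility",
    "undisclosed_ai_where_disclosure_required",
}

def check_use(category: str) -> str:
    """Return the guardrail for a category, or flag it as forbidden."""
    if category in FORBIDDEN:
        return "forbidden"
    # Default-allow: an unlisted category is permitted pending review,
    # which is the posture's whole point -- it surfaces new uses
    # instead of driving them underground.
    return ALLOWED.get(category, "allowed pending review")

print(check_use("drafting"))
print(check_use("client_confidential_into_free_tools"))
```

The design point is that the forbidden set is small and explicit while the default branch allows: inverting that default branch is exactly what turns the policy into the unenforceable ban the article describes.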
The trade is real. Default-allow requires more active management than default-ban: the MD has to look at how AI is actually being used, intervene where risks emerge, and update the policy as new tools enter the firm. What the firm gives up is compliance theatre; what it gains is leadership visibility. At SME scale, visibility is the higher-value position, because the firm can only govern what it can see.
How do you surface the shadow AI you already have?
The amnesty-plus-survey pattern is documented across multiple SME case studies. The MD circulates a confidential email or short form with a two-week response window. The message is explicit: there are no consequences for past use; the firm is developing an official AI policy and wants to understand current adoption to inform it. Employees are asked which AI tools they use, for what purposes, and what value they get from them.
The results typically cluster around four use cases: drafting, summarising, code assistance, and brainstorming. The data informs the official policy. The firm decides which tools to officially approve and pay for, often arriving at a paid commercial tier of ChatGPT or Claude for the people who use it most. The mechanism only works if the no-consequences commitment is genuine. If employees suspect the survey is a trap, they will withhold information, and the policy will be built on the wrong picture of what is actually happening.
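As a toy illustration of the analysis step, tallying survey responses shows where use clusters and which paid tool tier is worth considering. The response data below is invented for the example:

```python
from collections import Counter

# Hypothetical survey responses: one (tool, purpose) pair per answer.
responses = [
    ("ChatGPT", "drafting"), ("ChatGPT", "summarising"),
    ("Claude", "drafting"), ("Copilot", "code assistance"),
    ("ChatGPT", "brainstorming"), ("Claude", "summarising"),
]

# The most common purposes suggest which categories the official
# policy should name; the most common tools suggest which paid
# commercial tiers to evaluate.
by_purpose = Counter(purpose for _, purpose in responses)
by_tool = Counter(tool for tool, _ in responses)

print(by_purpose.most_common())
print(by_tool.most_common(1))  # the leading tool candidate
```

A spreadsheet does the same job at 30 staff; the point is simply that the survey output maps directly onto the policy's allowed categories and tool-approval decisions.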
Where is default-ban genuinely the right posture?
For specific data categories in regulated sectors, default-ban makes sense as the rule for that category. SRA-regulated firms should default-ban free public AI tools for client matter data, while default-allowing AI for general business tasks. Healthcare practices default-ban external AI for patient records under UK GDPR Article 9. FCA-regulated firms default-ban AI as the sole basis for suitability or credit decisions.
These are targeted bans inside a default-allow framework. The firm-wide posture stays default-allow. The forbidden categories are named, specific, and tied to the data class or decision type that triggers the regulatory rule. Each sector overlay adds one or two categories to the firm’s general allowed/forbidden list, rather than replacing the framework wholesale.
If the firm you run has a default-ban policy that is being quietly worked around, and you want to talk about how to surface what’s already happening and build governance that fits, book a conversation.