Picture a forty-person professional services firm. The founder is at the Tuesday standup when one of the analysts mentions, casually, that ChatGPT has been doing the first draft of every client brief for the last six months. Three of the analysts, all on personal accounts. His first thought is to call legal. His better thought, after he sleeps on it, is the simpler one: why didn’t they use the enterprise tool the firm pays for?
The answer is short, and it lands harder than the breach risk. The official tool is gated behind a request form, sits two clicks deeper than the browser, and is materially worse at the actual task. The analysts didn’t break the rule because they’re insubordinate. They broke it because the rule made the work slower.
Why is shadow AI now the default?
Shadow AI stopped being an edge case sometime around 2024. In 2023, sensitive data made up around eleven percent of what employees pasted into ChatGPT. By 2025 that number was thirty-five percent. The behaviour scaled fast. The governance didn’t. Most firms still write policy as if shadow AI is something a few engineers do at the margins, when the data shows it’s now what most knowledge workers do most days.
The categories of data being shared are not abstract. Customer information. Internal financials. Source code. Strategic plans. Employee records. Legal documents. Client communications. The 225,000 OpenAI credentials for sale on dark-web markets in 2025, harvested by infostealers from compromised employee devices, sit on top of that. So do the IBM figures showing twenty percent of organisations had a confirmed shadow-AI-related breach last year.
The numbers matter, but the headline ones aren’t the most interesting. The interesting one is the curve. Eleven to thirty-five percent in two years, on the same workforce, with broadly the same tooling. That tells you something about workflow, not just security. Your team has voted with their browser tabs that AI helps them do their job, and they did it without asking, because asking takes longer than just doing.
Treating that as a governance failure is technically true and practically useless. The governance was always going to fail against that level of behavioural shift, because policy alone can’t out-compete a tool that delivers a better result in twenty seconds for free. The frame that fits the situation is product fit. Your official AI offering is competing for attention against ChatGPT, Claude, and Perplexity, and right now it’s losing on speed, friction, and quality. That’s a different problem to solve.
What is your team actually telling you?
When an employee chooses an unsanctioned tool over the one you provide, read it as product feedback. The pattern of where shadow AI is heaviest reveals the workflows that matter to the team and the friction your governance has quietly installed. The unofficial path was faster, or simpler, or better-fitting than the official one. That’s information about your tools.
The freeCodeCamp piece on this puts it cleanly. When the gap between the official tools and the external ones gets too large, employees choose speed. That’s optimisation, not insubordination. Your team is voting on the user experience of your governance, and the votes are clear.
The Hacker News thread on shadow AI a few months ago had a Ferrari-and-Prius metaphor that’s stuck with me. The Ferrari is what your team would drive if you let them. The Prius is the gated enterprise tool with the request form, the audit trail, and the mandatory training module. Both will get them to the meeting. One will get them there happy. Right now you’re paying for the Prius, and they’re driving their own Ferrari to work.
If you’re a founder and you discover this in your business, the framing matters. Treating the analyst who used personal ChatGPT as a problem solves nothing, because the next analyst will do exactly the same when faced with the same friction. Treating the friction as the problem and the analyst as the messenger gives you both a fix and a reason for the team to trust the next official tool you roll out. You either lose institutional learning to a disciplinary process, or you gain the user research you’d otherwise have had to commission.
Where do the risks actually live?
The risks are real. Twenty percent of organisations had a confirmed shadow-AI-related breach last year. Customer data, source code, internal financial figures, strategic documents: all of it lands in third-party model logs when staff paste it in. The risks need to be taken seriously. But where the response lands matters more than how strict the policy reads. Policy without product fit just pushes the behaviour further from view, and the risks get worse from there.
The risks are not evenly distributed. They land hardest in regulated industries, where a paste into ChatGPT can be a personal liability for the practitioner, not just a firm-level compliance issue. In healthcare, accounting, and law, the partner who pasted the client document carries the consequences along with the firm. That changes the calculation. It also changes the response, because policy in those industries needs to recognise that the practitioner is already exposed and the protection has to be tool-shaped rather than document-shaped.
In other settings the risks land more diffusely. A startup founder reading the IBM number might assume the breach is a public-facing leak. Most of them aren’t. They’re competitive intelligence going to vendors, internal financial figures landing in training data, customer information being processed by a tool that has a different retention policy than the firm’s own. Real, but quieter than a headline breach. The fix is to give the team a sanctioned tool that handles the routine cases inside your own walls, plus a short conversation about which categories of data have to stay there. Tighter policy alone does not get there.
How do you respond without making it worse?
The honest response runs in four moves. First, find out what’s actually being used. Second, provide the enterprise version of whatever the team is reaching for. Third, close the speed gap by removing approval-form friction for the routine cases. Fourth, train the team on what data goes where, so the few high-stakes cases get handled separately. Bans without alternatives just relocate the behaviour to whatever the next employee can find for free.
The first move is harder than it sounds. Most audits will tell you what’s allowed and not what’s actually happening. An anonymous survey usually gets you closer to the truth than a tooling audit, because people answer honestly when there’s no name on it. Ask which AI tools they’ve used in the last month for work, what they used them for, and what they tried before settling on the unofficial one. The answers shape the rest of the response.
The second move sounds expensive. It usually isn’t. Most enterprise versions of the tools the team is already using cost less than the legal and security work the breach risk implies. The bigger lift is the third move, closing the speed gap. That means giving routine prompts the same friction as opening a browser tab, with logging and access controls invisible to the user. If the official tool requires a request form, a manager approval, and a separate login, you have built a Prius in a market that wants Ferraris. The team will keep finding Ferraris.
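If you want a concrete picture of what “invisible to the user” can mean, here is a minimal sketch. It assumes a small Python wrapper around the openai SDK sitting between the team and the model; the function name, the log file, and the model choice are illustrative, not a recommendation of any particular vendor or setup. The point is the shape: one call for the user, with the audit trail handled behind it.

```python
# Minimal sketch of an internal AI gateway: one call for the user,
# logging handled behind it. Assumes the openai Python SDK (v1+) and
# an OPENAI_API_KEY in the environment; the audit-log destination and
# the model name are placeholders, not a specific product.
import datetime
import json
from openai import OpenAI

client = OpenAI()  # the firm's enterprise API key, not a personal account

def ask(user: str, prompt: str, model: str = "gpt-4o") -> str:
    """Single entry point for routine prompts: no form, no approval step."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content

    # The audit trail the user never sees: who asked what, and when.
    with open("ai_audit.log", "a") as log:
        log.write(json.dumps({
            "user": user,
            "model": model,
            "prompt": prompt,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }) + "\n")

    return answer
```

In practice that wrapper lives wherever the team already works, whether that is a Slack command, a browser shortcut, or an internal page, and the logging grows into proper access controls. The sketch is only there to show that the oversight and the friction do not have to live in the same place.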
The fourth move is the easiest to skip and the one that prevents the worst outcomes. Twenty minutes of training on what categories of data go where covers more risk than fifty pages of policy. The hard cases are usually well-understood once you name them. Customer PII, regulated client data, source code, board-level financial documents, M&A discussions. Those go through the sanctioned path. Most other things don’t need that level of friction, and pretending they do is what created the shadow in the first place.
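To make the fourth move concrete, here is an equally rough sketch of the routing decision. The patterns and category names are assumptions for the sake of illustration; a real deployment would lean on a proper data-loss-prevention or classification tool rather than a handful of regexes. What it shows is that the high-stakes categories can be named in code as easily as in a policy document, and naming them is what lets everything else stay low-friction.

```python
# Illustrative only: a crude screen for the categories that must stay on
# the sanctioned path. The patterns and category names here are
# assumptions for the example, not a working DLP rule set.
import re

HIGH_STAKES_PATTERNS = {
    "customer_pii": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),      # email addresses
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),           # card-number-like runs
    "board_financials": re.compile(r"\b(EBITDA|board pack|M&A)\b", re.I),
}

def needs_sanctioned_path(text: str) -> list[str]:
    """Return the high-stakes categories a prompt appears to touch."""
    return [name for name, pattern in HIGH_STAKES_PATTERNS.items() if pattern.search(text)]

flags = needs_sanctioned_path("Draft a note to jane@client.com about the M&A timeline")
if flags:
    print("Route via the sanctioned tool:", ", ".join(flags))
else:
    print("Routine prompt, no extra friction needed")
```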
Six months later
The founder I started with is now six months past the discovery, and the picture has changed. The official tool moved to the front of the team’s workflow with the approval-form friction stripped out. The analysts who’d been using personal accounts now use the enterprise version. One of them runs the internal AI working group. The breach risk dropped. The signal got read.
If you suspect this is happening in your business, you’re almost certainly right, and the question to ask yourself first is what your team is trying to tell you. Book a conversation if you’d rather not figure that out alone.