Your team is using AI you didn't sanction. Here's what that actually tells you.

TL;DR

When your team uses AI tools you didn't sanction, read it as product feedback about your official tooling. Respond by finding out what's actually being used, providing the enterprise version, closing the speed gap, and training the team on which data has to stay inside your walls.

Key takeaways

- Shadow AI use scaled from around eleven percent of employee ChatGPT inputs in 2023 to around thirty-five percent by 2025. Policy alone won't out-compete a tool that delivers a better result in twenty seconds for free.
- The pattern of where shadow AI is heaviest is product feedback. It tells you which workflows matter to the team and where your sanctioned tools have built friction the unofficial ones haven't.
- The risks are real, especially in regulated industries where the practitioner carries personal liability. The fix is tool-shaped: provide the sanctioned alternative, then make the data-handling rules concrete.
- The four-move response: discover what's actually being used (anonymous survey beats audit), provide the enterprise version, close the speed gap on the routine cases, train the team on which categories of data have to stay inside your walls.

Picture a forty-person professional services firm. The founder is at the Tuesday standup when one of the analysts mentions, casually, that ChatGPT has been doing the first draft of every client brief for the last six months. Three of them, on personal accounts. His first thought is to call legal. His better thought, after he sleeps on it, is the simpler one: why didn’t they use the enterprise tool the firm pays for?

The answer is short, and it lands harder than the breach risk. The official tool is gated behind a request form, sits two clicks deeper than the browser, and is materially worse at the actual task. The analysts didn’t break the rule because they’re insubordinate. They broke it because the rule made the work slower.

Why is shadow AI now the default?

Shadow AI stopped being an edge case sometime around 2024. In 2023, sensitive data made up around eleven percent of what employees pasted into ChatGPT. By 2025 that number was thirty-five percent. The behaviour scaled fast. The governance didn’t. Most firms still write policy as if shadow AI is something a few engineers do at the margins, when the data shows it’s now what most knowledge workers do most days.

The categories of data being shared are not abstract. Customer information. Internal financials. Source code. Strategic plans. Employee records. Legal documents. Client communications. The 225,000 OpenAI credentials for sale on dark-web markets in 2025, harvested by infostealers from compromised employee devices, sit on top of that. So do the IBM figures showing twenty percent of organisations had a confirmed shadow-AI-related breach last year.

The numbers matter, but the headline ones aren’t the most interesting. The interesting one is the curve. Eleven to thirty-five percent in two years, on the same workforce, with broadly the same tooling. That tells you something about workflow, not just security. Your team has voted with their browser tabs that AI helps them do their job, and they did it without asking, because asking takes longer than just doing.

Treating that as a governance failure is technically true and practically useless. The governance was always going to fail against that level of behavioural shift, because policy alone can’t out-compete a tool that delivers a better result in twenty seconds for free. The frame that fits the situation is product fit. Your official AI offering is competing for attention against ChatGPT, Claude, and Perplexity, and right now it’s losing on speed, friction, and quality. That’s a different problem to solve.

What is your team actually telling you?

When an employee chooses an unsanctioned tool over the one you provide, read it as product feedback. The pattern of where shadow AI is heaviest reveals the workflows that matter to the team and the friction your governance has quietly installed. The unofficial path was faster, or simpler, or better-fitting than the official one. That’s information about your tools.

The freeCodeCamp piece on this puts it cleanly. When the gap between the official tools and the external ones gets too large, employees choose speed. That’s optimisation, not insubordination. Your team is voting on the user experience of your governance, and the votes are clear.

The Hacker News thread on shadow AI a few months ago had a Ferrari-and-Prius metaphor that’s stuck with me. The Ferrari is what your team would drive if you let them. The Prius is the gated enterprise tool with the request form, the audit trail, and the mandatory training module. Both will get them to the meeting. One will get them there happy. Right now you’re paying for the Prius, and they’re using their own Ferrari on weekends.

If you’re a founder and you discover this in your business, the framing matters. Treating the analyst who used personal ChatGPT as a problem solves nothing, because the next analyst will do exactly the same when faced with the same friction. Treating the friction as the problem and the analyst as the messenger gives you both a fix and a reason for the team to trust the next official tool you roll out. You either lose institutional learning to a disciplinary process, or you gain the user research you’d otherwise have had to commission.

Where do the risks actually live?

The risks are real. Twenty percent of organisations had a confirmed shadow-AI-related breach last year. Customer data, source code, internal financial figures, strategic documents: all of it lands in third-party model logs when staff paste it in. That needs taking seriously. But where the response lands matters more than how strict the policy reads. Policy without product fit just pushes the behaviour further from view, and the risks get worse from there.

The risks are not evenly distributed. They land hardest in regulated industries, where a paste into ChatGPT can be a personal liability for the practitioner, not just a firm-level compliance issue. In healthcare, accounting, and law, the partner who pasted the client document carries the consequences along with the firm. That changes the calculation. It also changes the response, because policy in those industries needs to recognise that the practitioner is already exposed and the protection has to be tool-shaped rather than document-shaped.

In other settings the risks land more diffusely. A startup founder reading the IBM number might assume the breach is a public-facing leak. Most of them aren’t. They’re competitive intelligence going to vendors, internal financial figures landing in training data, customer information being processed by a tool that has a different retention policy than the firm’s own. Real, but quieter than a headline breach. The fix is to give the team a sanctioned tool that handles the routine cases inside your own walls, plus a short conversation about which categories of data have to stay there. Tighter policy alone does not get there.

How do you respond without making it worse?

The honest response runs in four moves. First, find out what’s actually being used. Second, provide the enterprise version of whatever the team is reaching for. Third, close the speed gap by removing approval-form friction for the routine cases. Fourth, train the team on what data goes where, so the few high-stakes cases get handled separately. Bans without alternatives just relocate the behaviour to whatever the next employee can find for free.

The first move is harder than it sounds. Most audits will tell you what’s allowed and not what’s actually happening. An anonymous survey usually gets you closer to the truth than a tooling audit, because people answer honestly when there’s no name on it. Ask which AI tools they’ve used in the last month for work, what they used them for, and what they tried before settling on the unofficial one. The answers shape the rest of the response.

The second move sounds expensive. It usually isn’t. Most enterprise versions of the tools the team is already using cost less than the legal and security work the breach risk implies. The bigger lift is the third move, closing the speed gap. That means giving routine prompts no more friction than opening a browser tab, with logging and access controls invisible to the user. If the official tool requires a request form, a manager approval, and a separate login, you have built a Prius in a market that wants Ferraris. The team will keep finding Ferraris.
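For the technically inclined, here is what "invisible to the user" can look like in practice. This is a minimal sketch in Python, assuming an enterprise deployment that speaks the OpenAI-style API; the endpoint URL, model name, log path, and environment variables are placeholders, not a recommendation of any particular vendor. The team calls one function, and the audit trail happens behind it.

```python
# Minimal sketch of a thin internal gateway: one function the team calls,
# which logs who asked what and forwards the prompt to the enterprise
# deployment. Endpoint, model name, and log path are placeholders.
import json
import logging
import os
from datetime import datetime, timezone

from openai import OpenAI  # assumes the enterprise deployment speaks the OpenAI-style API

# Hypothetical internal endpoint; swap in whatever your enterprise contract covers.
client = OpenAI(
    base_url=os.environ.get("ENTERPRISE_AI_URL", "https://ai.internal.example.com/v1"),
    api_key=os.environ["ENTERPRISE_AI_KEY"],
)

logging.basicConfig(filename="prompt_audit.log", level=logging.INFO)


def ask(prompt: str, user: str) -> str:
    """Send a routine prompt through the sanctioned path with invisible logging."""
    # Audit trail keeps metadata only, so the log never becomes a second copy
    # of whatever sensitive text the prompt contains.
    logging.info(json.dumps({
        "user": user,
        "when": datetime.now(timezone.utc).isoformat(),
        "chars": len(prompt),
    }))
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The point of the sketch is the shape rather than the specifics: one call, no form, and the logging the business needs without the friction the team would route around.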

The fourth move is the easiest to skip and the one that prevents the worst outcomes. Twenty minutes of training on what categories of data go where covers more risk than fifty pages of policy. The hard cases are usually well-understood once you name them. Customer PII, regulated client data, source code, board-level financial documents, M&A discussions. Those go through the sanctioned path. Most other things don’t need that level of friction, and pretending they do is what created the shadow in the first place.
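If you want a small amount of tooling behind that twenty minutes of training, a pre-flight check can catch the obvious cases before a prompt leaves the building. The sketch below is illustrative only: the patterns are examples made up for this post, not a real DLP rule set, and a regulated firm would want proper data-loss-prevention tooling rather than a handful of regexes.

```python
# Illustrative pre-flight check: a crude screen for the handful of data
# categories that must stay on the sanctioned path. Patterns are examples only.
import re

SENSITIVE_PATTERNS = {
    "card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b", re.I),
    "deal language": re.compile(r"\b(M&A|term sheet|board pack)\b", re.I),
}


def flag_sensitive(text: str) -> list[str]:
    """Return the categories a prompt appears to contain, so it can be
    routed to the sanctioned tool rather than a personal account."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


if __name__ == "__main__":
    draft = "Summarise the board pack and email jane.doe@client.co.uk the key figures."
    print(flag_sensitive(draft))  # ['email address', 'deal language']
```

The design choice that matters is that the check routes rather than blocks: the goal is to steer the few high-stakes cases onto the sanctioned path, not to add friction to everything else.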

Six months later

The founder I started with is now six months past the discovery, and the picture has changed. The official tool moved to the front of the team’s workflow with the approval-form friction stripped out. The analysts who’d been using personal accounts now use the enterprise version. One of them runs the internal AI working group. The breach risk dropped. The signal got read.

If you suspect this is happening in your business, you’re almost certainly right, and the question to ask yourself first is what your team is trying to tell you. Book a conversation if you’d rather not figure that out alone.

Sources

  • ICO (2024). Consultation series on generative AI and data protection: confirmation that all UK GDPR principles apply to generative AI deployment, including lawful basis, data minimisation, and transparency. Anchor for the "no AI exemption to data protection law" frame. ico.org.uk
  • NCSC. Principles for the Security of Machine Learning, expanded into the Guidelines for Secure AI System Development. The UK national cyber-security view on machine-learning system risk. ncsc.gov.uk
  • UK government (2025). Cyber Security Code of Practice for AI: thirteen principles covering the AI lifecycle, with implementation guide. The current UK voluntary baseline for AI security. hunton.com
  • Microsoft (2025). Work Trend Index across 31 countries: 75 per cent of knowledge workers use AI at work and 78 per cent bring their own AI tools, rising to 80 per cent in small and medium-sized firms. The base rate for shadow-AI prevalence in knowledge work. microsoft.com
  • Salesforce (2024). Generative AI at work survey, 14,000 workers across 14 countries: 28 per cent use generative AI at work and over half do so without employer approval. The shadow-AI as default-not-edge-case finding. salesforce.com
  • IBM (2025). Cost of a Data Breach Report: 97 per cent of organisations reporting AI-related security incidents lacked proper AI access controls; shadow-AI breaches cost an average of 670,000 dollars more than other incidents. The hard cost of unsanctioned AI usage. ibm.com
  • Reco AI (2025). State of Shadow AI report: organisations see only around 12 per cent of shadow-AI usage in their estate, and firms with 11 to 50 employees average 269 shadow-AI tools per 1,000 employees. The discovery-gap evidence. reco.ai
  • Group-IB (2023). Stealer-infected devices with saved ChatGPT credentials traded on dark web markets: 101,134 compromised accounts identified between June 2022 and May 2023, with the Asia-Pacific region accounting for 40.5 per cent. The credential-leak threat surface. group-ib.com
  • Harvard Business Review (2026). The hidden demand for AI inside your company: shadow AI as latent demand that smart firms surface and channel rather than suppress. Frames the diagnostic-not-threat reading the post argues for. hbr.org
  • MIT Sloan Management Review. The human side of AI adoption, lessons from the field: top-down rollouts that ignore where the team is already using AI tend to fail; bottom-up rollouts that follow existing usage tend to succeed. The product-feedback frame. sloanreview.mit.edu

Frequently asked questions

How worried should I be about shadow AI in my team?

Take the risks seriously without treating it as a discipline problem. The leakage is real, especially in regulated industries, but the underlying signal is that your sanctioned tool is slower or worse-fitting than what staff can find for free. Address both.

Should I just ban personal ChatGPT use?

Bans without sanctioned alternatives don't hold. They push the behaviour further from view, which makes the risks worse. Provide the enterprise version of whatever your team is reaching for, then make the data-handling rules concrete for the cases that matter.

How do I find out what shadow AI is actually being used?

Run an anonymous survey, not an audit. Audits surface what's allowed; surveys surface what's happening. Ask which tools, for which tasks, and what staff tried before settling on the unofficial one.

What categories of data should never go into an unsanctioned AI tool?

Customer PII, regulated client data, source code, board-level financial documents, M&A discussions. Most other things don't need that level of friction. The skill is in being concrete about which categories matter, then writing a short policy that holds.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
