Talking to staff about AI use: encouragement, boundaries and what to forbid

[Image: an owner and five of her team around a meeting room table, a flipchart with three handwritten column headings on the wall behind them]
TL;DR

The conversation an owner-led firm needs about staff AI use has three parts: what we encourage and why, what we are testing together this quarter, and what we never do, no matter how convenient it would be. Drafting the never-do list with the team produces a policy people actually follow. It takes one structured meeting plus a quarterly review, and it replaces silence, blanket encouragement, or blanket bans with something a 5-to-50-person firm can sustain.

Key takeaways

- The default settles into one of three failures: silence (which creates shadow AI), blanket encouragement (which creates uneven exposure), or blanket suppression (which drives use underground). All three leave the firm in a worse place than a structured conversation would.
- The proportionate conversation has three parts: what we encourage and why, what we are testing together, and what we never do regardless of how convenient. The shape is small enough to hold in one team meeting and is the spine the rest of governance attaches to.
- The never-do list is drafted with the team, not imposed on it. People defend rules they helped write, and the team will surface specific risks the owner did not know existed, including the practical question of which paid tier the firm will pay for.
- The conversation costs the firm something in exchange: paid-tier access to approved tools, an hour a quarter of training time, and a clear escalation route when staff are unsure. Without those three, the never-do list is unenforceable.
- The conversation is not a one-off. A 30-minute quarterly check-in, with the same three questions each time, keeps the policy honest as new tools arrive and the team's actual use evolves.

The owner of a 17-person services firm is standing in her kitchen on a Sunday evening, working out what to say at the Monday morning team meeting. She knows two of her account managers are using ChatGPT every day. She suspects a third is pasting client briefs into Claude. Nobody has asked her permission and she has not given any. She has not banned anything either. The silence between her and the team about AI is now louder than any policy would be, and she has decided this week is the week to break it. She has not yet worked out how.

This is where many owner-led firms are sitting in 2026. The team is already using AI. The owner knows they are, in roughly the way she knows what time everyone arrives at work, by feel rather than by record. The conversation has not happened because there is no easy script for it. So the default settles into one of three shapes, and each one fails in a specific way.

What is the conversation about AI use actually about?

It is the explicit version of a conversation the team is already having implicitly, every day, without the owner in the room. The job is to bring it into the open in a form the firm can act on. The shape that works has three parts: what the firm encourages and why, what the firm is testing together this quarter, and what the firm never does, regardless of how convenient the moment looks.

The three-part shape is small enough to hold in one meeting and clear enough to write on a single page afterwards. It replaces silence with structure without tipping the firm into either uncritical enthusiasm or reflexive prohibition. The encouragement list legitimises everyday use. The testing list channels curiosity into managed experiments. The never-do list draws the firm’s hard lines and explains why they are hard.

Owners often hesitate at the encouragement list, worried that naming AI as acceptable will look like an endorsement of every possible use. The point of the testing list is that it absorbs that worry. Anything the firm is not yet sure about lives there, with a review date attached, until it earns a place on one of the other two lists.

Why does this matter for your business?

It matters because the three defaults all fail, and the firm is paying for one of them right now. Silence creates shadow use: Menlo Security's 2025 research found that 68 percent of employees use free-tier AI through personal accounts, and that 57 percent of those input sensitive data into them. Blanket encouragement creates uneven capability and uneven exposure. Blanket suppression converts visible use into invisible use, which is harder to manage.

There is a quieter reason it matters, captured in Slack's Fall 2024 Workforce Index. The Index found that 48 percent of desk workers feel uncomfortable admitting to their manager that they have used AI for a routine task. The top reasons were shame about feeling like cheating, anxiety about being seen as less competent, and worry about looking lazy. The team is anxious, the owner is silent, and the silence is read as a verdict the owner has not actually given. The cost of leaving that gap open is paid in trust as much as in risk. The cost of closing it is a single morning of structured conversation.

Where will you actually meet it?

You meet it in five places. The Monday after the team meeting, when an account manager asks whether the new tool she found counts as testing or as never-do. The procurement conversation, when an existing vendor adds AI features and the firm has to decide whether to switch them on. The customer call, when a client asks “do you use AI on our work” and you need a single accurate sentence to answer.

You meet it again at the quarterly review, where the three lists get checked against what actually happened in the three months since the last one. You meet it a fifth time when a new joiner arrives and the conversation has to be rerun in five minutes, on a printed page, without the original meeting. Each of those five touchpoints is cheap if the conversation has happened and expensive if it has not. The page is the artefact. The conversation is the thing that made the page worth the paper.

When to ask the team versus when to decide it yourself

Ask the team when drafting the never-do list, when picking which uses go on the testing list, and when scoping what training and which paid-tier tools the firm will fund. People defend rules they helped write, and the team will surface specific risks an owner sitting alone with a Word document will miss, including the practical question of which paid tier the firm needs in order to make the policy enforceable.

Decide it yourself on the parts where the firm carries the legal or reputational weight alone: whether the firm discloses AI use to customers, what counts as a hard never (client personal data into a free-tier chatbot, source code into a public model, AI-only customer decisions on refunds or eligibility), and what the firm spends on paid tools and training. The Samsung 2023 source-code leak and the Air Canada chatbot ruling are the two concrete cases worth bringing into the room, so the never-do list is anchored in real incidents rather than abstract caution. Acas guidance on consultation and change is explicit that consultation improves working relationships and produces better employer decisions. The participative never-do list is the part of this conversation where that principle pays the firm back the most.

This conversation sits inside the same cluster as the proportionate AI governance framework for owner-led firms and the minimum viable AI policy that the conversation produces. It pairs with the shadow AI as feedback lens for reading unsanctioned use as diagnostic information about unmet need, and with the default ban backfires post for the inverse argument on why blanket suppression makes the problem worse.

For the technical decisions the never-do list rests on, read free versus paid AI tier privacy and where your data goes when you paste into a chatbot. For the regulatory backdrop, the EU AI Act for UK and EU SMEs and the UK pro-innovation pivot cover the obligations the conversation has to respect.

The conversation does not replace employment law advice for any change that materially affects a role, sector-specific regulatory guidance, or specialist data protection counsel for heavy processing. It is the thing those harder conversations attach to. If the team’s AI use has been a silence for months and you want to talk about how to break it without tipping the firm into the opposite failure mode, book a conversation.

Sources

- Information Commissioner's Office. Guidance on AI and data protection, the UK reference for any firm whose staff input personal data into AI tools. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
- Acas. Consultation and change, the UK employment advisory service guidance on involving workers in policy decisions, which the participative never-do list draws on directly. https://www.acas.org.uk/consultation-and-change
- Acas (2024). One in four UK workers worry that AI will lead to job losses, the survey that frames the anxiety side of the conversation. https://www.acas.org.uk/1-in-4-workers-worry-that-ai-will-lead-to-job-losses
- Menlo Security (2025). Report on the 68 percent surge in shadow generative AI usage in the modern enterprise, the source for the baseline shadow-use figure. https://www.menlosecurity.com/press-releases/menlo-securitys-2025-report-uncovers-68-surge-in-shadow-generative-ai-usage-in-the-modern-enterprise
- Slack (2024). The Fall 2024 Workforce Index, source for the finding that 48 percent of desk workers feel uncomfortable admitting AI use to their manager. https://slack.com/blog/news/the-fall-2024-workforce-index-shows-executives-and-employees-investing-in-ai-but-uncertainty-holding-back-adoption
- Amy Edmondson, Harvard Business Review (2025). In tough times, psychological safety is a requirement not a luxury, the source for the psychological safety conditions a working AI conversation depends on. https://hbr.org/2025/11/in-tough-times-psychological-safety-is-a-requirement-not-a-luxury
- Scottish AI Playbook case study (2024). Financial Times, Building an AI-fluent workforce for a responsible future, a worked example of structured staff conversation, principles and layered training. https://www.scottishaiplaybook.com/case-studies/financial-times-building-an-ai-fluent-workforce-for-a-responsible-future
- Zapier (2024). How Zapier rolled out AI internally, the case study on top-down plus bottom-up adoption, AI enablement hubs and tracked quarterly metrics. https://zapier.com/blog/how-zapier-rolled-out-ai/
- Airia (2025). Why AI bans backfire in enterprise strategy, the analysis of how blanket suppression converts visible AI use into invisible underground use. https://airia.com/why-ai-bans-backfire-enterprise-strategy/
- Cybernews (2023). The Samsung ChatGPT leak explained, the canonical example used to anchor the never-do list against a concrete incident. https://cybernews.com/security/chatgpt-samsung-leak-explained-lessons/

Frequently asked questions

How long does this conversation actually take?

One structured team meeting of 60 to 90 minutes to draft the three lists with the team, then a quarterly 30-minute review. The first session takes longer because there are real questions to work through, particularly on the never-do list and the paid-tier decision. After that the rhythm is light. Total time across the year is roughly four hours of owner time and four hours of team time. Most firms find this cheaper than the cost of one shadow AI incident.

What if my team is already using AI without permission?

Assume they are. Menlo Security's 2025 research found that 68 percent of employees use free-tier AI tools through personal accounts, and 57 percent of those input sensitive data. Open the conversation by naming this directly, without an accusatory tone, and ask the team what they are using and what for. Treat the answers as diagnostic information about unmet need, not as misconduct. The conversation is the route from invisible use to managed use.

Do I need a written policy or is the conversation enough?

Write down the three lists, on one page, signed by you. The conversation is what produces the policy. The policy is what the firm can point to when a customer asks, when a new joiner needs onboarding, or when the team has forgotten what was agreed three months ago. Without the written page the conversation fades. Without the conversation the page is unloved.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
