The owner of a 17-person services firm is standing in her kitchen on a Sunday evening, working out what to say at the Monday morning team meeting. She knows two of her account managers are using ChatGPT every day. She suspects a third is pasting client briefs into Claude. Nobody has asked her permission and she has not given any. She has not banned anything either. The silence between her and the team about AI is now louder than any policy would be, and she has decided this week is the week to break it. She has not yet worked out how.
This is where many owner-led firms are sitting in 2026. The team is already using AI. The owner knows they are, in roughly the way she knows what time everyone arrives at work, by feel rather than by record. The conversation has not happened because there is no easy script for it. So the default settles into one of three shapes, silence, blanket encouragement, or blanket suppression, and each one fails in a specific way.
What is the conversation about AI use actually about?
It is the explicit version of a conversation the team is already having implicitly, every day, without the owner in the room. The job is to bring it into the open in a form the firm can act on. The shape that works has three parts: what the firm encourages and why, what the firm is testing together this quarter, and what the firm never does regardless of how convenient the moment looks.
The three-part shape is small enough to hold in one meeting and clear enough to write on a single page afterwards. It replaces silence with structure without tipping the firm into either uncritical enthusiasm or reflexive prohibition. The encouragement list legitimises everyday use. The testing list channels curiosity into managed experiments. The never-do list draws the firm’s hard lines and explains why they are hard.
Owners often hesitate at the encouragement list, worried that naming AI as acceptable will look like an endorsement of every possible use. The point of the testing list is that it absorbs that worry. Anything the firm is not yet sure about lives there, with a review date attached, until it earns a place on one of the other two lists.
Why does this matter for your business?
It matters because the three defaults all fail and the firm is paying for one of them right now. Silence creates shadow use. Menlo Security’s 2025 research found 68 percent of employees use free-tier AI through personal accounts and 57 percent of those input sensitive data into them. Blanket encouragement creates uneven capability and uneven exposure. Blanket suppression converts visible use into invisible use, which is harder to manage.
There is a quieter reason it matters, captured in Slack’s Fall 2024 Workforce Index. The Index found that 48 percent of desk workers feel uncomfortable admitting to their manager that they have used AI for a routine task. The top reasons were shame about feeling like cheating, anxiety about being seen as less competent, and worry about looking lazy. The team is anxious, the owner is silent, and the silence is read as a verdict the owner has not actually given. The cost of leaving that gap open is paid in trust as much as in risk. The cost of closing it is a single morning of structured conversation.
Where will you actually meet it?
You meet it in five places. The Monday after the team meeting, when an account manager asks whether the new tool she found counts as testing or as never-do. The procurement conversation, when an existing vendor adds AI features and the firm has to decide whether to switch them on. The customer call, when a client asks “do you use AI on our work” and you need a single accurate sentence to answer.
You meet it again at the quarterly review, where the three lists get checked against what actually happened in the three months since the last one. You meet it a fifth time when a new joiner arrives and the conversation has to be rerun in five minutes, on a printed page, without the original meeting. Each of those five touchpoints is cheap if the conversation has happened and expensive if it has not. The page is the artefact. The conversation is the thing that made the page worth the paper.
When to ask the team versus when to decide it yourself
Ask the team when drafting the never-do list, when picking which uses go on the testing list, and when scoping what training and which paid-tier tools the firm will fund. People defend rules they helped write, and the team will surface specific risks an owner sitting alone with a Word document will miss, including the practical question of which paid tier the firm needs in order to make the policy enforceable.
Decide it yourself on the bits where the firm carries the legal or reputational weight alone. Whether the firm discloses AI use to customers, what counts as a hard never (client personal data into a free-tier chatbot, source code into a public model, AI-only customer decisions on refunds or eligibility), and what the firm spends on paid tools and training. The Samsung 2023 source-code leak and the Air Canada chatbot ruling are the two concrete cases worth bringing into the room so the never-do list is anchored in real incidents rather than abstract caution. Acas guidance on consultation and change is explicit that consultation improves working relationships and produces better employer decisions, and the participative never-do list is the part of this conversation where that principle pays the firm back the most.
Related concepts
This conversation sits inside the same cluster as the proportionate AI governance framework for owner-led firms and the minimum viable AI policy that the conversation produces. It pairs with the “shadow AI as feedback” lens for reading unsanctioned use as diagnostic information about unmet need, and with the “default ban backfires” post for the inverse argument on why blanket suppression makes the problem worse.
For the technical decisions the never-do list rests on, read free versus paid AI tier privacy and where your data goes when you paste into a chatbot. For the regulatory backdrop, the EU AI Act for UK and EU SMEs and the UK pro-innovation pivot cover the obligations the conversation has to respect.
The conversation does not replace employment law advice for any change that materially affects a role, sector-specific regulatory guidance, or specialist data protection counsel for heavy processing. It is the thing those harder conversations attach to. If the team’s AI use has been a silence for months and you want to talk about how to break it without tipping the firm into the opposite failure mode, book a conversation.