An owner I spoke with last month has been drafting client work with AI assistance for the better part of a year. Research, first drafts, the boring parts of a report. She has never mentioned it to her largest client. The client has never asked. The work has been good and the relationship has been strong. Two weeks ago a friend in the same sector told her that her own clients had started asking. Now the owner cannot decide which is the worse outcome: telling the client and watching them flinch, or having the client find out later from someone else.
She is sitting where a meaningful share of professional services owners are sitting right now. The chatbots are useful, the work is shipping, and the question of what to say out loud has never quite been answered. It is not a single legal rule. It is not a single market norm. It is a mix of regulation that is hardening fast, contractual practice that is already settling, and client expectations that are climbing year on year. The shape of the right answer is clearer than it looks, and the wrong moves come in two flavours.
What does the law actually require right now?
For a typical UK SME there is, as of May 2026, no single statute that says you must tell clients you use AI. The position changes on 2 August 2026, when Article 50 of the EU AI Act takes effect. That rule mandates disclosure for any AI system that interacts directly with people, and labelling for any synthetic content the system produces. UK firms serving EU clients are in scope.
In the UK, regulators are working through their existing remits rather than waiting for a single AI law. The SRA tells solicitors that it should always be made clear to clients where they are interfacing with AI. The ICO holds the line on transparency under UK GDPR whenever personal data is processed. The FCA expects disclosure for AI in customer-facing decisions that materially affect pricing, suitability or credit. The ASA, the RICS and the CMA have each said something similar in their own language.
Why does this matter for your business?
It matters because the two failure modes both cost money, and they cost it in different ways. Over-disclosing invites the client to ask why the fee has not dropped, because they read it as a sign your work has become cheaper. Under-disclosing reads as concealment when the client finds out from a competitor or a tell in a deliverable, and is harder to recover from than the original conversation would have been.
The over-disclosure case is well documented. Research from the University of East London on AI-driven pricing found that customers react badly when they believe a provider has captured an efficiency gain rather than shared it. A founder anxious to be seen as honest writes a paragraph in the proposal that reads as if half the work is done by ChatGPT, and the fee conversation tilts against them before the project starts.
The under-disclosure case has a textbook example too. In February 2023 the Office of Equity, Diversity and Inclusion at Vanderbilt University's Peabody College sent a mass email in response to the Michigan State University shooting, with a footnote noting that ChatGPT had helped draft it. An apology email had to follow within days. The pattern repeats in client work. The 2024 Salesforce State of the AI Connected Customer report found trust in businesses to use AI ethically dropping from 58 percent to 42 percent in a single year, and 72 percent of customers saying it is important to know when they are interacting with AI.
Where will you actually meet it in your business?
You will meet it in five places, only one of which is the dramatic live conversation. The engagement letter is first and carries the heaviest weight, because it sets the expectation before the client has a stake in objecting. The scope of work is second. Terms of business and website copy come third. The existing client base is fourth, and the live question at coffee is fifth.
The five surfaces are not equally urgent. The engagement letter rewrites itself once and protects every new piece of work that follows it. The scope of work and terms of business sit downstream of that, and tend to fall into line once the engagement letter is settled. The existing client base is the larger and more delicate piece of work, because there is already context that has not been shared. The live moment is a low-probability event, and the probability drops further once the other four are in good order.
When should you say it, and when can you let it pass?
The professions have settled on a general-level disclosure rather than a line-by-line one. A short paragraph in the engagement letter names AI use in research, drafting and administrative tasks, describes the human review process, confirms the firm retains full professional responsibility, and rules out client data going into public tools without consent. You disclose the policy, not the keystroke.
The big-firm precedent now points the same way. Slaughter and May publicly announced its firm-wide adoption of the Harvey AI platform in April 2026. Clifford Chance has published AI principles that commit the firm to telling clients how their information is being used when AI is involved. The Texas Bar Practice has issued three sample engagement-letter clauses now used as templates across UK and US practice. The common shape sits inside four lines: the firm may use AI, a qualified person reviews the work, the firm remains fully responsible, and client data does not enter public systems without consent.
The harder case is the existing client who has never been told. The strongest move is to open the conversation yourself in a regular review or a short note, before they ask. The script is short. You have been using AI tools to speed up research and drafting. Every piece of work has been reviewed by a person before it reached them. The firm is still fully responsible. Their confidential data has not been pasted into any public system. Then you ask whether they have any preferences for how you handle it from here. Going to the client with the answer is far stronger than being asked the question. The same logic sits underneath when to label AI-generated content, which covers the asset-level version of the same call.
Related concepts you will want to cross-check
Two related areas sit next to this one and are easy to confuse with it. The first is ownership. Disclosing AI use is a separate question from who owns the resulting work, covered in who owns the work when AI wrote it. The second is data handling: telling the client you use AI is one move, telling them what happens to their data is another.
The data-handling piece is covered in where your data goes when you paste into a chatbot. A clean disclosure paragraph in the engagement letter does the visible part of the work. A clean data policy behind it does the work that holds up when a regulator or a careful general counsel asks the next question. The SRA has been explicit that confidentiality cannot be outsourced, and the same principle applies whether or not your firm sits inside a regulated profession.
If you are sitting on a client base you have never told, an engagement letter that does not mention AI, or a deliverable about to go out that has more of the model in it than feels comfortable, book a conversation.