The MD of an 11-person estate agency is reviewing the firm’s quarterly Instagram output with the marketing manager. The marketing manager flicks through 18 property-listing images. Six of them are AI-enhanced or AI-generated: virtual staging, decluttered rooms, lifestyle compositions. None are labelled. The marketing manager asks: “Should we be putting a tag on these?” The MD pauses. He has not thought about it. He has heard of the EU AI Act in passing. The agency has a small Berlin client base he had forgotten brings it within scope of EU law.
The conversation surfaces a governance question most SMEs have never asked. AI-generated content does not always need to be labelled. It always needs to have been considered. The EU AI Act’s transparency obligations and ICO guidance both name specific contexts where disclosure is required, and the reputational risk of undisclosed AI use is rising fast in markets where customers care about authenticity.
What does the EU AI Act actually require?
The EU AI Act’s limited-risk transparency obligations, set out in Article 50, require two things relevant to most SMEs. First, any AI system that interacts directly with end users (notably chatbots) must disclose that the user is dealing with an AI. Second, AI-generated content (text, image, audio, video) used in ways that could mislead must be labelled as AI-generated. The act entered into force in August 2024, with the transparency obligations applying from August 2026 and other provisions phasing in through 2025-2027.
For UK SMEs, the act’s extraterritorial reach matters. Any UK firm whose AI system interacts with EU users falls under the act regardless of where the firm is based. The estate agency above, with a small Berlin client base, is in scope for any AI-generated imagery seen by those clients. So is any UK consultancy with EU customers, any UK e-commerce firm shipping to EU addresses, any UK SaaS company with EU users.
What does the ICO add?
The ICO’s transparency principle under UK GDPR is the parallel UK obligation. Where AI generates content from or about identified individuals, the privacy notice must say so. Where AI is used in customer-facing interactions that affect individuals’ rights or experience, the firm should disclose at the point of use. The ICO has not published a single rule equivalent to Article 50, but the transparency principle reaches similar ground.
The practical effect for SMEs is that even a UK-only firm with no EU customers should align with the same disclosure standard, because the ICO position is converging with the EU position and because customer expectations are doing so faster than either regulator.
Where do the three SME triggers actually hit?
The first trigger is customer-facing chatbots. If an SME runs an AI-powered chatbot on its website, the chatbot must clearly identify itself as AI and provide a path to a human. An opener along the lines of “I’m a virtual assistant; type ‘human’ to reach our team” is a common format. Many UK SMEs have deployed chatbots without this clarity, accumulating exposure under both the EU AI Act and ICO transparency guidance.
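The disclosure-plus-escalation pattern above can be sketched in a few lines. This is an illustrative shape, not a reference implementation; the function name, message wording, and `"human"` keyword are assumptions for the sketch.

```python
# Illustrative sketch of a chatbot opener that discloses AI status up front
# and offers a human-escalation path. Names and wording are assumptions.

AI_DISCLOSURE = (
    "I'm a virtual assistant (AI). Type 'human' at any time to reach our team."
)

def bot_reply(user_message: str) -> str:
    """Return the bot's reply, leading with the AI disclosure and honouring
    the 'human' escalation keyword."""
    if user_message.strip().lower() == "human":
        return "Connecting you to a member of our team now."
    return f"{AI_DISCLOSURE}\nHow can I help with your enquiry?"
```

The point of the sketch is placement: the disclosure is part of the bot’s own output at the point of interaction, not a line buried elsewhere on the site.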
The second trigger is AI-generated marketing imagery. In regulated markets, labelling protects against deception claims; in all markets, labelling matters where the image misrepresents what is on offer. Property listings that are AI-staged or virtually decluttered should be labelled because the imagery materially affects a buyer’s decision. The ASA’s CAP Code rules on misleading advertising apply on top of the transparency obligations above.
The third trigger is AI-generated reviews, testimonials, or “human” content. Presenting AI-generated content as human work is a deception risk. The FTC has signalled enforcement appetite here for firms with US customers; the ASA covers similar ground in the UK, and fake reviews are now banned outright under UK consumer law. The rule is straightforward: AI-generated content that is intended to be read as human work should be disclosed or rewritten.
What is the internal-use-only carve-out?
AI-generated content used only inside the firm does not trigger the rule: AI used to draft material that gets rewritten before publication; AI used for brainstorming or idea generation; AI used for internal communication, internal analysis, or work that does not reach customers. The trigger is publication or external customer interaction.
This permission frame matters because it stops SMEs from over-correcting toward labelling everything. The estate agency’s AI-staged property images are external and customer-facing, so the labelling question applies. The same agency’s internal AI-drafted memo on quarterly performance is internal, so the rule does not apply.
The line is sharper than it sounds. AI used to draft a customer email that the staff member then rewrites in their own words counts as AI-assisted drafting from the customer’s perspective. AI used to generate a customer email that is sent verbatim counts as AI-generated content, and the disclosure question applies.
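The internal/external line above can be distilled into a two-question decision helper. This is an illustration of the article’s trigger logic, not a legal test; the parameter names are assumptions.

```python
def disclosure_in_play(*, reaches_customers: bool, sent_verbatim: bool) -> bool:
    """Sketch of the trigger logic above: the disclosure question arises only
    when AI output reaches customers without substantive human rewriting.
    Illustrative only; not legal advice."""
    return reaches_customers and sent_verbatim

# The estate agency's AI-staged listing photo: external and published as-is.
# The internal AI-drafted quarterly memo: never leaves the firm.
```

A customer email drafted by AI but genuinely rewritten by staff falls on the `sent_verbatim=False` side of the line; one sent as generated does not.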
What does good labelling actually look like?
Different formats fit different contexts. For images, an “AI-generated” or “AI-assisted” tag near the image. For chatbots, an opening line identifying the AI and providing a human-escalation path. For systematic AI use across the firm, a paragraph in the privacy notice or terms of service. For high-stakes content (financial product imagery, healthcare communications, property listings with virtual staging), a clear visual or textual disclosure aligned with sector regulator expectations.
The general principle is that the disclosure should be visible at the point where the customer encounters the content, not buried in terms-of-service deep on the website. A chatbot disclosure on the chatbot itself. An image label on the image. A virtual-staging note on the listing page near the image, not on a separate FAQ.
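For the image case, “label on the image, at the point of encounter” can be as simple as a caption rendered inside the same figure element. The markup and class names below are assumptions for the sketch, not a prescribed format.

```python
def labelled_listing_image(src: str, alt: str,
                           label: str = "AI-staged image") -> str:
    """Build illustrative HTML for a listing photo with a visible AI label
    placed directly beneath the image, not on a separate FAQ page.
    Class names and default label text are assumptions."""
    return (
        '<figure class="listing-photo">'
        f'<img src="{src}" alt="{alt}">'
        f'<figcaption class="ai-label">{label}</figcaption>'
        "</figure>"
    )
```

The design choice being sketched is proximity: the label travels with the image wherever the image is embedded, so the disclosure cannot be separated from the content it qualifies.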
What about the reputational dimension beyond the law?
The legal floor is one consideration. The customer-trust ceiling is another. Customers in 2026 are increasingly interested in whether the work they bought was human-made. Authentic disclosure of AI use can become a positive trust signal. Hidden AI use, when discovered, is a trust-erosion event that damages the firm’s reputation in ways the regulator never sees.
Some firms are deciding to disclose more than the law requires, on the basis that early movers on transparency build trust faster than late movers compelled by enforcement. The position is debatable, but the direction of travel is one most firms should think about.
If you are running a firm with customer-facing AI in any form, and you would like to talk through what the labelling rule means for the specific deployments you have, book a conversation.



