When you must label AI-generated content

TL;DR

AI-generated content does not always need to be labelled, but it always needs to have been considered. Three contexts trigger formal disclosure: customer-facing chatbots that engage end users, AI-generated content that could mislead users about its origin, and any AI-generated content seen by EU users where the EU AI Act limited-risk transparency rule applies. Beyond the legal obligations, the reputational case for clear AI disclosure is increasingly strong with customers who care about authenticity.

Key takeaways

- The EU AI Act limited-risk transparency obligation requires customer-facing chatbots that engage EU users to disclose AI involvement, and requires AI-generated content that could mislead to be labelled.
- Three SME triggers: customer-facing chatbots, AI-generated marketing imagery (especially in regulated markets or where representation accuracy matters), and AI-generated reviews or testimonials presented as human.
- The ICO transparency principle applies where AI generates content from or about identified individuals. Privacy notice and disclosure-at-point-of-use should be aligned.
- The internal-use-only escape: AI-generated content used only inside the firm (drafting that gets rewritten, brainstorming, internal communication) does not trigger the rule.
- The reputational dimension: customers in 2026 increasingly care whether work was human-made. Authentic disclosure of AI use can become a positive trust signal; hidden AI use, when discovered, is a trust-erosion event.
- The ASA's CAP Code rules on misleading advertising apply in addition to data protection law. Property listing imagery, financial product imagery, and consumer-facing claims need accuracy.

The MD of an 11-person estate agency is reviewing the firm’s quarterly Instagram output with the marketing manager. The marketing manager flicks through 18 property-listing images. Six of them are AI-enhanced or AI-generated: virtual staging, decluttered rooms, lifestyle compositions. None are labelled. The marketing manager asks: “Should we be putting a tag on these?” The MD pauses. He has not thought about it. He has heard of the EU AI Act in passing, and the agency has a small Berlin client base that, he had forgotten, brings the firm within the reach of EU law.

The conversation surfaces a governance question most SMEs have never asked. AI-generated content does not always need to be labelled. It always needs to have been considered. The EU AI Act’s transparency obligations and ICO guidance both name specific contexts where disclosure is required, and the reputational risk of undisclosed AI use is rising fast in markets where customers care about authenticity.

What does the EU AI Act actually require?

The EU AI Act’s limited-risk transparency obligation, set out in Article 50, requires two things relevant to most SMEs. First, any AI system that engages end users (notably chatbots) must disclose that the user is interacting with an AI. Second, AI-generated content (text, image, audio) used in ways that could mislead must be labelled as AI-generated. The Act entered into force in August 2024, with obligations phasing in from 2025; the Article 50 transparency obligations apply from August 2026.

For UK SMEs, the act’s extraterritorial reach matters. Any UK firm whose AI system interacts with EU users falls under the act regardless of where the firm is based. The estate agency above, with a small Berlin client base, is in scope for any AI-generated imagery seen by those clients. So is any UK consultancy with EU customers, any UK e-commerce firm shipping to EU addresses, any UK SaaS company with EU users.

What does the ICO add?

The ICO’s transparency principle under UK GDPR is the parallel UK obligation. Where AI generates content from or about identified individuals, the privacy notice must say so. Where AI is used in customer-facing interactions that affect individuals’ rights or experience, the firm should disclose at the point of use. The ICO has not published a single rule equivalent to Article 50, but the transparency principle reaches similar ground.

The practical effect for SMEs is that even a UK-only firm with no EU customers should align with the same disclosure standard, because the ICO position is converging with the EU position and because customer expectations are doing so faster than either regulator.

Where do the three SME triggers actually hit?

The first trigger is customer-facing chatbots. If an SME runs an AI-powered chatbot on its website, the chatbot must clearly identify itself as AI and provide a path to a human. The opener “I’m a virtual assistant; type human to reach our team” is a widely used format. Many UK SMEs have deployed chatbots without this clarity, accumulating exposure under both the EU AI Act and ICO transparency guidance.
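As a sketch of what that looks like in practice, the snippet below wraps a chatbot’s replies so the first message of a session carries the AI disclosure and any request for a human is routed to escalation rather than answered by the bot. The function names and wording are illustrative assumptions, not a format mandated by the Act or the ICO:

```typescript
type BotReply = { text: string; escalateToHuman: boolean };

// Disclosure wording is an example, not prescribed text.
const AI_DISCLOSURE = "I'm a virtual assistant; type 'human' to reach our team.";

// Prepend the AI disclosure to the first turn of a session, and route any
// request for a human to an escalation path instead of the bot.
function handleMessage(userText: string, isFirstTurn: boolean): BotReply {
  if (userText.trim().toLowerCase() === "human") {
    return { text: "Connecting you to a member of our team.", escalateToHuman: true };
  }
  const answer = "Thanks, let me look into that for you."; // placeholder bot answer
  return {
    text: isFirstTurn ? `${AI_DISCLOSURE}\n${answer}` : answer,
    escalateToHuman: false,
  };
}
```

The point of the structure is that the disclosure is wired into the session itself, not left to a page footer the customer may never read.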

The second trigger is AI-generated marketing imagery. In regulated markets, labelling protects against deception claims; in all markets, labelling matters where the image misrepresents what is on offer. Property listings that are AI-staged or virtually decluttered should be labelled because the imagery materially affects a buyer’s decision. The ASA’s CAP Code rules on misleading advertising apply in addition to data protection law.

The third trigger is AI-generated reviews, testimonials, or “human” content. Presenting AI-generated content as human work is a deception risk. The FTC has signalled enforcement appetite here for US-customer firms; the ASA has similar ground in the UK. The rule is straightforward: AI-generated content that is intended to be read as human work should be disclosed or rewritten.

What is the internal-use-only escape?

AI-generated content used only inside the firm does not trigger the rule. AI used to draft material that gets rewritten before publication. AI used for brainstorming or idea generation. AI used for internal communication, internal analysis, or work that does not reach customers. The trigger is publication or external customer interaction.

This permission frame matters because it stops SMEs from over-correcting toward labelling everything. The estate agency’s AI-staged property images are external and customer-facing, so the labelling question applies. The same agency’s internal AI-drafted memo on quarterly performance is internal, so the rule does not apply.

The line is sharper than it sounds. AI used to draft a customer email that the staff member then rewrites in their own words is AI-assisted drafting and stays on the internal side of the line. AI used to generate a customer email that is sent verbatim is AI-generated content, and the disclosure question applies.

What does good labelling actually look like?

Different formats fit different contexts. For images, an “AI-generated” or “AI-assisted” tag near the image. For chatbots, an opening line identifying the AI and providing a human-escalation path. For systematic AI use across the firm, a paragraph in the privacy notice or terms of service. For high-stakes content (financial product imagery, healthcare communications, property listings with virtual staging), a clear visual or textual disclosure aligned with sector regulator expectations.

The general principle is that the disclosure should be visible at the point where the customer encounters the content, not buried in terms-of-service deep on the website. A chatbot disclosure on the chatbot itself. An image label on the image. A virtual-staging note on the listing page near the image, not on a separate FAQ.
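To make the point-of-use principle concrete, here is a minimal sketch of a listing-page helper that puts the virtual-staging note directly beside the image rather than on a separate page. The markup and disclosure wording are assumptions for illustration, not a mandated format:

```typescript
// Wrap a listing image with an adjacent AI-staging disclosure when needed.
// The figcaption sits next to the image, not in a distant FAQ or footer.
function labelledListingImage(src: string, alt: string, aiStaged: boolean): string {
  const img = `<img src="${src}" alt="${alt}">`;
  if (!aiStaged) return img;
  return `<figure>${img}<figcaption>Image is AI-staged: furnishings shown are virtual.</figcaption></figure>`;
}
```

Usage is a single call per image, e.g. `labelledListingImage("flat-12.jpg", "Living room", true)`, so the disclosure cannot drift away from the content it qualifies.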

What about the reputational dimension beyond the law?

The legal floor is one consideration. The customer-trust ceiling is another. Customers in 2026 are increasingly interested in whether the work they bought was human-made. Authentic disclosure of AI use can become a positive trust signal. Hidden AI use, when discovered, is a trust-erosion event that damages the firm’s reputation in ways the regulator never sees.

Some firms are deciding to disclose more than the law requires, on the basis that early movers on transparency build trust faster than late movers compelled by enforcement. The position is debatable, but the direction of travel is clear enough that most firms should think about it.

If you are running a firm with customer-facing AI in any form, and you would like to talk through what the labelling rule means for the specific deployments you have, book a conversation.

Sources

  • EU AI Act overview. Source.
  • EU AI Act Article 50 transparency obligations. Source.
  • ICO guidance on AI and personal data. Source.
  • FTC Endorsement Guides. Source.
  • ASA and CAP code (UK advertising standards). Source.
  • National Institute of Standards and Technology (2023). AI Risk Management Framework (AI RMF 1.0). Establishes measurement rigour and uncertainty quantification as core governance practice. Source.
  • National Association of Corporate Directors (2025). AI Friend and Foe, Director's Handbook on AI Oversight. Foundational governance principles for board-level AI oversight, transparency, risk frameworks and stakeholder communication. Source.
  • Chartered Governance Institute UK (2024). Artificial Intelligence and the Governance Professional. UK governance perspective on lawful, ethical and responsible AI use embedded within risk management frameworks. Source.

Frequently asked questions

When does the EU AI Act require me to label AI-generated content?

Two main triggers under the limited-risk transparency obligation. Any AI system that engages EU end users (notably chatbots) must disclose that the user is interacting with AI. AI-generated content (text, image, audio) used in ways that could mislead must be labelled as AI-generated. The Act entered into force in August 2024, with the Article 50 transparency obligations applying from August 2026.

What about AI-generated marketing imagery?

In regulated markets (financial services, healthcare, consumer-protection-sensitive contexts), labelling protects against deception claims. In all markets, label if the image misrepresents what is on offer. Property-listing images that are AI-staged or virtually decluttered should be labelled because they materially affect a buyer's decision. The ASA's CAP Code rules on misleading advertising apply in addition to data protection law.

When is AI-generated content fine to use without labelling?

For internal-use-only content. AI used to draft material that gets significantly rewritten before publication. AI used for brainstorming or idea generation. AI used for internal communication or analysis that does not reach customers. The labelling rule activates at the point of publication or external customer interaction, not at the moment of AI use.

What disclosure formats actually work?

An 'AI-generated' or 'AI-assisted' tag on images. A clear chatbot opener like 'I'm a virtual assistant; type human to reach our team'. A disclosure paragraph in the firm's terms or privacy notice on systematic AI use. A clear statement on the website naming the firm's AI uses. The format that fits the context: badges for visual content, opening lines for chatbots, paragraphs for systematic disclosure.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
