The EU AI Act: what UK and EU SMEs actually need to do

TL;DR

The EU AI Act applies to any business putting AI-touched products or services into the EU market, regardless of where the business is based. For most SMEs running chatbots, content generators, and productivity AI, the binding obligation is Article 50 transparency, which applies from 2 August 2026. High-risk obligations apply from 2 December 2027 for standalone systems and 2 August 2028 for AI embedded in regulated products, following the May 2026 Omnibus deal. Prohibited practices have been enforceable since 2 February 2025. Most SMEs sit firmly in the limited-risk band.

Key takeaways

- The EU AI Act applies extraterritorially. A UK firm with EU customers is in scope regardless of where the firm is based, on the same principle that brought UK firms under GDPR.
- The Act uses four risk tiers. Unacceptable practices are banned, high-risk is heavily regulated, limited-risk requires transparency, and minimal-risk has no specific AI Act obligations. The vast majority of SME use cases sit in limited or minimal.
- Article 50 transparency is the binding obligation for most SMEs and applies from 2 August 2026. If you run a chatbot, generate synthetic content, or deploy a deepfake tool, users must know they are interacting with AI, and AI-generated content must be machine-readably marked.
- The May 2026 Omnibus deal pushed high-risk Annex III obligations to 2 December 2027 for standalone systems and 2 August 2028 for AI in regulated products. Article 50 was not delayed.
- The Act provides SME relief: priority free sandbox access, reduced conformity assessment fees, simplified documentation for microenterprises, and dedicated national support channels. Penalties for SMEs are capped at the lower of the percentage or absolute amount applied to larger firms.

The owner of a 22-person UK marketing services firm has read three summaries of the EU AI Act this month. One told her she had nothing to worry about. One told her she might face fines of seven per cent of global turnover. The third told her she needed an EU-based legal representative by August. She serves clients in Dublin and Amsterdam, runs a chatbot on her own site, and uses generative tools in client work. She has no idea which of the three summaries to act on.

This is the position many UK and EU-based owners are in. The Act is real, the deadlines are dated, and the scope is broader than most SMEs assume. Yet the actual obligations on a typical service-led SME are narrower than the headline noise suggests, and they are knowable. The point of this post is to give you the precise version, deadline by deadline, so you can decide which conversations to have on Monday and which to leave for next quarter.

This is not legal advice. The Act has edge cases that warrant a specialist solicitor for any decision that matters. What follows is the orientation map.

What is the EU AI Act?

The EU AI Act is the world’s first comprehensive AI regulation. It came into force on 1 August 2024 and applies a risk-based framework to AI systems put on the EU market or used in the EU, regardless of where the provider is based. The Act sorts AI into four tiers. Unacceptable practices are banned. High-risk systems carry full conformity assessment obligations. Limited-risk systems need transparency. Minimal-risk systems have no AI Act obligations.

The risk tier defines the workload. Unacceptable-risk practices, the eight categories in Article 5, are not a list any mainstream SME use case sits in. They cover subliminal manipulation, exploitation of vulnerability, social scoring, predicting criminal behaviour from profiling alone, untargeted facial-recognition database scraping, emotion recognition in workplace or education settings, biometric categorisation to infer sensitive characteristics, and real-time biometric identification in public spaces for law enforcement. These have been enforceable since 2 February 2025.

High-risk systems are the heavily regulated category that is still allowed on the EU market. They are the Annex III use cases (biometrics, critical infrastructure, education, employment, essential services such as credit and insurance, law enforcement, migration, justice) plus AI used as a safety component of products already regulated under EU law. Limited-risk systems are everyday SME territory: chatbots, content generators, virtual assistants, meeting transcription tools, and similar productivity AI. Article 50 applies. Minimal-risk systems, the rest, carry no AI Act obligation beyond general EU law (GDPR, consumer protection).

Why does the EU AI Act matter for your business?

It matters because the territorial scope is broad, the deadlines are real, and the penalty framework has teeth. Article 2 brings any provider or deployer into scope if the AI system serves EU users or its output is used in the EU. A UK firm with EU clients is covered on the same logic that brought UK firms under GDPR. The enforcement infrastructure went live in 2025 and is now active.

The penalties are tiered to the severity of the violation, with SME relief built in. Prohibited practices carry fines up to 35 million euro or seven per cent of global annual turnover, whichever is higher. High-risk non-compliance carries up to 15 million or three per cent. Article 99 caps SME penalties at the lower of the percentage or absolute amount applied to larger firms, and authorities must consider cooperation, intent, and self-reporting.
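To make the "lower of" SME cap concrete, here is a minimal arithmetic sketch. It is illustrative only, not a legal calculation: the function name and the simplification to a single absolute cap and percentage are my own, and real penalty-setting involves the mitigating factors the Act requires authorities to weigh.

```python
def max_fine_eur(turnover_eur: float, sme: bool,
                 absolute_cap: float = 35_000_000, pct: float = 0.07) -> float:
    """Illustrative ceiling for prohibited-practice fines.

    Larger firms face the HIGHER of the absolute amount or the percentage
    of global annual turnover; Article 99 caps SMEs at the LOWER of the two.
    A simplified sketch, not a legal calculation.
    """
    percentage_amount = turnover_eur * pct
    if sme:
        return min(absolute_cap, percentage_amount)
    return max(absolute_cap, percentage_amount)

# A 4 million euro turnover SME: 7% is 280,000, far below the 35 million
# absolute amount, so the SME ceiling is 280,000 rather than 35 million.
print(max_fine_eur(4_000_000, sme=True))   # 280000.0
print(max_fine_eur(4_000_000, sme=False))  # 35000000
```

The point of the example is the asymmetry: the same two numbers produce a ceiling five orders of magnitude apart depending on which side of the SME line the firm sits.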

The cost of getting this wrong is rarely catastrophic for a service-led SME, but it is also not theoretical. National market surveillance authorities have full enforcement powers from August 2026. Customers who learn they were interacting with AI without disclosure also form a different view of the firm than customers who were told upfront.

Where will you actually meet the EU AI Act?

You will meet it in four places. First, in any customer-facing chatbot or virtual assistant your firm runs on its own site or inside client deliverables, where Article 50 requires clear disclosure that the user is interacting with AI. Second, in AI-generated content your firm produces or publishes, where synthetic text, audio, images, and video must be marked in a machine-readable way as artificially generated.

Third, in any AI system that crosses into Annex III territory. The categories most likely to catch a UK or EU SME are employment (recruitment screening, performance evaluation, workforce management decisions affecting individuals), essential services (credit scoring, insurance risk, healthcare access decisions), and education (admissions, learning evaluation). If your firm builds or operates AI that makes or substantially influences decisions in these domains, the system is high-risk and carries the full Annex III obligation set from 2 December 2027.

Fourth, in your supply chain. If you deploy a general-purpose AI model from another provider (GPT, Claude, Mistral, or similar) without modifying it substantially, the GPAI provider carries the model-level obligations and your obligations sit on how you deploy it. If you fine-tune a model materially for your own use case, you may become a provider of the modified model and inherit GPAI provider obligations. Most SMEs sit on the deployer side of this line.

When to ask versus when to ignore

Ask when your AI touches Annex III territory, when your firm operates a chatbot or generates synthetic content for EU users, when you serve customers in any EU member state, or when you are about to commit to a vendor offering AI built on a general-purpose model. Ignore the temptation to treat every published commentary as binding on a 20-person firm.

A useful filter is the five-question test for your own use case.

1. Are you a provider or a deployer (or both)? You build the system, you are a provider. You use someone else's system, you are a deployer.
2. Does your use case fall into one of the eight Annex III categories? If no, high-risk does not apply.
3. Does your AI interact with humans, generate synthetic content, or categorise biometric data? If yes, Article 50 applies from August 2026.
4. Are you using a GPAI model unchanged, or fine-tuning it substantially? If unchanged, focus on deployment obligations.
5. Do you have EU users? If yes, the Act applies regardless of where you are based.
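The five-question test can be sketched as a first-pass triage function. Everything here is illustrative: the field names and messages are my own shorthand, not terms from the Act, and the output is orientation, not a compliance determination.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Self-assessed answers to the five questions; names are illustrative.
    is_provider: bool              # you build the system (vs. deploy another's)
    annex_iii_category: bool       # falls in an Annex III high-risk category
    interacts_or_generates: bool   # chatbot, synthetic content, biometric categorisation
    substantially_modified_gpai: bool  # you materially fine-tuned a GPAI model
    eu_users: bool                 # serves EU users or output used in the EU

def triage(u: UseCase) -> list:
    """Rough triage mirroring the five-question test. A sketch only."""
    if not u.eu_users:
        return ["Likely out of territorial scope (no EU users)."]
    notes = []
    if u.annex_iii_category:
        notes.append("Potentially high-risk: bring in a specialist solicitor.")
    if u.interacts_or_generates:
        notes.append("Article 50 transparency applies from 2 August 2026.")
    if u.substantially_modified_gpai:
        notes.append("May inherit GPAI provider obligations.")
    if not notes:
        notes.append("Likely minimal-risk: no specific AI Act obligations.")
    return notes

# The marketing-firm example from the opening: deployer, no Annex III
# exposure, chatbot plus generated content, unmodified GPAI, EU clients.
print(triage(UseCase(False, False, True, False, True)))
```

Running it on the opening example returns only the Article 50 note, which matches the limited-risk conclusion the rest of this post works through.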

If the answers are limited-risk, deployer, no Annex III exposure, EU users present, the workload is real but manageable. Plan Article 50 disclosures and machine-readable content marking before 2 August 2026. Document the inventory. If any answer pushes you into high-risk, that is the point to bring in a specialist solicitor and look seriously at the EU regulatory sandbox programme. SMEs get priority and free access, and the sandbox is designed for exactly this kind of pre-launch testing.

This post is the SME-action follow-on to the conceptual explainer at what is the EU AI Act. Read that first if the four-tier framework or the GPAI distinction is new. Read this one when you are ready to work out what to do about it.

The Act sits inside a wider cluster on AI risk, trust, and governance for SMEs. The pillar post is AI risk and governance for owner-operated businesses, the structural read on what governance looks like at this scale. For the surrounding regulatory picture, the UK pro-innovation regulatory pivot and the US patchwork cover the other two jurisdictions UK SMEs commonly serve. For firms operating across more than one of these, the multi-jurisdiction AI compliance post is the integration view.

For internal practice, read the minimum viable AI policy for a small business and the audit trail an SME actually needs. For the disclosure conversation Article 50 is forcing, disclosing AI use to customers covers the practical wording.

The cluster does not replace specialist legal advice on your specific situation. The deadlines are firm, the scope is broad, and the detail in Annex III and Article 50 has edge cases worth a properly qualified solicitor’s time. If you want to talk through where your firm sits in the four tiers and what a proportionate response looks like at your scale, book a conversation.

Sources

- European Commission. EU Artificial Intelligence Act implementation timeline, the official portal tracking every staged enforcement date and recent Omnibus changes. https://artificialintelligenceact.eu/implementation-timeline/
- European Commission. Article 50, Transparency obligations for providers and deployers of certain AI systems. https://artificialintelligenceact.eu/article/50/
- European Commission. Article 22, Authorised representatives of providers of high-risk AI systems. https://artificialintelligenceact.eu/article/22/
- European Commission. Article 99, Penalties under the EU AI Act, the tiered penalty framework with SME-specific relief provisions. https://artificialintelligenceact.eu/article/99/
- European Commission. Annex III, the eight high-risk AI use-case categories that determine when full high-risk obligations apply. https://artificialintelligenceact.eu/annex/3/
- European Commission. Small Businesses Guide to the AI Act, the official guide covering sandboxes, simplified documentation, and SME support provisions. https://artificialintelligenceact.eu/small-businesses-guide-to-the-ai-act/
- DLA Piper (2025). The latest wave of obligations under the EU AI Act, covering the August 2025 GPAI provider obligations and governance structures. https://www.dlapiper.com/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act
- Modulos (2026). What the EU AI Omnibus deal changes for the AI Act, the May 2026 agreement that moved high-risk standalone obligations to 2 December 2027 and high-risk-in-regulated-products to 2 August 2028. https://www.modulos.ai/blog/eu-ai-act-omnibus-deal/
- Pinsent Masons. Guide to high-risk AI systems under the EU AI Act, the practical reference for working out whether your Annex III use case is in scope. https://www.pinsentmasons.com/out-law/guides/guide-to-high-risk-ai-systems-under-the-eu-ai-act
- European Commission Directorate-General for Communication Networks, Content and Technology. AI Act governance and enforcement, the official breakdown of the AI Office, national market surveillance authorities, and the European AI Board. https://digital-strategy.ec.europa.eu/en/policies/ai-act-governance-and-enforcement

Frequently asked questions

Does the EU AI Act apply to my UK business if I have no EU office?

Yes, if your AI system serves EU users or its output is used in the EU. Article 2 covers any provider placing an AI system on the EU market or putting it into service in the EU, and any provider or deployer in a third country whose AI output is used in the EU. The territorial reach is modelled on the GDPR. The fact that your servers and incorporation are in the UK does not take you out of scope.

I run a chatbot for EU customers. What do I actually have to do by 2 August 2026?

Two things. First, users must be told they are interacting with AI before the conversation starts, in clear language, not buried in a footer. Second, any synthetic content the system generates (text, audio, images, video) must be marked in a machine-readable way as artificially generated. The marking must be detectable by automated systems, not just visible to humans. Vague phrasing like "powered by AI" does not satisfy the first obligation.
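The Act requires the marking to be machine-readable but does not prescribe a single format; provenance standards such as C2PA and IPTC's digital source type metadata are the mechanisms most providers are looking at. The sketch below is a hypothetical, simplified illustration of the idea for HTML output only, using an attribute name I invented for this example; it is not a compliant implementation.

```python
def mark_synthetic_html(fragment: str) -> str:
    """Wrap an AI-generated HTML fragment with a machine-detectable marker.

    Hypothetical illustration only: the Act does not prescribe this
    attribute, and real deployments should use an established provenance
    standard (e.g. C2PA) rather than an ad-hoc marker like this one.
    """
    return '<div data-ai-generated="true">' + fragment + '</div>'

marked = mark_synthetic_html("<p>Quarterly outlook draft.</p>")
print(marked)
```

The design point is the one in the answer above: a human-visible "powered by AI" badge and a marker that automated systems can detect are two separate obligations, and the second needs to live in the content itself or its metadata.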

Do I need to appoint an EU authorised representative?

Only if you are a non-EU provider of a high-risk AI system. Article 22 requires a formal written mandate with a legal entity established in the EU. For a UK firm offering a limited-risk chatbot or content generator to EU customers, no authorised representative is required. For a UK HR-tech firm offering recruitment-screening AI, or a UK fintech offering credit-decisioning AI, a representative is mandatory before placing the system on the EU market.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
