The owner of a 22-person UK marketing services firm has read three summaries of the EU AI Act this month. One told her she had nothing to worry about. One told her she might face fines of seven per cent of global turnover. The third told her she needed an EU-based legal representative by August. She serves clients in Dublin and Amsterdam, runs a chatbot on her own site, and uses generative tools in client work. She has no idea which of the three summaries to act on.
This is the position many UK and EU-based owners find themselves in. The Act is real, the deadlines are fixed, and the scope is broader than most SMEs assume. Yet the actual obligations on a typical service-led SME are narrower than the headline noise suggests, and they are knowable. The point of this post is to give you the precise version, deadline by deadline, so you can decide which conversations to have on Monday and which to leave for next quarter.
This is not legal advice. The Act has edge cases that warrant a specialist solicitor for any decision that matters. What follows is the orientation map.
What is the EU AI Act?
The EU AI Act is the world’s first comprehensive AI regulation. It came into force on 1 August 2024 and applies a risk-based framework to AI systems put on the EU market or used in the EU, regardless of where the provider is based. The Act sorts AI into four tiers. Unacceptable practices are banned. High-risk systems carry full conformity assessment obligations. Limited-risk systems need transparency. Minimal-risk systems have no AI Act obligations.
The risk tier defines the workload. Unacceptable-risk practices, the eight categories in Article 5, do not cover any mainstream SME use case. They include subliminal manipulation, exploitation of vulnerability, untargeted facial-recognition database scraping, real-time biometric identification in public spaces for law enforcement, social scoring, emotion recognition in workplace or education settings, and a small number of similar applications. These have been enforceable since 2 February 2025.
High-risk systems are permitted on the EU market but heavily regulated. They are the Annex III use cases (biometrics, critical infrastructure, education, employment, essential services like credit and insurance, law enforcement, migration, justice) plus AI used as a safety component of products already regulated under EU law. Limited-risk systems are the everyday SME territory: chatbots, content generators, virtual assistants, meeting transcription tools, and similar productivity AI. Article 50 applies. Minimal-risk systems, everything else, carry no AI Act obligations beyond general EU law (GDPR, consumer protection).
Why does the EU AI Act matter for your business?
It matters because the territorial scope is broad, the deadlines are real, and the penalty framework has teeth. Article 2 brings any provider or deployer into scope if the AI system serves EU users or its output is used in the EU. A UK firm with EU clients is covered on the same logic that brought UK firms under GDPR. The enforcement infrastructure went live in 2025.
The penalties are tiered to the severity of the violation, with SME relief built in. Prohibited practices carry fines up to 35 million euro or seven per cent of global annual turnover, whichever is higher. High-risk non-compliance carries up to 15 million or three per cent. Article 99 caps SME penalties at the lower of the percentage or absolute amount applied to larger firms, and authorities must consider cooperation, intent, and self-reporting.
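The interaction between the headline caps and the Article 99 SME relief is easier to see as arithmetic. The sketch below is illustrative only, using the figures quoted above; it is not a legal calculator, and the `sme` flag is a simplification of how Article 99 would actually be applied.

```python
def fine_ceiling(global_turnover_eur: float, tier: str, sme: bool = False) -> float:
    """Illustrative upper bound of a fine under the figures quoted above.

    'prohibited' -> up to EUR 35m or 7% of global annual turnover
    'high_risk'  -> up to EUR 15m or 3% of global annual turnover
    """
    tiers = {
        "prohibited": (35_000_000, 0.07),
        "high_risk": (15_000_000, 0.03),
    }
    absolute, pct = tiers[tier]
    percentage = pct * global_turnover_eur
    # Headline caps take whichever figure is HIGHER; Article 99's SME
    # relief caps the penalty at whichever figure is LOWER.
    return min(absolute, percentage) if sme else max(absolute, percentage)
```

For a small firm turning over EUR 2 million, the SME cap for a prohibited-practice fine works out at seven per cent of turnover (EUR 140,000) rather than the EUR 35 million headline figure, which is the point of the relief.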
The cost of getting this wrong is rarely catastrophic for a service-led SME, but it is also not theoretical. National market surveillance authorities have full enforcement powers from August 2026. Customers who learn they were interacting with AI without disclosure also form a different view of the firm than customers who were told upfront.
Where will you actually meet the EU AI Act?
You will meet it in four places. First, in any customer-facing chatbot or virtual assistant your firm runs on its own site or inside client deliverables, where Article 50 requires clear disclosure that the user is interacting with AI. Second, in AI-generated content your firm produces or publishes, where synthetic text, audio, images, and video must be marked in a machine-readable way as artificially generated.
Third, in any AI system that crosses into Annex III territory. The categories most likely to catch a UK or EU SME are employment (recruitment screening, performance evaluation, workforce management decisions affecting individuals), essential services (credit scoring, insurance risk, healthcare access decisions), and education (admissions, learning evaluation). If your firm builds or operates AI that makes or substantially influences decisions in these domains, the system is high-risk and carries the full Annex III obligation set from 2 December 2027.
Fourth, in your supply chain. If you deploy a general-purpose AI model from another provider (GPT, Claude, Mistral, or similar) without modifying it substantially, the GPAI provider carries the model-level obligations and your obligations sit on how you deploy it. If you fine-tune a model materially for your own use case, you may become a provider of the modified model and inherit GPAI provider obligations. Most SMEs sit on the deployer side of this line.
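For the first two touchpoints, a concrete shape helps. The sketch below shows what Article 50-style disclosure and machine-readable marking could look like in practice. The wording, function names, and metadata keys are illustrative assumptions: the Act requires the disclosure and the marking, but it does not mandate this exact format, and real marking schemes for media (such as C2PA) are more involved.

```python
import json

def chatbot_greeting(bot_name: str) -> str:
    """First message a user sees: a plain-language AI disclosure."""
    return (
        f"Hi, I'm {bot_name}, an AI assistant. "
        "You are chatting with an automated system."
    )

def mark_generated(text: str, model: str) -> str:
    """Wrap generated copy with a machine-readable provenance record."""
    record = {
        "ai_generated": True,        # the substantive flag
        "model": model,              # which system produced the content
        "marking_scheme": "example", # placeholder; real schemes vary
    }
    return json.dumps({"content": text, "provenance": record})
```

The substance is the two habits, not the code: say upfront that the user is talking to AI, and ship generated content with a provenance record a machine can read.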
When to ask versus when to ignore
Ask when your AI touches Annex III territory, when your firm operates a chatbot or generates synthetic content for EU users, when you serve customers in any EU member state, or when you are about to commit to a vendor offering AI built on a general-purpose model. Ignore the temptation to treat every published commentary as binding on a 20-person firm.
A useful filter is a five-question test for your own use case.
1. Are you a provider or a deployer (or both)? If you build the system, you are a provider; if you use someone else's system, you are a deployer.
2. Does your use case fall into one of the eight Annex III categories? If not, high-risk does not apply.
3. Does your AI interact with humans, generate synthetic content, or categorise biometric data? If yes, Article 50 applies from August 2026.
4. Are you using a GPAI model unchanged, or fine-tuning it substantially? If unchanged, focus on deployment obligations.
5. Do you have EU users? If yes, the Act applies regardless of where you are based.
If the answers come back deployer, no Annex III exposure, limited-risk, and EU users present, the workload is real but manageable. Plan Article 50 disclosures and machine-readable content marking before 2 August 2026. Document the inventory. If any answer pushes you into high-risk, that is the point to bring in a specialist solicitor and look seriously at the EU regulatory sandbox programme. SMEs get priority and free access, and the sandbox is designed for exactly this kind of pre-launch testing.
Related concepts
This post is the SME-action follow-on to the conceptual explainer at what is the EU AI Act. Read that first if the four-tier framework or the GPAI distinction is new. Read this one when you are ready to work out what to do about it.
The Act sits inside a wider cluster on AI risk, trust, and governance for SMEs. The pillar post is AI risk and governance for owner-operated businesses, the structural read on what governance looks like at this scale. For the surrounding regulatory picture, the UK pro-innovation regulatory pivot and the US patchwork cover the other two jurisdictions UK SMEs commonly serve. For firms operating across more than one of these, the multi-jurisdiction AI compliance post is the integration view.
For internal practice, read the minimum viable AI policy for a small business and the audit trail an SME actually needs. For the disclosure conversation Article 50 is forcing, disclosing AI use to customers covers the practical wording.
The cluster does not replace specialist legal advice on your specific situation. The deadlines are firm, the scope is broad, and the detail in Annex III and Article 50 has edge cases worth a properly qualified solicitor’s time. If you want to talk through where your firm sits in the four tiers and what a proportionate response looks like at your scale, book a conversation.



