What is the EU AI Act? Why it matters for UK businesses

TL;DR

The EU AI Act is the world's first comprehensive AI law, and as of 2 August 2026 the bulk of it is in force. UK incorporation provides no shield; the test is whether your AI system or its output lands in the EU. The Act sorts systems into four risk tiers, splits obligations between providers and deployers, and carries fines of up to 35 million euro or 7 per cent of global turnover. The right move is to assume scope, inventory your AI tools, classify each one, name a human overseer for high-risk uses, and review vendor contracts.

Key takeaways

- The EU AI Act applies to UK firms whose AI system or its output is used in the EU. Article 2 captures providers placing systems on the EU market, EU-based deployers, and, crucially, deployers outside the EU whose outputs land there. UK incorporation is not a shield.
- Four risk tiers set the obligations. Tier 1 unacceptable practices are banned outright; Tier 2 high-risk systems in the eight Annex III categories carry heavy controls; Tier 3 limited-risk systems need Article 50 transparency disclosures; Tier 4 minimal-risk systems face no obligations.
- UK service SMEs are typically deployers rather than providers. Provider obligations include conformity assessment and CE marking; deployer obligations are lighter but real: named human oversight, log retention, transparency to affected individuals, and serious-incident reporting. Substantial fine-tuning or rebranding can flip a deployer into provider status.
- Penalties are tiered: up to 35 million euro or 7 per cent of global turnover for Article 5 prohibited practices, up to 15 million euro or 3 per cent for high-risk obligation breaches, and up to 7.5 million euro or 1 per cent for procedural failures. Article 99 directs regulators to consider SME viability, but the statutory ceiling still applies.
- A defensible 2026 starter plan has five steps: inventory all AI in use, classify each system under Article 6 and Annex III, implement four core controls for high-risk uses, add Article 50 disclosures to chatbots and generative content tools, and review vendor contracts for documented provider compliance.

A managing partner at a 70-staff UK accountancy practice forwarded me an article last month with the subject line, “is this real?”. The article said the EU AI Act was now operative and that the recruitment-screening tool the firm had bought eighteen months earlier was high-risk under Annex III. Around 30 per cent of the firm’s clients were EU-based, and the firm was quietly running six AI tools, including ChatGPT, the recruitment screener, a credit-decision API in client onboarding, an AI receptionist, and a document classifier.

He had three questions. Is the firm a provider or a deployer for each tool? What do the deployer obligations actually require? And what is the realistic penalty exposure for a four million pound firm with EU clients? The honest answer to all three starts from the same point: this is now operative law, the test of scope is not where the firm is registered but where the AI system or its output lands, and the work of compliance is procedural rather than technical.

What is the EU AI Act?

The EU AI Act is Regulation (EU) 2024/1689, the world’s first comprehensive law on artificial intelligence. It applies directly across all EU member states and sets binding rules for the design, development, deployment, and use of AI systems with any material connection to the European Union. It works on a risk-based model: the more harm an AI system can do to rights, safety, or economic interests, the more tightly it is regulated.

The Act does not regulate AI in the abstract. It targets specific actors along the AI value chain, providers and deployers being the two roles that matter for UK SMEs. Article 3 defines a provider as a person or firm that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name. A deployer uses an AI system under its authority in the course of professional activity. UK service businesses using third-party AI tools are typically deployers rather than providers, and the obligations differ accordingly.

Why does it matter for UK businesses?

It matters because the Act’s reach is extraterritorial and the operative date has passed. Article 2 captures providers placing systems on the EU market, deployers established in the EU, and deployers outside the EU whose AI system outputs are used in the EU. UK incorporation provides no shield. The 2 August 2026 deadline activated deployer obligations for high-risk systems and Article 50 transparency rules for limited-risk systems, and national competent authorities are enforcing now.

UK legal commentary has been blunt about this. Farrer & Co warn that UK businesses must “tread carefully”, and Trowers & Hamlins state compliance is “unavoidable” for firms trading in the EU or supplying AI into European supply chains. The penalty framework also scales by global turnover, not by EU revenue, which means a UK SME with a small EU client base can still face fines pegged to its full annual turnover. The risk is not theoretical.

Article 99 sets three penalty tiers: up to 35 million euro or 7 per cent of global turnover for Article 5 prohibited practices, deliberately above GDPR’s 4 per cent ceiling; up to 15 million euro or 3 per cent for breaches of high-risk system obligations under Articles 16 and 22 to 26 and the GPAI rules; and up to 7.5 million euro or 1 per cent for procedural failures such as supplying incomplete information to authorities. Article 99 tells regulators to consider SME viability, but the statutory ceiling still applies. For a four million pound firm, a 7 per cent breach means exposure of around 280,000 pounds.
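As illustrative arithmetic only, not a legal calculation, the turnover-based limb of each tier can be sketched in a few lines. The tier names below are my own labels; the percentages are the ones quoted above, and the real fine in any case depends on the regulator's assessment.

```python
# Back-of-envelope exposure under the turnover limb of each Article 99 tier.
# Illustrative only: actual penalties are set case by case by regulators.

TIERS = {
    "prohibited_practice": 0.07,  # Article 5 breaches: up to 7% of global turnover
    "high_risk_breach":    0.03,  # high-risk obligation breaches: up to 3%
    "procedural_failure":  0.01,  # incomplete information to authorities: up to 1%
}

def max_percentage_exposure(global_turnover: float, tier: str) -> float:
    """Upper bound of the turnover-based limb for a given tier."""
    return global_turnover * TIERS[tier]

# The four million pound firm from the worked example above:
for tier in TIERS:
    print(f"{tier}: up to £{max_percentage_exposure(4_000_000, tier):,.0f}")
```

Run for a £4m firm, this reproduces the figures in the text: roughly £280,000 at the top tier, £120,000 at the high-risk tier, and £40,000 for procedural failures. Note the fixed euro amounts (35, 15, and 7.5 million) are the alternative limb and are not modelled here.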

Where will you actually meet it?

You will meet it inside the AI tools your firm has already adopted, often without anyone calling them AI. The eight Annex III high-risk categories cover biometrics, critical infrastructure, education, employment and worker management, access to essential services and benefits, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes. For a UK service SME the live exposure points sit in recruitment, credit and insurance, and anything profiling individuals at scale.

If your firm runs a CV scorer or recruitment screener, a loan-eligibility API, or an underwriting model, that system is high-risk by default and triggers the full deployer obligation set. Beyond high-risk, two other tiers will turn up in the wild. Tier 3 limited-risk applies to chatbots, AI receptionists, and any tool generating synthetic content, and Article 50 requires that users know they are interacting with AI and that AI-generated content is marked as such.

Tier 1 unacceptable practices are banned outright. The list includes workplace and education emotion recognition, biometric categorisation by sensitive attributes, social scoring, and predictive criminal-risk profiling based on personality traits alone. None of these are common in a UK accountancy or marketing practice, but the workplace emotion-recognition ban catches some HR analytics tools that promise mood or engagement scoring from camera feeds. The tools sitting at minimal risk, such as spam filters and AI-enabled video games, face no specific obligations under the Act.

When to act, and when it does not apply

Act now if your firm has any material EU exposure, runs any tool that fits an Annex III category, or operates a chatbot or AI content generator that touches end users. A five-step starter plan can get a one to ten million pound UK SME to a defensible position in three to six months, and the plan is concrete and procedural. The work is not technical; it is governance work, done in sequence and documented as you go.

The five steps:

1. Inventory every AI system in use, including shadow AI, recording what each does, what data it processes, who it affects, and whether the output touches the EU.
2. Classify each system by tier using Article 6 and Annex III, documenting the rationale.
3. For each high-risk system, name a trained human overseer with authority to stop the system, set up monitoring and log retention, run a documented risk assessment, and provide transparency to affected individuals.
4. For limited-risk systems, add Article 50 disclosure language to chatbots, content generators, and deepfake tools.
5. Review vendor contracts to confirm the provider has done conformity assessment, maintains technical documentation, and will report serious incidents, requesting an addendum where the contract is silent.

The Act does not apply, in practical terms, to a purely UK firm with no EU clients running only minimal-risk tools. A local domestic accountancy with no EU exposure, using spam filters and a UK-only document classifier with no profiling, sits outside the active obligation set. There is also a deadline question worth tracking. The European Commission’s Digital Omnibus on AI proposes extending high-risk application to 2 December 2027, and to 2 August 2028 for high-risk systems embedded in regulated products. As of May 2026 those amendments are not adopted, the operative baseline remains 2 August 2026, and SMEs should not bet the firm on a proposed extension.

Several adjacent ideas in AI governance connect directly to the Act. Data residency sits underneath Article 26 deployer obligations, because if your vendor processes EU resident data outside the EU you have a Chapter V transfer question alongside the AI Act question. The Article 22 human-review rule under UK GDPR, the DPIA framework, and explainable AI all sit alongside the Act’s transparency and oversight requirements rather than replacing them.

An audit trail with log retention is the operational record the Act expects you to keep for high-risk systems, and it is the first thing a regulator will ask for in an investigation. The UK regulatory frame is sector-led rather than horizontal: the 2023 White Paper distributes responsibility across the ICO, FCA, MHRA, CMA, and EHRC, and the Data (Use and Access) Act 2025 adjusts automated decision-making rules in UK law. None of that displaces the EU AI Act for UK firms with EU exposure, and many UK firms now treat the EU baseline as the floor because it is more prescriptive. If you want a single governing standard for cross-border AI use in 2026, the EU AI Act is it.

If your firm is sitting where the accountancy practice was, six AI tools in use and around 30 per cent EU exposure, the path forward is the five-step starter plan run alongside qualified legal advice. Book a conversation if you want a second pair of eyes on the inventory or the classification work.

Sources

EU Artificial Intelligence Act (2024). Regulation (EU) 2024/1689, the canonical text and high-level summary. https://artificialintelligenceact.eu/the-act/

EU Artificial Intelligence Act (2024). Article 5, prohibited AI practices. https://artificialintelligenceact.eu/article/5/

EU Artificial Intelligence Act (2024). Annex III, the eight high-risk use case categories. https://artificialintelligenceact.eu/annex/3/

EU Artificial Intelligence Act (2024). Article 26, deployer obligations for high-risk systems. https://artificialintelligenceact.eu/article/26/

EU Artificial Intelligence Act (2024). Article 50, transparency obligations for limited-risk systems. https://artificialintelligenceact.eu/article/50/

EU Artificial Intelligence Act (2024). Article 99, the three-tier penalty framework. https://artificialintelligenceact.eu/article/99/

Farrer & Co (2024). The EU AI Act, what does it mean for UK organisations that use or provide AI systems. UK legal commentary on extraterritorial reach. https://www.farrer.co.uk/news-and-insights/the-eu-ai-act--what-does-it-mean-for-uk-organisations-that-use-or-provide-ai-systems/

Trowers & Hamlins (2026). Navigating the AI rulebook, UK regulation, global trends and what is next. UK firm commentary on the unavoidability of EU AI Act compliance for cross-border firms. https://www.trowers.com/insights/2026/january/navigating-the-ai-rulebook-uk-regulation-global-trends-and-whats-next

IAPP (2026). EU AI Act deployer evidence gaps SMEs will miss before 2 August 2026, on the operational gaps in SME compliance. https://iapp.org/news/a/eu-ai-act-deployer-evidence-gaps-smes-will-miss-before-2-aug-2026

Crowell & Moring (2025). EU AI Act, GDPR and digital laws changes proposed, on the Digital Omnibus deadline-extension proposals. https://www.crowell.com/en/insights/client-alerts/eu-ai-act-gdpr-and-digital-laws-changes-proposed

Frequently asked questions

We are a UK firm and we do not have an EU office. Does the EU AI Act apply to us?

Probably yes. Article 2 captures providers placing AI systems on the EU market and deployers outside the EU whose AI system outputs are used in the EU. If you sell AI-enabled services to EU customers, run a tool whose output is consumed by EU residents, or process EU resident data through an AI system, you are in scope. UK firms with any cross-border client base should assume the Act applies and design compliance from there.

We only use ChatGPT and a few off-the-shelf vendor tools. Are we still on the hook?

Yes, as a deployer. Deployer obligations under Article 26 are real and non-delegable. You must use the system per the provider's instructions, name a trained human overseer with authority to stop the system, retain logs, monitor for risks, inform affected individuals, and report serious incidents. Your vendor's compliance does not absolve yours; the responsibility is joint.

What is the realistic penalty exposure for a four million pound turnover firm?

At the top tier, 7 per cent of global turnover for an Article 5 prohibited practice translates to about 280,000 pounds. At the high-risk obligation tier, 3 per cent gets you to about 120,000 pounds. Article 99 tells regulators to consider SME economic viability and apply the lower of the fixed amount or the percentage, but the ceiling is still material money for a typical UK service SME.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
