A managing partner at a 70-staff UK accountancy practice forwarded me an article last month with the subject line, “is this real?”. The article said the EU AI Act was now operative and the recruitment-screening tool the firm had bought eighteen months earlier was high-risk under Annex III. Around 30 per cent of the firm’s clients were EU-based. The firm was quietly running six AI tools, including ChatGPT, the recruitment screener, a credit-decision API in client onboarding, an AI receptionist, and a document classifier.
He had three questions. Is the firm a provider or a deployer for each tool? What do the deployer obligations actually require? What is the realistic penalty exposure for a four million pound firm with EU clients? The honest answer to all three is that this is now operative law, that the test of scope is not where the firm is registered but where the AI system or its output lands, and that the work of compliance is procedural rather than technical.
What is the EU AI Act?
The EU AI Act is Regulation (EU) 2024/1689, the world’s first comprehensive law on artificial intelligence. It applies directly across all EU member states and sets binding rules for the design, development, deployment, and use of AI systems with any material connection to the European Union. It works on a risk-based model: the more harm an AI system can do to rights, safety, or economic interests, the more tightly it is regulated.
The Act does not regulate AI in the abstract. It targets specific actors along the AI value chain, and providers and deployers are the two roles that matter for UK SMEs. Article 3 defines a provider as a person or firm that develops an AI system, or has one developed, and places it on the market or puts it into service under its own name. A deployer uses an AI system under its authority in the course of professional activity. UK service businesses using third-party AI tools are typically deployers rather than providers, and the obligations differ accordingly.
Why does it matter for UK businesses?
It matters because the Act’s reach is extraterritorial and the operative date has passed. Article 2 captures providers placing systems on the EU market, deployers established in the EU, and deployers outside the EU whose AI system outputs are used in the EU. UK incorporation provides no shield. The 2 August 2026 deadline activated deployer obligations for high-risk systems and Article 50 transparency rules for limited-risk systems, and national competent authorities are enforcing now.
UK legal commentary has been blunt about this. Farrer & Co warn that UK businesses must “tread carefully”, and Trowers & Hamlins state compliance is “unavoidable” for firms trading in the EU or supplying AI into European supply chains. The penalty framework also scales by global turnover, not by EU revenue, which means a UK SME with a small EU client base can still face fines pegged to its full annual turnover. The risk is not theoretical.
Article 99 sets three penalty tiers, each expressed as a fixed sum or a share of global turnover. Up to 35 million euro or 7 per cent of global turnover for Article 5 prohibited practices, deliberately above GDPR’s 4 per cent ceiling. Up to 15 million euro or 3 per cent for breaches of high-risk system obligations under Articles 16 and 22 to 26 and the GPAI rules. Up to 7.5 million euro or 1 per cent for procedural failures such as supplying incomplete information to authorities. Article 99 tells regulators to consider SME viability, but the exposure is still material: for a four million pound firm, a fine at the 7 per cent tier is around 280,000 pounds.
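As a rough illustration of how those ceilings combine, here is a short Python sketch. The SME rule applied below, taking the lower of the fixed cap and the turnover percentage, reflects Article 99(6) as I read it, and the pound-to-euro conversion is a round-number assumption; treat actual exposure figures as a question for qualified legal advice, not a calculator.

```python
# Illustrative sketch only: penalty ceilings under Article 99 of the EU AI Act.
# The tier amounts are from the text above; the SME lower-of-two-limbs rule
# (Article 99(6)) and the GBP/EUR conversion are simplifying assumptions.

def penalty_ceiling(global_turnover_eur: float, fixed_cap_eur: float,
                    pct_of_turnover: float, is_sme: bool) -> float:
    """Maximum fine for one tier: SMEs take the lower of the two limbs,
    larger undertakings the higher."""
    turnover_limb = global_turnover_eur * pct_of_turnover
    if is_sme:
        return min(fixed_cap_eur, turnover_limb)
    return max(fixed_cap_eur, turnover_limb)

# A firm with roughly 4.7m EUR turnover (about 4m GBP) at the 7 per cent tier:
exposure = penalty_ceiling(4_700_000, 35_000_000, 0.07, is_sme=True)
print(f"{exposure:,.0f} EUR")  # roughly 329,000 EUR, i.e. around 280,000 GBP
```

The point the sketch makes is the one in the text: because the SME limb tracks global turnover rather than EU revenue, a small EU client base does not shrink the ceiling.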
Where will you actually meet it?
You will meet it inside the AI tools your firm has already adopted, often without anyone calling them AI. The eight Annex III high-risk categories cover biometrics, critical infrastructure, education, employment and worker management, access to essential services and benefits, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes. For a UK service SME the live exposure points sit in recruitment, credit and insurance, and anything profiling individuals at scale.
If your firm runs a CV scorer or recruitment screener, a loan-eligibility API, or an underwriting model, that system is high-risk by default and triggers the full deployer obligation set. Beyond high-risk, two other tiers will turn up in the wild. Tier 3 limited-risk applies to chatbots, AI receptionists, and any tool generating synthetic content, and Article 50 requires that users know they are interacting with AI and that AI-generated content is marked as such.
Tier 1 unacceptable practices are banned outright. The list includes workplace and education emotion recognition, biometric categorisation by sensitive attributes, social scoring, and predictive criminal-risk profiling based on personality traits alone. None of these are common in a UK accountancy or marketing practice, but the workplace emotion-recognition ban catches some HR analytics tools that promise mood or engagement scoring from camera feeds. The tools sitting at minimal risk, such as spam filters and AI-enabled video games, face no specific obligations under the Act.
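The tier triage described above can be sketched as a simple lookup. The category strings below are illustrative simplifications of the Act's language, not legal definitions; real classification needs a documented analysis against Article 5, Article 6 and Annex III, and Article 50, not a table.

```python
# Illustrative sketch of the four-tier triage described above. Categories are
# simplified paraphrases of the Act; this is a mental model, not legal advice.

PROHIBITED = {"workplace emotion recognition", "social scoring",
              "biometric categorisation by sensitive attributes"}
HIGH_RISK = {"recruitment screening", "credit scoring",
             "insurance underwriting", "education assessment"}
LIMITED_RISK = {"chatbot", "ai receptionist", "synthetic content generation"}

def classify(use_case: str) -> str:
    """Return the risk tier for a use case; unlisted uses default to minimal."""
    u = use_case.lower()
    if u in PROHIBITED:
        return "prohibited (Article 5)"
    if u in HIGH_RISK:
        return "high-risk (Annex III)"
    if u in LIMITED_RISK:
        return "limited-risk (Article 50 transparency)"
    return "minimal risk (no specific obligations)"

print(classify("recruitment screening"))  # high-risk (Annex III)
print(classify("spam filter"))            # minimal risk (no specific obligations)
```

Notice that the default branch is minimal risk: that is exactly why shadow AI is dangerous, because tools nobody has inventoried get classified by omission.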
When to act, and when it does not apply
Act now if your firm has any material EU exposure, runs any tool that fits an Annex III category, or operates a chatbot or AI content generator that touches end users. A five-step starter plan can take a one to ten million pound UK SME to a defensible position in three to six months. The work is not technical; it is governance work, done in sequence and documented as you go.
The five steps. First, inventory every AI system in use including shadow AI, naming what each does, what data it processes, who it affects, and whether the output touches the EU. Second, classify each system by tier using Article 6 and Annex III, documenting the rationale. Third, for each high-risk system, name a trained human overseer with authority to stop the system, set up monitoring and log retention, run a documented risk assessment, and provide transparency to affected individuals. Fourth, for limited-risk systems add Article 50 disclosure language to chatbots, content generators, and deepfake tools. Fifth, review vendor contracts to confirm the provider has done conformity assessment, maintains technical documentation, and will report serious incidents, requesting an addendum where the contract is silent.
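One way to make step one concrete is to hold the inventory as structured records rather than free text, so steps two and three have somewhere to land. A minimal sketch follows; the field names are my own assumptions, not terms from the Act.

```python
# A minimal sketch of a step-one inventory record. Field names are assumptions
# for illustration; the substance is that each entry captures what the system
# does, whose data it touches, whether output lands in the EU, and the
# documented rationale for its tier classification.

from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    purpose: str               # what the system does
    data_processed: str        # categories of data involved
    affected_persons: str      # who the output affects
    output_reaches_eu: bool    # triggers extraterritorial scope under Article 2
    tier: str                  # classification under Article 6 / Annex III
    rationale: str             # documented reasoning for the tier
    human_overseer: str = ""   # step three: required for high-risk systems
    vendor_docs_confirmed: bool = False  # step five: conformity assessment etc.

inventory = [
    AISystemRecord("CV screener", "ranks job applicants",
                   "CVs, employment history", "job candidates", True,
                   "high-risk", "Annex III: employment",
                   human_overseer="HR manager"),
    AISystemRecord("Spam filter", "filters inbound email", "email metadata",
                   "staff", False, "minimal", "no Annex III category applies"),
]

high_risk = [r.name for r in inventory if r.tier == "high-risk"]
print(high_risk)  # ['CV screener']
```

The payoff is that the list a regulator would ask for first, which high-risk systems you run and who oversees them, falls out of the records instead of being reconstructed under pressure.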
The Act does not apply, in practical terms, to a purely UK firm with no EU clients running only minimal-risk tools. A local domestic accountancy with no EU exposure, using spam filters and a UK-only document classifier with no profiling, sits outside the active obligation set. There is also a deadline question worth tracking. The European Commission’s Digital Omnibus on AI proposes extending high-risk application to 2 December 2027, and to 2 August 2028 for high-risk systems embedded in regulated products. As of May 2026 those amendments are not adopted, the operative baseline remains 2 August 2026, and SMEs should not bet the firm on a proposed extension.
Related concepts
Several adjacent ideas in AI governance connect directly to the Act. Data residency sits underneath Article 26 deployer obligations, because if your vendor processes EU resident data outside the EU you have a Chapter V transfer question alongside the AI Act question. The Article 22 human-review rule under UK GDPR, the DPIA framework, and explainable AI all sit alongside the Act’s transparency and oversight requirements rather than replacing them.
Audit trails and log retention are the operational records the Act expects you to keep for high-risk systems, and they are the first thing a regulator will ask for in an investigation. The UK regulatory frame is sector-led rather than horizontal. The 2023 White Paper distributes responsibility across the ICO, FCA, MHRA, CMA, and EHRC, and the Data (Use and Access) Act 2025 adjusts automated decision-making rules in UK law. None of that displaces the EU AI Act for UK firms with EU exposure, and many UK firms now treat the EU baseline as the floor because it is more prescriptive. If you want a single governing standard for cross-border AI use in 2026, the EU AI Act is it.
If your firm is sitting where the accountancy practice was, six AI tools in use and around 30 per cent EU exposure, the path forward is the five-step starter plan run alongside qualified legal advice. Book a conversation if you want a second pair of eyes on the inventory or the classification work.



