The MD of a 19-person specialist IT firm is in conversation with his commercial lawyer about a renewed cyber insurance policy. The lawyer asks: “What’s your AI governance position?” The MD shrugs. “We’re too small for that to matter.” The lawyer pauses. “When did you last check the ICO enforcement register?” The MD has not. The lawyer pulls up two enforcement notices issued in the last 18 months against organisations under 50 staff. Neither is for AI specifically. Both are for governance failures the MD’s firm could replicate today.
The exchange names an assumption most SME owners are still operating on: that regulators target large organisations with deep pockets and that small firms fly under the radar. The empirical record points the other way. The ICO, the FCA, the FTC, and the EU AI Act enforcement schedule all reach SMEs that lack baseline governance, including SMEs that assumed they were too small to matter.
What does the ICO actually do at SME scale?
The ICO publishes its enforcement register openly. Most public attention focuses on large fines (British Airways at £20 million, Marriott at £18.4 million), which is partly why the size misconception persists. Underneath the headline cases, the register lists fines, enforcement notices, and warnings issued to small and medium businesses year after year. The pattern is consistent: governance failures that exposed customer data, automated decision-making without transparency, retention beyond lawful purpose.
For AI specifically, the ICO has published the AI risk toolkit and detailed guidance on AI and personal data. Article 35 DPIA triggers do not have a size threshold. Article 22 automated-decision rules apply equally to a 14-person firm and a 14,000-person firm. The ICO has signalled inspection priorities around AI in children’s data and around profiling generally; both contexts touch SMEs in digital marketing, education, and online services.
Where does the FCA fit for regulated SMEs?
The FCA’s position on AI in regulated firms is that AI does not reduce the firm’s obligations. SYSC requirements apply. Model risk management expectations apply. Consumer Duty applies. Material outsourcing to AI vendors triggers SUP notification. None of these has a size threshold; a 12-person FCA-regulated firm faces the same expectations as a 1,200-person firm, applied proportionately to the firm’s circumstances.
For a small financial firm, this means that adopting AI for AML triage, suitability analysis, or credit decisions cannot be done quietly. The compliance question has to be asked, the documentation has to exist, and the FCA supervisor who arrives 18 months later expects to see it. Fractional compliance support is the typical SME-scale answer if the firm does not have an internal compliance function.
Why does the FTC matter to a UK firm?
For any UK firm with US customers, the FTC enforcement appetite around AI is real. Section 5 of the FTC Act covers unfair or deceptive practices. The FTC has signalled willingness to pursue organisations making unsubstantiated AI claims (a tool claiming “99 percent accuracy” without evidence) and AI systems producing discriminatory outcomes (an AI hiring tool systematically disadvantaging a protected class). UK firms selling services to US customers are in scope.
The practical implication for a UK SME with even a small US customer base is to audit AI claims in marketing copy, in sales decks, and in client-facing materials. Claims about AI capabilities should be substantiated. AI systems making decisions about US individuals should be tested for bias before deployment.
What is the EU AI Act exposure for a UK SME?
The EU AI Act entered into force in August 2024, with obligations phasing in through 2025-2026. The act has explicit extraterritorial reach: UK firms that place AI systems on the EU market, or whose AI systems produce output used in the EU, fall under the act regardless of where the firm is based.
Three exposure points hit UK SMEs. Limited-risk transparency obligations: customer-facing chatbots that engage EU users must disclose AI involvement, and AI-generated content that could mislead must be labelled as such. High-risk classifications: AI systems used in HR (CV screening, interview scoring), credit decisions, healthcare access, or eligibility for services sold to EU customers carry documentation, testing, and human-oversight requirements. Prohibited categories: AI for mass surveillance or exploitation of vulnerable populations is banned regardless of where the AI is deployed.
The 2026 inspection question for a UK SME with EU customers will be: have you assessed your AI systems against the act’s risk tiers, and have you implemented the corresponding transparency or compliance measures?
What does the Samsung leak teach an SME?
The Samsung 2023 ChatGPT leak is often cited as a large-company cautionary tale. The lesson for SMEs is sharper than the size difference suggests. Samsung employees used free ChatGPT for confidential semiconductor work. The free tier trains on user inputs, so the proprietary information became training data and potentially accessible to other users. The governance gaps were ordinary: no policy on consumer AI tools, no data classification, no vetted commercial alternative.
Every part of that failure is reproducible at SME scale. A 14-person law firm with paralegals using free ChatGPT for client matter work is one bad day from an SME version of the Samsung incident. The consequences scale down (smaller fines, smaller market reaction), but the professional and contractual exposure with the firm’s clients is identical. Confidentiality obligations under the SRA, professional indemnity insurance terms, and client engagement letters all engage in the same way.
What three governance steps cover most of the exposure?
First, a two-to-three-page policy with named allowed and prohibited tool categories. Second, a four-tier data classification rule (Public, Internal, Confidential, Restricted) that keeps confidential and Article 9 data out of unvetted tools. Third, a one-page risk register with monthly review. Three documents, under five pages in total, maintained by the MD and the operations lead.
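For a firm that wants the classification rule to be mechanical rather than a judgment call, it can be written down as a simple lookup. This is an illustrative sketch only: the tier names follow the policy above, but the tool-category labels ("free_consumer", "vetted_commercial") are hypothetical placeholders; a real policy would name the actual tools in use.

```python
# Hypothetical sketch: the four-tier data classification expressed as a
# lookup from data class to the AI tool categories the policy permits.
ALLOWED_TOOLS = {
    "Public":       {"free_consumer", "vetted_commercial"},
    "Internal":     {"vetted_commercial"},
    "Confidential": {"vetted_commercial"},  # only tools under a signed DPA
    "Restricted":   set(),                  # no AI tools at all
}

def tool_permitted(data_class: str, tool_category: str) -> bool:
    """Return True if the policy allows this tool category for this data class."""
    return tool_category in ALLOWED_TOOLS.get(data_class, set())

# The Samsung-style failure mode: confidential material in a free consumer tool.
print(tool_permitted("Confidential", "free_consumer"))  # False
```

The point of the sketch is not the code but the shape of the rule: every (data class, tool) pairing resolves to an explicit yes or no that the firm has decided in advance, which is exactly what an inspector or insurer asks to see.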
None of the three requires enterprise infrastructure. None of the three requires specialised software. All three materially reduce the risk of a bad regulator encounter relative to no governance, because they convert ambient unmanaged risk into named decisions that the firm has documented and acted on.
If you have been operating on the assumption that regulators do not reach this far down the size scale, and a conversation with your commercial lawyer or insurance broker has just unsettled it, book a conversation.