The 'we're too small to be regulated' mistake on AI

[Image: an MD and a commercial lawyer at a meeting table, the lawyer pointing at a laptop screen, two coffee cups on the table]
TL;DR

The 'we're too small to be regulated' assumption is a structural error, not a defensible reading of the regulatory record. The ICO has issued fines and enforcement notices to small and medium businesses. The FCA names AI governance as an inspection priority for regulated firms regardless of size. The FTC pursues unsubstantiated AI claims, including against UK firms with US customers. The EU AI Act has extraterritorial reach that catches UK SMEs with any EU audience. Three governance steps (policy, classification, register) cover most of the exposure.

Key takeaways

- The ICO enforces UK GDPR against SMEs as well as BA-scale organisations. Article 35 DPIA triggers do not have a size threshold.
- The FCA names AI governance as an inspection priority for regulated SMEs. Material outsourcing of AI to a vendor triggers SUP notification.
- The FTC pursues unsubstantiated AI claims and AI-driven discriminatory outcomes for any organisation serving US customers, including UK firms.
- The EU AI Act entered into force in August 2024. Limited-risk transparency obligations apply to UK chatbots and AI-generated content seen by EU users. Enforcement phases in over 2025-2026.
- The Samsung 2023 ChatGPT leak is the SME-scale parallel: the failure mode (no policy, no classification, no vendor vetting) is identical; only the consequences scale down, and only modestly.
- Three governance steps cover most of the exposure: a 2-3 page policy with named allowed and forbidden categories, a four-tier data classification rule, and a one-page risk register with monthly review.

Picture the MD of a 19-person specialist IT firm in conversation with his commercial lawyer about a renewed cyber insurance policy. The lawyer asks: “What’s your AI governance position?” The MD shrugs. “We’re too small for that to matter.” The lawyer pauses. “When did you last check the ICO enforcement register?” The MD has not. The lawyer pulls up two enforcement notices issued in the last 18 months against organisations with fewer than 50 staff. Neither is for AI specifically. Both are for governance failures the MD’s firm could replicate today.

The exchange exposes an assumption most SME owners are still operating on: that regulators target large organisations with deep pockets and that small firms fly under the radar. The empirical record points the other way. The ICO, the FCA, the FTC, and the EU AI Act enforcement schedule all reach SMEs that lack baseline governance, including SMEs that thought they were too small to be noticed.

What does the ICO actually do at SME scale?

The ICO publishes its enforcement register openly. Most public attention focuses on the large fines (British Airways at £20 million, Marriott at £18.4 million), which is partly why the size misconception persists. Underneath the headline cases, the register lists fines, enforcement notices, and warnings issued to small and medium businesses year after year. The pattern is consistent: governance failures that exposed customer data, automated decision-making without transparency, retention beyond lawful purpose.

For AI specifically, the ICO has published the AI risk toolkit and detailed guidance on AI and personal data. Article 35 DPIA triggers do not have a size threshold. Article 22 automated-decision rules apply equally to a 14-person firm and a 14,000-person firm. The ICO has signalled inspection priorities around AI in children’s data and around profiling generally; both contexts touch SMEs in digital marketing, education, and online services.

Where does the FCA fit for regulated SMEs?

The FCA’s position on AI in regulated firms is that AI does not reduce the firm’s obligations. SYSC requirements apply. Model risk management expectations apply. Consumer Duty applies. Material outsourcing to AI vendors triggers SUP notification. None of these has a size threshold; a 12-person FCA-regulated firm faces the same expectations as a 1,200-person firm, applied proportionately to the firm’s circumstances.

For a small financial firm, this means that adopting AI for AML triage, suitability analysis, or credit decisions cannot be done quietly. The compliance question has to be asked, the documentation has to exist, and the FCA inspector who arrives 18 months later expects to see it. Fractional compliance support is the typical SME-scale answer if the firm does not have an internal compliance function.

Why does the FTC matter to a UK firm?

For any UK firm with US customers, the FTC’s enforcement appetite around AI is real. Section 5 of the FTC Act covers unfair or deceptive practices. The FTC has signalled willingness to pursue organisations making unsubstantiated AI claims (a tool claiming “99 percent accuracy” without evidence) and AI systems producing discriminatory outcomes (an AI hiring tool systematically disadvantaging a protected class). UK firms selling services to US customers are in scope.

The practical implication for a UK SME with even a small US customer base is to audit AI claims in marketing copy, in sales decks, and in client-facing materials. Claims about AI capabilities should be substantiated. AI systems making decisions about US individuals should be tested for bias before deployment.

What is the EU AI Act exposure for a UK SME?

The EU AI Act entered into force in August 2024, with obligations phasing in through 2025-2026. The act has explicit extraterritorial reach: UK firms whose AI systems process EU users’ data, or whose AI-generated content is seen by EU users, fall under the act regardless of where the firm is based.

Three exposure points hit UK SMEs.

- Limited-risk transparency obligations: customer-facing chatbots that engage EU users must disclose AI involvement, and AI-generated content that could mislead must be labelled.
- High-risk classifications: AI systems used in HR (CV screening, interview scoring), credit decisions, healthcare access, or eligibility for services sold to EU customers carry documentation, testing, and human oversight requirements.
- Prohibited categories: AI for mass surveillance or exploitation of vulnerable populations is prohibited regardless of where the AI is deployed.

The 2026 inspection question for a UK SME with EU customers will be: have you assessed your AI systems against the act’s risk tiers, and have you implemented the corresponding transparency or compliance measures?

What does the Samsung leak teach an SME?

The Samsung 2023 ChatGPT leak is often cited as a large-company cautionary tale. The lesson for SMEs is sharper than the size difference suggests. Samsung employees used free ChatGPT for confidential semiconductor work. The free tier can use inputs for training, so the proprietary information left Samsung’s control and could surface as training data accessible to other users. The governance gaps were ordinary: no policy on consumer AI tools, no data classification, no vetted commercial alternative.

Every part of that failure is reproducible at SME scale. A 14-person law firm with paralegals using free ChatGPT for client matter work is one bad day from an SME version of the Samsung incident. The consequences scale down (smaller fines, smaller market reaction), but the professional and contractual exposure with the firm’s clients is identical. Confidentiality obligations under the SRA, professional indemnity insurance, and client engagement letter terms all engage in the same way.

What three governance steps cover most of the exposure?

Three documents, maintained by the MD and the operations lead, with a total length under five pages:

- A 2-3 page policy with named allowed and forbidden categories.
- A four-tier data classification rule (Public, Internal, Confidential, Restricted) that keeps confidential and Article 9 data out of unvetted tools.
- A one-page risk register with monthly review.
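For a firm that wants the classification rule to be more than a document, a minimal sketch of how it might be encoded is below. The four tier names come from the rule above; the tool names, the ceiling mapping, and the is_submission_allowed helper are illustrative assumptions, not a prescribed implementation.

```python
# Illustrative sketch only: one way a small firm might encode its four-tier
# classification rule so a simple check can run before anything is pasted
# into an external AI service. Tool names and thresholds are hypothetical.

from enum import Enum


class Tier(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3
    RESTRICTED = 4


# Hypothetical mapping of approved tools to the highest tier they may receive.
# Free consumer tools are capped at Public; a vetted, contracted enterprise
# deployment might be approved up to Confidential after vendor due diligence.
TOOL_CEILING = {
    "free_consumer_chatbot": Tier.PUBLIC,
    "vetted_enterprise_ai": Tier.CONFIDENTIAL,
}


def is_submission_allowed(tool: str, data_tier: Tier) -> bool:
    """Return True only if the tool is approved for data at this tier."""
    ceiling = TOOL_CEILING.get(tool)
    if ceiling is None:
        return False  # unknown tools are forbidden by default
    return data_tier.value <= ceiling.value


if __name__ == "__main__":
    # Client matter notes classified as Confidential: blocked from the free
    # tool, permitted in the vetted deployment.
    print(is_submission_allowed("free_consumer_chatbot", Tier.CONFIDENTIAL))  # False
    print(is_submission_allowed("vetted_enterprise_ai", Tier.CONFIDENTIAL))   # True
```

The design point is simply that unknown tools default to forbidden, which mirrors the policy’s named allowed and forbidden categories.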

None of the three requires enterprise infrastructure. None of the three requires specialised software. All three reduce the regulator-encounter risk by roughly an order of magnitude relative to no governance, because they convert ambient unmanaged risk into named decisions that the firm has documented and acted on.

If you have been operating on the assumption that the regulators do not reach this far down the size scale, and the conversation with your commercial lawyer or insurance broker has just unsettled that, book a conversation.

Sources

- ICO enforcement register. https://ico.org.uk/action-weve-taken/enforcement/
- ICO AI guidance hub. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
- FCA AI publication. https://www.fca.org.uk/publication/research/research-paper-machine-learning-uk-financial-services.pdf
- FTC AI claims guidance. https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check
- EU AI Act overview. https://artificialintelligenceact.eu/
- EU AI Act Article 50 transparency. https://artificialintelligenceact.eu/article/50/

Frequently asked questions

Does the ICO actually enforce against SMEs?

Yes. The published ICO enforcement register includes fines, enforcement notices, and warnings to small and medium businesses. The most-publicised cases (BA, Marriott) involve large firms because of the headline numbers, but the smaller cases happen routinely. Article 35 DPIA triggers and Article 22 automated-decision rules do not have a size threshold.

Why does the EU AI Act matter to a UK SME?

Extraterritorial reach. Limited-risk transparency obligations apply if any EU users see your AI system. UK firms running customer-facing chatbots that engage EU users must disclose AI involvement. UK firms generating AI imagery seen by EU customers may need to label it. UK firms selling AI-driven HR or credit tools to EU customers fall into high-risk classifications. Enforcement is starting 2025-2026.

What did the Samsung 2023 leak actually show?

Samsung employees used free ChatGPT to handle confidential semiconductor design work. ChatGPT's free tier can use inputs for training, so the proprietary information left the company's control and could become part of the training data. The failure mode (no policy forbidding free public tools for confidential work, no data classification, no paid commercial alternative provided) is identical to what an SME does by default. Consequences scale down: smaller fines, smaller market reaction, but the same professional and contractual exposure with clients.

What three governance steps cover most of the exposure?

A 2-3 page policy with named allowed and forbidden categories. A four-tier data classification rule (Public, Internal, Confidential, Restricted) that keeps confidential and Article 9 data out of unvetted tools. A one-page risk register with monthly review. None of the three requires enterprise infrastructure. All three reduce the regulator-encounter risk by an order of magnitude.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
