When you legally need a DPIA for AI, with five worked examples

TL;DR

UK GDPR Article 35 makes a Data Protection Impact Assessment legally required when processing poses a high risk to rights and freedoms. The ICO's published guidance treats systematic AI processing of personal data as a canonical Article 35 trigger. Most SMEs deploying AI for customer or client work have triggered the requirement without realising it. Five worked examples (law firm, healthcare, e-commerce, manufacturing, brainstorming use) show where the line falls.

Key takeaways

- UK GDPR Article 35 requires a DPIA whenever processing is "likely to result in a high risk to the rights and freedoms of natural persons". The ICO position is that systematic AI processing of personal data is often in this category.
- The SME-scale DPIA fits 4-6 pages, uses the ICO template, gets signed by the MD, and is reviewed annually or when the deployment changes materially.
- The decision rule: three questions. Does the AI process personal data? Is the processing systematic? Does it pose high risk (large-scale profiling, special category data, automated decisions, vulnerable groups)? Two yeses and a probable third trigger Article 35.
- Failing to complete a required DPIA is itself a UK GDPR breach. Practical exposure if a data incident later surfaces and the DPIA was never done.
- Five worked examples: law firm with client matter (DPIA required); healthcare with clinical documentation (DPIA required, Article 9 overlay); e-commerce personalisation (DPIA likely required at scale); manufacturing summarising internal docs (DPIA not required, no personal data); brainstorming with no personal data input (DPIA not required).

The MD of a 16-person professional services firm is six months into the rollout of an AI-assisted client management tool that ingests inbound emails, schedules calls, and drafts client follow-ups. The firm’s compliance adviser, brought in for an unrelated audit, asks: “Did you run a DPIA before deploying this?” The MD blinks. He has heard the term. He is uncertain whether he was supposed to do one. The compliance adviser explains: yes, almost certainly, and the firm has been operating without one for six months.

This is one of the most common SME governance gaps on AI. The Data Protection Impact Assessment is a legal requirement under UK GDPR Article 35 for many AI deployments, and most SME owners do not realise their deployment triggered it. The good news is that the SME-scale version of the document fits 4-6 pages and uses a free ICO template. Running it now is much cheaper than discovering the gap during an incident.

UK GDPR Article 35 requires a DPIA when processing is “likely to result in a high risk to the rights and freedoms of natural persons”. The ICO has published guidance and a list of processing types presumed to be high-risk; systematic AI processing of personal data appears in those categories. The Article 35 obligation has no size threshold; a 12-person firm and a 12,000-person firm face the same requirement when processing meets the criteria.

The ICO position has been stable since the AI guidance updates in 2023-2024. Where AI processing is systematic and involves personal data, expect a DPIA to be needed. Where the processing involves special category data under Article 9, the DPIA requirement is effectively automatic.

What does a DPIA actually contain?

Four sections. A description of the processing (what AI tool, what data, what purpose, who is involved). An assessment of necessity and proportionality (why the AI is the right means). An identification of risks to individuals (what could go wrong, how severe). A list of mitigation measures (what controls reduce each risk, who owns each control). The ICO template covers all four; the SME version fills 4-6 pages.

The MD signs the document. It lives alongside the AI policy and the risk register on the firm’s shared drive. Reviewed annually, or sooner if the deployment changes materially. Filed permanently as part of the firm’s UK GDPR audit trail.
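For firms that want to track this in something more structured than a folder on the shared drive, the four sections and the lifecycle fields can be captured as a simple record. The sketch below is illustrative only; the field names are this article's shorthand, not the ICO template's own wording, and the example entry is invented for the opening scenario.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class DpiaRecord:
    """Minimal sketch of the four ICO-template sections plus lifecycle fields.

    Field names are illustrative, not the ICO's wording.
    """
    deployment: str                      # which AI tool or deployment this covers
    processing_description: str          # what tool, what data, what purpose, who is involved
    necessity_and_proportionality: str   # why AI is the right means for this purpose
    risks_to_individuals: list[str]      # what could go wrong, and how severe
    mitigations: dict[str, str]          # mitigation measure -> named owner
    signed_off_by: str                   # the MD signs the document
    next_review: date                    # annual, or sooner if the deployment changes materially


# Hypothetical entry for the client management tool from the opening scenario.
record = DpiaRecord(
    deployment="AI-assisted client management tool",
    processing_description="Ingests inbound client emails, schedules calls, drafts follow-ups",
    necessity_and_proportionality="Reduces admin load on a 16-person team; manual handling missed follow-ups",
    risks_to_individuals=["confidentiality breach", "inaccurate AI-drafted correspondence"],
    mitigations={"human review of all drafts": "office manager", "vendor DPA, encryption, training disabled": "MD"},
    signed_off_by="MD",
    next_review=date(2026, 6, 1),
)
```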

What is the decision rule for whether you need one?

Three questions. Does the AI system process personal data of any kind? Is the processing systematic, meaning routine or repeated rather than one-off? Does the processing pose high risk to individuals, where high risk includes large-scale profiling, special category data, automated decisions with legal effects, or processing that affects vulnerable groups?

Two yeses and a probable third trigger Article 35. When in doubt, run one anyway. A 4-6 page document costs the firm a few hours of work; a regulatory finding that the firm should have done a DPIA and did not is a far more expensive position.
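For readers who like to see the screening logic written down explicitly, here is a minimal sketch in Python. The function and its encoding of the three questions are this article's own illustration of the decision rule, not an ICO-published test.

```python
def dpia_indicated(processes_personal_data: bool,
                   is_systematic: bool,
                   high_risk_probable: bool) -> bool:
    """Illustrative encoding of the three-question screen described above.

    Q1: does the AI process personal data of any kind?
    Q2: is the processing systematic (routine or repeated, not one-off)?
    Q3: is high risk probable (large-scale profiling, special category data,
        automated decisions with legal effects, vulnerable groups)?

    Two clear yeses plus a probable third trigger Article 35. When in doubt,
    the cheaper path is to run the DPIA anyway.
    """
    return processes_personal_data and is_systematic and high_risk_probable


# The five worked examples below, screened with this rule:
print(dpia_indicated(True, True, True))     # law firm client matter tool      -> DPIA required
print(dpia_indicated(True, True, True))     # healthcare clinical notes        -> DPIA required
print(dpia_indicated(True, True, True))     # e-commerce personalisation       -> DPIA likely required
print(dpia_indicated(False, True, False))   # manufacturing internal docs      -> not required
print(dpia_indicated(False, False, False))  # brainstorming, no personal data  -> not required
```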

Worked example one: a law firm using AI to process client matter information

The firm has rolled out a contract-review AI that ingests client documents and extracts key terms. The data is personal (clients, third parties named in files) and carries professional confidentiality. Processing is systematic. The risk profile includes confidentiality breach, AI hallucination affecting client advice, and Article 22 exposure if AI output influences case strategy. DPIA required.

The completed DPIA documents the lawful basis, the SRA-aligned mitigations (mandatory human review, partner sign-off on advice), and the data privacy controls (DPA with vendor, training disabled, encryption at rest and in transit). Reviewed annually, with an interim review triggered if the firm extends the tool to new matter types.

Worked example two: a healthcare practice using AI for clinical documentation

The practice uses AI to draft clinical notes from clinician dictation. Patient health data is Article 9 special category data. Processing is systematic. The Article 9 element makes the high-risk threshold effectively automatic. DPIA required, and the bar is higher than for ordinary personal data.

The document covers patient consent processes, the lawful basis (typically explicit consent under Article 9), technical safeguards, and clinical-quality controls (clinician review of AI-generated notes before they enter the medical record).

Worked example three: e-commerce personalisation at scale

An e-commerce business uses AI to personalise product recommendations from customer purchase history and browsing behaviour. Personal data, systematic, large-scale profiling. Not Article 9 unless the products are health-related, but the profiling element typically meets the high-risk threshold under ICO guidance. DPIA likely required. The document covers the lawful basis (typically consent or legitimate interests), the privacy notice updates, the customer rights process (opt-out, access, rectification), and the AI vendor’s data protection controls.

Worked example four: manufacturing summarising internal process documentation

A manufacturing firm uses an AI tool to summarise internal process documentation and operating procedures. The documents do not contain personal data. The processing is systematic but the personal-data trigger is missing. DPIA not required. The firm should still maintain a tool inventory entry and consider the OWASP LLM risk categories, but the formal Article 35 obligation is not engaged.

Worked example five: brainstorming with no personal data input

A small business uses free ChatGPT to brainstorm marketing ideas. No personal data is input. The processing is occasional rather than routine. DPIA not required. The business should still have a policy and a data classification rule that keeps personal data out of free public tools, but Article 35 is not triggered for this use.

How does this fit with the rest of the SME governance stack?

The DPIA is one of three documents the SME maintains. The AI policy (2-3 pages) is the firm-wide rule set. The risk register (one page) is the active management tool. The DPIA (4-6 pages per material deployment) is the per-deployment legal requirement under Article 35. Together they cover the firm’s UK GDPR position on AI without requiring enterprise governance overhead.

If the firm you run has rolled out an AI tool that touches customer or client data, and you are uncertain whether a DPIA was required or whether one was completed, book a conversation.

Sources

  • UK GDPR Article 35 DPIA. Source.
  • ICO DPIA template. Source.
  • ICO list of high-risk processing examples. Source.
  • UK GDPR Article 9 special category data. Source.
  • UK GDPR Article 22 automated decision-making. Source.
  • National Institute of Standards and Technology (2023). AI Risk Management Framework (AI RMF 1.0). Establishes measurement rigour and uncertainty quantification as core governance practice. Source.
  • National Association of Corporate Directors (2025). AI Friend and Foe, Director's Handbook on AI Oversight. Foundational governance principles for board-level AI oversight, transparency, risk frameworks and stakeholder communication. Source.
  • Information Commissioner's Office. Guidance on AI and data protection under UK GDPR. The UK regulator's reference for data-protection obligations applied to AI systems. Source.

Frequently asked questions

When is a DPIA legally required for an AI deployment?

Under UK GDPR Article 35, a DPIA is required when processing is likely to result in a high risk to the rights and freedoms of individuals. The ICO's published guidance suggests systematic AI processing of personal data is often in this category. The practical test is three questions: does the AI process personal data? Is the processing systematic? Does it pose high risk (large-scale profiling, special category data, automated decisions, vulnerable groups)? Two yeses and a probable third trigger the requirement.

What does a DPIA actually contain?

Description of the processing (purpose, data, scope). Assessment of necessity and proportionality. Identification of risks to individuals. Measures to mitigate those risks. The ICO publishes a template; the SME version fits 4-6 pages, signed by the MD, reviewed annually or when the deployment changes materially. Filed alongside the AI policy and the risk register.

What happens if I deploy AI without doing a DPIA when one is required?

It is a material UK GDPR breach. Regulatory exposure includes ICO fines, enforcement notices, and warnings. Practical exposure surfaces if a data incident later occurs and the DPIA was never completed: the firm cannot show it assessed the risks before deployment, which is precisely what Article 35 exists to ensure. Running the DPIA before deployment is the cheaper path.

Where should an SME find a DPIA template?

The ICO publishes a free template, downloadable from their website. It is the most usable starting point for an SME because it is calibrated to UK GDPR specifically. Alternatives include the European Data Protection Board template and templates published by professional bodies (ICAEW, Law Society). A bespoke document is rarely necessary; the published template covers the required content.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
