The MD of an 18-person recruitment agency is reading her operations lead’s proposal to roll out an AI CV-screening tool that ranks applicants and shortlists the top 20 percent. The proposal mentions “improved consistency” and “reduced unconscious bias”. The MD knows enough about UK GDPR to recall there is a rule about automated decisions affecting people. She is uncertain whether shortlisting counts as a decision. She asks her commercial lawyer. The lawyer’s answer: yes, shortlisting that determines who gets to interview is a decision with significant effects. Article 22 applies. The firm needs human review built in, and the privacy notice needs updating.
This is the moment where Article 22 stops being abstract law and becomes the structural shape of the deployment. The rule allows AI in decisions about people, with one specific limit: AI cannot be the sole decision-maker for decisions with legal or similarly significant effects. AI assisting a human decision and AI making the decision look similar from a distance and play out very differently in workflow design.
What does Article 22 actually require?
UK GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing where those decisions produce legal or similarly significant effects. The narrow exceptions (necessary for a contract, authorised by law, explicit consent) all carry additional human safeguards: the right to obtain human intervention, to express their point of view, and to contest the decision.
The article is a substantive limit on what AI can do, with conditions and procedures attached when the firm relies on an exception. Most SME deployments do not meet the exception conditions, so the practical route is to stay outside the prohibition altogether: build meaningful human review into the decision workflow so the decision is never solely automated.
What does ‘solely’ mean in practice?
If a person makes the final decision after seeing the AI’s recommendation, the decision is not solely automated. If the AI ranks, scores, or sorts and a human merely rubber-stamps the output, the decision is effectively solely automated and Article 22 still applies. The ICO position is that meaningful human review requires three elements: authority to disagree with the AI, relevant context to assess the decision, and adequate time to do so.
The rubber-stamp test is the part most SME deployments fail. A hiring manager who clicks “approve shortlist” without reading the candidate files is rubber-stamping. A loan officer who signs off on every AI score above a threshold is rubber-stamping. The fix is workflow design that gives the human reviewer the time, context, and authority to actually decide. The requirement is a deciding human; a human in the loop without decision authority does not satisfy the rule.
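One way to make the deciding human structural rather than aspirational is to encode it in the workflow software itself. The sketch below is illustrative only, in Python; the names (AIRecommendation, HumanDecision, record_decision) are hypothetical, not any particular tool’s API. The point it demonstrates: the system cannot record a final outcome without a named human reviewer and their documented reasoning.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AIRecommendation:
    """Advisory output only: never a final outcome."""
    candidate_id: str
    score: float          # hypothetical 0.0-1.0 ranking score
    reasons: list[str]    # plain-language factors the reviewer can assess

@dataclass(frozen=True)
class HumanDecision:
    """The final decision. Cannot exist without a named reviewer and reasoning."""
    candidate_id: str
    reviewer: str         # named individual with authority to disagree with the AI
    shortlisted: bool
    reasoning: str        # the documented basis for the decision
    decided_at: datetime

def record_decision(rec: AIRecommendation, reviewer: str,
                    shortlisted: bool, reasoning: str) -> HumanDecision:
    # Refuse to log anything that looks like a rubber stamp:
    # no named reviewer, or no substantive reasoning.
    if not reviewer.strip():
        raise ValueError("a named human reviewer is required")
    if len(reasoning.strip()) < 20:
        raise ValueError("documented reasoning is required, not a click-through")
    return HumanDecision(
        candidate_id=rec.candidate_id,
        reviewer=reviewer,
        shortlisted=shortlisted,
        reasoning=reasoning,
        decided_at=datetime.now(timezone.utc),
    )
```

The shape is the point: the reviewer field evidences authority, the reasons list gives the reviewer context, and the decision timestamps can later be checked against caseload to show there was adequate time. None of this substitutes for the reviewer actually reading the file, but it makes rubber-stamping visible in the audit trail.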
What counts as ‘legal or significant effects’?
The category is broader than most owners assume. Hiring decisions, including shortlisting that determines who gets to interview. Credit decisions. Eligibility for services or products. Customer suitability for regulated products and services such as financial advice, insurance, or healthcare. Content moderation decisions affecting users (account suspension, content removal). Healthcare access decisions, including triage that affects appointment timing.
The test is whether the decision has legal effects (a contract is or is not formed, a service is or is not granted) or similarly significant effects (an opportunity is materially lost, a service is materially restricted). If the answer is yes, Article 22 applies and human review is required.
Worked example: hiring and CV screening
The recruitment agency above wants to deploy an AI CV-screening tool that ranks applicants. The shortlist that determines who gets to interview is a decision with significant effects under Article 22. The fix is structural. The AI surfaces ranked candidates with reasons for each ranking. The human reviewer reads the candidate files, weighs the AI’s reasoning, and makes the shortlist decision, documenting the basis for it.
The privacy notice on the firm’s careers page must disclose that AI is involved in the screening process, in plain language, with information on the logic and the significance for the candidate. Candidates can request human review of any AI-related decision. The deployment now meets Article 22 and the Articles 13/14 transparency requirements.
Worked example: lending and credit decisions
A small specialist lender deploys AI to score credit applications. Using AI as the sole basis for credit decisions breaches Article 22 and, for a regulated firm, engages the FCA’s Consumer Duty. The fix is to use AI for portfolio analysis and triage, then route every application to a human credit officer who reads the file, considers the AI’s risk indicators, and decides with documented reasoning. Borderline cases get more time and a second-level review.
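To make that routing concrete, here is a minimal sketch, assuming a hypothetical 0.0 to 1.0 risk score; the thresholds, the route_application function, and the queue names are invented for illustration, not drawn from any lending platform.

```python
from enum import Enum

class Queue(Enum):
    STANDARD_REVIEW = "standard_review"          # human credit officer
    SECOND_LEVEL_REVIEW = "second_level_review"  # officer plus a senior reviewer

# Illustrative thresholds on a hypothetical 0.0-1.0 risk score.
BORDERLINE_LOW, BORDERLINE_HIGH = 0.4, 0.6

def route_application(risk_score: float) -> Queue:
    """Route every application to a human.

    The AI score never approves or declines anything; it only
    determines how much human attention the file receives.
    """
    if BORDERLINE_LOW <= risk_score <= BORDERLINE_HIGH:
        return Queue.SECOND_LEVEL_REVIEW
    return Queue.STANDARD_REVIEW

# A mid-range score gets a second pair of eyes; a clear score still gets a human.
assert route_application(0.52) is Queue.SECOND_LEVEL_REVIEW
assert route_application(0.91) is Queue.STANDARD_REVIEW
```

Notice there is no auto-approve or auto-decline branch anywhere in the routing. The score only decides how much human attention the file gets, which is what keeps the decision from being solely automated.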
The Consumer Duty layer matters. The firm must show it acted to deliver good customer outcomes, which means AI use must not produce discriminatory or harmful outcomes. Pre-deployment bias testing and ongoing monitoring become part of the firm’s compliance stack.
Worked example: healthcare access
A GP practice deploys AI triage for inbound appointment requests. The AI prioritises urgent cases, and the output affects who gets seen first, which is a decision with significant effects. The structural fix: the AI flags urgency, and a clinician reviews the flag, confirms the priority, and books the appointment. Patients are informed that AI is involved in triage.
Without the structural fix, the practice has handed clinical prioritisation to an algorithm, which breaches GMC professional standards and Article 22 simultaneously. With the fix, the AI is a productive tool that flags cases the clinician then assesses.
Worked example: customer suitability for FCA-regulated services
A small financial adviser firm deploys AI to assess customer suitability for regulated investment products. Using AI as the sole basis for suitability assessments breaches Article 22 and falls short of FCA SYSC requirements. The fix mirrors the lending example: the AI generates suitability indicators, and the regulated adviser reviews the case, takes responsibility for the recommendation, and documents the basis for it.
The adviser’s documented reasoning is what demonstrates compliance during an FCA inspection. AI output without adviser reasoning is precisely the gap an inspection surfaces.
What does the privacy notice need to say?
Articles 13 and 14 require disclosure of automated decision-making in privacy notices. The notice must explain that AI is involved, in plain language, including the logic and the significance for the individual. The disclosure must be in place before the data is collected.
This is one of the most-missed compliance steps in SME AI deployments. Many firms add AI to their hiring or customer workflows and never update the privacy notice, which breaches Articles 13/14 even if the firm has implemented meaningful human review elsewhere. The privacy notice update is short (a paragraph or two) and needs to be in place before deployment.
If the firm you run is about to deploy AI in any decision-affecting workflow involving people, and you would like to talk through whether the deployment meets Article 22 and what the privacy notice should say, book a conversation.



