The owner of a small recruitment-services firm I spoke to last month had had her CV screening tool live for six weeks. The vendor’s deck said “bias-free, audited for fairness”. Then a solicitor’s letter arrived from a rejected candidate, citing the Equality Act 2010 and asking for the demographic breakdown of shortlisted candidates over the previous three months.
She forwarded it to the vendor. The vendor sent back a fairness audit. The audit measured one fairness metric across two protected groups. The solicitor was asking about a third. That is the moment “fair” stopped being a setting on the dashboard and became a choice the vendor had made on her behalf without telling her which one.
What is bias in AI?
Bias in AI means the system produces systematically different outcomes for different groups of people. Some of that is lawful and useful, like insurance pricing on age and claims history. Some of it breaches the Equality Act 2010, like a CV screener that downgrades applications from women-only colleges. The line is whether the differential effect rests on a protected characteristic without legitimate justification.
The US National Institute of Standards and Technology (NIST) catalogues six places bias enters: historical (the data records past discrimination), representation (the training set under-samples a group), measurement (a feature is a proxy for a protected characteristic, like “years since graduation” for age), aggregation (one model trained across distinct populations hides per-group failures), evaluation (the test data does not match deployment), and deployment (the system is used for a job it was not designed for). It is rarely a single defect.
Why it matters for your business
Academic work (Kleinberg, Mullainathan and Raghavan 2016; Chouldechova 2017) has proved formally that you cannot satisfy every reasonable definition of fairness at once. Demographic parity (equal selection rates across groups), equalised odds (equal true and false positive rates across groups), and calibration (a given score means the same likelihood of the outcome in every group) cannot all hold simultaneously unless the groups have identical base rates or the model is perfect. That is mathematics, not engineering laziness. It means every “fair AI” claim is a values choice the vendor has made for you in silence.
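To see the conflict in numbers, here is a minimal sketch with invented counts for two groups whose base rates differ (50 percent of group A genuinely qualify, 20 percent of group B). Every figure is made up for illustration: the model holds precision and true positive rate equal across the groups, and demographic parity and the false-positive half of equalised odds break anyway.

```python
# Invented confusion-matrix counts per group:
# tp = true positives, fp = false positives,
# fn = false negatives, tn = true negatives.
groups = {
    "A": dict(tp=40, fp=10, fn=10, tn=40),  # base rate 0.50
    "B": dict(tp=16, fp=4,  fn=4,  tn=76),  # base rate 0.20
}

for name, c in groups.items():
    n = sum(c.values())
    selected = c["tp"] + c["fp"]
    positives = c["tp"] + c["fn"]
    negatives = c["fp"] + c["tn"]
    sel_rate = selected / n          # demographic parity compares this
    tpr = c["tp"] / positives        # equalised odds compares this...
    fpr = c["fp"] / negatives        # ...and this
    precision = c["tp"] / selected   # calibration-style check
    print(f"{name}: selection {sel_rate:.2f}, TPR {tpr:.2f}, "
          f"FPR {fpr:.2f}, precision {precision:.2f}")

# A: selection 0.50, TPR 0.80, FPR 0.20, precision 0.80
# B: selection 0.20, TPR 0.80, FPR 0.05, precision 0.80
```

Rearrange the counts however you like: with unequal base rates, equalising one metric unbalances another. The only question is which imbalance you accept.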
The Equality Act 2010 does not care whose silence it was. The duty sits on the organisation making the decision, not on the software supplier. If your AI tool produces a disparate impact on a protected group without legitimate justification, you face a tribunal claim or a regulatory query, and the burden shifts to you to show the system is “a proportionate means of achieving a legitimate aim”. The vendor’s contract may give you recourse against them later. The letter still lands at your door first.
Where you will meet it
You will meet algorithmic bias in five operational surfaces inside a typical UK SME: recruitment screening, lending and credit decisions, insurance underwriting, performance management, and customer-facing chatbots that handle complaints. Each one is an Equality Act 2010 surface. Each one has UK regulator guidance, live case law, or both. The owner’s question is which of the five bites first.
The named cases are worth knowing because they map the territory. R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 established that deploying facial recognition without adequately investigating the risk of bias breached the public sector equality duty: the risk of indirect discrimination should have been assessed in advance, not after roll-out. Pa Edrissa Manjang v Uber Eats settled in 2024 after a Black courier brought an Equality Act 2010 claim over Uber’s facial verification repeatedly failing to match him; he obtained the underlying selfies through a UK GDPR subject access request, which is the detail every employment lawyer now flags. Mobley v Workday in the US was granted preliminary collective certification in May 2025 on AI age discrimination in recruitment, the first major case to clear that procedural hurdle. Buolamwini and Gebru’s Gender Shades study (2018) found error rates of 34.7 percent for darker-skinned women in commercial face classifiers against 0.8 percent for lighter-skinned men, the textbook representation-bias case. Amazon retired its internal recruiting tool in 2018 after the model rediscovered gender via proxy tokens like “women’s college”, even though gender had been removed as a feature.
What to ask the vendor
When a vendor uses the phrase “bias-free” or “audited for fairness”, treat it as the start of the conversation, not the end. The four questions worth asking are: which fairness metric have you optimised for, on which protected groups did you measure, can you show me a Model Card or third-party audit dated within the last twelve months, and will you re-test on our population before go-live. Vendor maturity shows up in the answers.
The first two questions are diagnostic. A vendor who cannot name a metric (demographic parity, equalised odds, calibration) is selling marketing. A vendor who has only measured against gender and age has not measured against race, disability, or any of the other five of the Equality Act’s nine protected characteristics. The third question is evidential. Model Cards (the Google-pioneered standard for transparent reporting on AI systems) and audits from firms like ORCAA, Holistic AI, or BABL AI give you something a tribunal would recognise as governance. The fourth question is operational. A vendor’s training population is rarely yours, and bias on your data is what you will be asked to defend.
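The fourth question is also one you can rough out yourself. A minimal sketch, assuming your decision log exports as a CSV with one row per applicant; the filename and column names are hypothetical, and the 0.8 cut-off is the US EEOC four-fifths heuristic, useful as a screen but not an Equality Act threshold:

```python
# Selection rates per group from a hypothetical decision log, flagging
# any group whose rate falls below 80 percent of the best-treated group.
import csv
from collections import defaultdict

selected = defaultdict(int)
total = defaultdict(int)
with open("decisions.csv", newline="") as f:        # hypothetical export
    for row in csv.DictReader(f):
        total[row["group"]] += 1                    # protected group label
        selected[row["group"]] += int(row["shortlisted"])  # "1" or "0"

rates = {g: selected[g] / total[g] for g in total}
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best if best else 0.0
    flag = "  <-- investigate" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2f}, ratio vs best {ratio:.2f}{flag}")
```

A failing ratio is not a finding of discrimination; it is the trigger for the conversation with counsel and the vendor.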
One trap to flag. “Fairness through unawareness”, the practice of stripping protected characteristics out of the dataset and assuming the problem is solved, does not work and the ICO is explicit on the point. The model rediscovers gender, race, or age through proxy variables, exactly as the Amazon recruiting tool did. The honest test is to keep the protected characteristics in the evaluation data so you can measure disparate impact, while keeping them out of the features the model uses to decide.
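You can test for that leakage directly. A sketch, assuming scikit-learn is available, a DataFrame of the features the model actually uses (protected characteristics excluded, everything numeric or already encoded), and a separate column of one protected characteristic kept back for evaluation; all names here are hypothetical:

```python
# If a simple classifier can predict the protected characteristic from the
# model's own features, the features contain proxies and "fairness through
# unawareness" has already failed.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_auc(features: pd.DataFrame, protected: pd.Series, positive) -> float:
    """Cross-validated AUC for predicting the protected characteristic from
    the features. 0.5 is chance; the nearer 1.0, the stronger the proxies."""
    y = (protected == positive).astype(int)   # encode 0/1 for the AUC scorer
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, features, y, cv=5, scoring="roc_auc").mean()

# Hypothetical usage:
# auc = proxy_auc(features, protected, positive="female")
# print(f"proxy AUC: {auc:.2f}")   # well above 0.5 means proxies to chase down
```

An AUC near 0.5 says the features carry little signal about the characteristic; anything materially higher says the model could rediscover it, stripped or not.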
When to route it onwards
Route this onwards the moment a real decision about a real person is at stake. Awareness is the owner’s job, but specialist counsel and specialist auditors do the work. The ICO has published guidance on fairness, bias and discrimination in AI, plus an AI and data protection risk toolkit, both written for non-specialists. The Equality and Human Rights Commission has guidance on AI in public services and AI in employment. Free, short, and worth reading.
For anything that touches a hiring, lending, insurance, or performance decision, route to employment counsel before go-live, not after the solicitor’s letter arrives. For higher-stakes deployments, commission an independent bias audit from a third-party firm and keep the report. If you serve EU customers, the EU AI Act lists recruitment, performance evaluation, credit scoring, and insurance risk assessment as high-risk systems under Annex III, and Article 10 attaches data-governance and bias-mitigation obligations to them from 2 August 2026. None of this is legal advice, and none of it is something the owner does alone. The owner’s job is to know the question is real, ask it of the vendor, and route it to the people who can answer it before a candidate’s solicitor does it for them.