What is bias in AI? Why it matters for your business

TL;DR

Bias in AI means the system produces systematically different outcomes for different groups of people. It is not a defect a vendor can patch out, because academic work has proved you cannot satisfy every reasonable definition of fairness at once. Any "bias-free" claim has to specify which fairness metric and which protected group, otherwise it is marketing, and your exposure under the UK Equality Act 2010 is unchanged either way.

Key takeaways

- Bias in AI is a structural property of how systems learn from data, not a single defect a vendor can patch out.
- Academic work has proved you cannot satisfy every reasonable fairness definition at once, so "fair" is always a values choice the vendor has made on your behalf.
- Under the UK Equality Act 2010, indirect discrimination through an algorithm is your liability whether the vendor said the word "fair" or not.
- Recruitment, lending, insurance, performance management, and customer-facing chatbots are the live UK surfaces where bias becomes legal exposure.
- The owner's job is to demand a Model Card or third-party audit naming the metric and the protected groups tested, not to audit the model themselves.

A small recruitment-services owner I was speaking to last month had her CV screening tool live for six weeks. The vendor’s deck said “bias-free, audited for fairness”. Then a solicitor’s letter arrived from a rejected candidate, citing the Equality Act 2010 and asking for the demographic breakdown of shortlisted candidates over the last three months.

She forwarded it to the vendor. The vendor sent back a fairness audit. The audit measured one fairness metric across two protected groups. The solicitor was asking about a third. That is the moment “fair” stopped being a setting on the dashboard and became a choice the vendor had made on her behalf without telling her which one.

What is bias in AI?

Bias in AI means the system produces systematically different outcomes for different groups of people. Some of that is lawful and useful, like insurance pricing on age and claims history. Some of it breaches the Equality Act 2010, like a CV screener that downgrades applications from women-only colleges. The line is whether the differential effect rests on a protected characteristic without legitimate justification.

The US National Institute of Standards and Technology (NIST) catalogues six places bias enters: historical (the data records past discrimination), representation (the training set under-samples a group), measurement (a feature is a proxy for a protected characteristic, like “years since graduation” for age), aggregation (one model trained across distinct populations hides per-group failures), evaluation (the test data does not match deployment), and deployment (the system is used for a job it was not designed for). It is rarely a single defect.

Why it matters for your business

Academic work has proved formally that you cannot satisfy every reasonable definition of fairness at once. Demographic parity (equal approval rates across groups), equalised odds (equal accuracy across groups), and calibration (equal predictive validity across groups) cannot all hold simultaneously except in trivial cases. That is mathematics, not engineering laziness. It means every “fair AI” claim is a values choice the vendor has made for you in silence.
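To make those three definitions concrete, here is a toy Python sketch. The groups, outcomes, and decisions are entirely invented for illustration: demographic parity compares approval rates between groups, while equalised odds compares accuracy-style rates such as the true-positive rate. On this made-up data the two metrics already tell different stories, which is the impossibility result in miniature.

```python
# Toy illustration of two fairness metrics on the same made-up decisions.
# Groups, true outcomes, and model approvals are invented for the example.
group    = ["A"] * 5 + ["B"] * 5
actual   = [1, 1, 0, 0, 1,   1, 0, 0, 0, 1]  # true "good outcome" labels
approved = [1, 1, 1, 0, 0,   1, 0, 0, 0, 0]  # the model's decisions

def rate(values):
    values = list(values)
    return sum(values) / len(values) if values else 0.0

def approval_rate(g):
    # Demographic parity asks: are these equal across groups?
    return rate(a for a, gr in zip(approved, group) if gr == g)

def true_positive_rate(g):
    # Equalised odds asks: among genuinely good candidates,
    # is the approval rate equal across groups?
    return rate(a for a, y, gr in zip(approved, actual, group)
                if gr == g and y == 1)

for g in ("A", "B"):
    print(g, approval_rate(g), true_positive_rate(g))
# Group A is approved at 0.6 and group B at 0.2 (demographic parity fails),
# while their true-positive rates differ less (0.67 vs 0.5).
```

A real audit would use a tested library rather than hand-rolled rates, but the point survives: each metric is a different question, and a vendor's "fair" answers exactly one of them.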

The Equality Act 2010 does not care whose silence it was. The duty sits on the organisation making the decision, not on the software supplier. If your AI tool produces a disparate impact on a protected group without legitimate justification, you face a tribunal claim or a regulatory query, and the burden shifts to you to show the system is “a proportionate means of achieving a legitimate aim”. The vendor’s contract may give you recourse against them later. The letter still lands at your door first.

Where you will meet it

You will meet algorithmic bias in five operational surfaces inside a typical UK SME: recruitment screening, lending and credit decisions, insurance underwriting, performance management, and customer-facing chatbots that handle complaints. Each one is an Equality Act 2010 surface. Each one has UK regulator guidance, live case law, or both. The owner’s question is which of the five bites first.

The named cases are worth knowing because they map the territory.

- R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058 established that deploying facial recognition without an adequate Equality Impact Assessment was unlawful, and that the risk of indirect discrimination should have been assessed in advance.
- Pa Edrissa Manjang v Uber Eats settled in 2024, after a Black courier brought an Equality Act 2010 claim when Uber’s facial verification repeatedly mis-matched him. He obtained the underlying selfies through a UK GDPR data access request, which is the detail every employment lawyer now flags.
- Mobley v Workday in the US was granted preliminary class certification in May 2025 on AI age discrimination in recruitment, the first major case to clear that procedural hurdle.
- Buolamwini and Gebru’s Gender Shades study (2018) showed 34.7 percent error rates for darker-skinned women in commercial face classifiers versus 0.8 percent for lighter-skinned men, the textbook representation-bias case.
- Amazon retired its internal recruiting tool in 2018 after the model rediscovered gender via proxy tokens like “women’s college”, even though gender had been removed as a feature.

What to ask the vendor

When a vendor uses the phrase “bias-free” or “audited for fairness”, treat it as the start of the conversation, not the end. Four questions are worth asking:

- Which fairness metric have you optimised for?
- On which protected groups did you measure?
- Can you show me a Model Card or third-party audit dated within the last twelve months?
- Will you re-test on our population before go-live?

Vendor maturity shows up in the answers.

The first two questions are diagnostic. A vendor who cannot name a metric (demographic parity, equalised odds, calibration) is selling marketing. A vendor who has only measured against gender and age has not measured against race, disability, or the seven other protected characteristics in the Equality Act. The third question is evidential. Model Cards (the Google-pioneered standard for transparent reporting on AI systems) and audits from firms like ORCAA, Holistic AI, or BABL AI give you something a tribunal would recognise as governance. The fourth question is operational. A vendor’s training population is rarely yours, and bias on your data is what you will be asked to defend.

One trap to flag. “Fairness through unawareness”, the practice of stripping protected characteristics out of the dataset and assuming the problem is solved, does not work and the ICO is explicit on the point. The model rediscovers gender, race, or age through proxy variables, exactly as the Amazon recruiting tool did. The honest test is to keep the protected characteristics in the evaluation data so you can measure disparate impact, while keeping them out of the features the model uses to decide.
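That honest test can be sketched in a few lines of Python. Everything here is invented for illustration: the protected characteristic lives in the evaluation records only, never in the model’s features, and the check computes per-group selection rates and their ratio. (The “four-fifths” threshold sometimes quoted alongside such ratios is a US rule of thumb, not a UK legal test.)

```python
# Minimal sketch: keep the protected characteristic in the *evaluation*
# data so disparate impact can be measured, even though the model never
# saw it as a feature. All records and numbers are invented.
decisions = [
    # (shortlisted, sex) -- sex recorded for evaluation only
    (1, "F"), (0, "F"), (0, "F"), (1, "F"), (0, "F"),
    (1, "M"), (1, "M"), (0, "M"), (1, "M"), (1, "M"),
]

def selection_rate(sex):
    outcomes = [shortlisted for shortlisted, s in decisions if s == sex]
    return sum(outcomes) / len(outcomes)

# Ratio of the lower selection rate to the higher one: the further
# below 1.0 it falls, the stronger the evidence of disparate impact.
rates = {s: selection_rate(s) for s in ("F", "M")}
impact_ratio = min(rates.values()) / max(rates.values())
print(rates, round(impact_ratio, 2))
# On this toy data: women shortlisted at 0.4, men at 0.8, ratio 0.5.
```

The design point is the separation: the model decides without the protected column, while the evaluation joins it back in so the differential effect is measurable rather than invisible.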

When to route it onwards

Route this onwards the moment a real decision about a real person is at stake. Awareness is the owner’s job, but specialist counsel and specialist auditors do the work. The ICO has published guidance on fairness, bias and discrimination in AI, plus a Discrimination and Bias Toolkit, both written for non-specialists. The Equality and Human Rights Commission has guidance on AI in public services and AI in employment. Free, short, and worth reading.

For anything that touches a hiring, lending, insurance, or performance decision, route to employment counsel before go-live, not after the solicitor’s letter arrives. For higher-stakes deployments, commission an independent bias audit from a third-party firm and keep the report. If you serve EU customers, the EU AI Act’s Article 10 and Annex III define recruitment, performance evaluation, credit scoring, and insurance risk assessment as high-risk systems with bias-mitigation obligations from 2 August 2026. None of this is legal advice, and none of it is something the owner does alone. The owner’s job is to know the question is real, ask it of the vendor, and route it to the people who can answer it before a candidate’s solicitor does it for them.

Sources

- Equality and Human Rights Commission. Protected characteristics under the Equality Act 2010. The list of nine characteristics that any UK decision system, including an AI tool, must not discriminate against. https://www.equalityhumanrights.com/equality/equality-act-2010/protected-characteristics
- Information Commissioner's Office. What about fairness, bias and discrimination? Regulator framing of how UK GDPR fairness intersects with discrimination law. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-fairness-in-ai/what-about-fairness-bias-and-discrimination/
- Information Commissioner's Office. Discrimination and Bias Toolkit. The operational checklist the ICO expects organisations using AI for decisions to be able to point at. https://ico.org.uk/for-organisations/advice-and-services/audits/data-protection-audit-framework/toolkits/artificial-intelligence/discrimination-and-bias/
- NIST (March 2022). Special Publication 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence. The canonical taxonomy of the six types of AI bias used throughout this post. https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf
- EU AI Act, Article 10 and Annex III. Defines recruitment, performance evaluation, credit scoring, and insurance risk assessment as high-risk systems with bias-mitigation obligations from 2 August 2026. https://artificialintelligenceact.eu/article/10/
- R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058. Court of Appeal ruling that police facial recognition deployment without an adequate Equality Impact Assessment was unlawful. https://www.judiciary.uk/wp-content/uploads/2020/08/R-Bridges-v-CC-South-Wales-ors-Judgment.pdf
- TechCrunch (2024). Pa Edrissa Manjang v Uber Eats UK settlement, where a Black courier won an Equality Act 2010 claim after Uber's facial verification repeatedly mis-matched him. https://techcrunch.com/2024/03/28/uber-eats-ai-bias-settlement/
- Miami Law Review. Mobley v Workday and the legal limits of AI hiring. Class action over alleged AI age discrimination in recruitment, granted preliminary class certification in May 2025. https://lawreview.law.miami.edu/help-wanted-screened-by-algorithms-mobley-v-workday-and-the-legal-limits-of-ai-hiring/
- Buolamwini and Gebru (2018). Gender Shades, intersectional accuracy disparities in commercial gender classification. The textbook representation-bias study, 34.7 percent error rate for darker-skinned women versus 0.8 percent for lighter-skinned men. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
- arXiv 2302.06347. Revisiting the Impossibility Theorem in Practice. The accessible reference for the formal result that demographic parity, equalised odds, and calibration cannot all hold at once. https://arxiv.org/pdf/2302.06347.pdf

Frequently asked questions

Is "bias-free AI" a real thing a vendor can sell me?

No. Every AI system that makes decisions about people will produce different outcomes for different groups, because the data it learns from carries the patterns of the world. The honest version of the claim is that the vendor has measured one fairness metric on one or two protected groups and got a result they consider acceptable. Ask them which metric, which groups, and to show you the audit.

Whose problem is it if my AI vendor's tool turns out to discriminate?

Yours, in practice. Under the UK Equality Act 2010 the duty sits on the organisation making the decision, not on the software supplier. The vendor's contract may give you some recourse against them, but a tribunal claim from a rejected candidate or a refused customer lands at your door first. That is the reason the procurement question matters.

Do I need to commission a bias audit before I deploy?

For higher-stakes uses (hiring, lending, insurance pricing, performance management) yes, or at minimum demand the vendor's third-party audit and have it reviewed by someone who can read it. For lower-stakes internal uses the bar is lower, but you still need a Data Protection Impact Assessment if the system affects significant decisions about individuals. The ICO has guidance and a toolkit on this.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
