What is responsible AI? The procurement question that decides 2026 contracts

TL;DR

Responsible AI is the documented set of practices that makes your AI systems fair, transparent, accountable, privacy-respecting and reliable, and that lets you demonstrate it to a regulator or customer. In 2026 it has stopped being an ethics exercise. Procurement teams now ask SME suppliers for an AI inventory, a named lead and bias-testing evidence before signing. A £1m to £10m firm can put a defensible answer in place in three to six months.

Key takeaways

- Responsible AI is the operational layer between AI ethics, AI governance and AI safety. It is what you actually do, not what you believe.
- The five international pillars are fairness, transparency, accountability, privacy and security, and reliability. Microsoft, IBM, Google, the OECD and ISO/IEC 42001:2023 all converge on the same shape.
- The 2026 baseline for UK SMEs is UK GDPR, plus the EU AI Act for any system or output touching the EU, plus sector regulators including the FCA, MHRA and ICO.
- The commercial pressure is real and immediate. Large customers are pushing AI governance questions down the supply chain because they face fines of up to 4 per cent of turnover under UK GDPR and up to 7 per cent under the EU AI Act.
- A proportionate five-step starter (inventory, one-page policy, named lead, vendor due-diligence checklist, and a transparency notice with a complaint route) gets a £1m to £10m firm to a defensible position in three to six months for £10,000 to £30,000.

A managing director I spoke with runs a 50-person UK financial services consultancy. He was bidding for a £180,000 contract with a top-30 UK accountancy firm. The procurement questionnaire arrived on a Tuesday. Question 17 read: “Describe your responsible AI governance practice; provide your AI system inventory, your bias-testing protocols, the named individual accountable for AI governance, your transparency notice for affected individuals, and evidence of vendor due diligence on any AI systems used in delivery.”

He sat at his desk and counted, on the back of an envelope, six AI systems quietly in use across his firm. A recruitment screener his HR lead trialled last year. A contract review tool the legal function had bought. ChatGPT for client correspondence. A credit-scoring API embedded in one of their advisory products. A document classification tool. A chatbot on the website. He had no policy, no named lead, no inventory and no idea whether the recruitment vendor had done bias testing. He had eight days to respond. He needed to know what regulators actually require, what counts as a credible answer, and what a realistic three-to-six-month roadmap looks like to make this answer easy next time.

The good news is that the answer to all three questions is the same answer, and it is not a six-figure governance programme.

What is responsible AI?

Responsible AI is the documented set of practices that makes your AI systems fair, transparent, accountable, privacy-respecting and reliable, and lets you demonstrate it to a regulator, a customer or an affected individual. The Information Commissioner’s Office frames it in accountability terms. You must be able to show you considered the impacts, mitigated the harms, and kept a documented decision trail.

The term gets confused with three adjacent ideas. AI ethics is the philosophical inquiry into fairness, moral responsibility and the rights of people affected by automated decisions. AI governance is the organisational structure layer: the committees, ethics reviews and architectural sign-offs that larger enterprises run. AI safety is the frontier-risk research domain concerned with the alignment of more capable systems. Responsible AI sits in the middle: practitioner-friendly, operational, and proportionate to the size of the firm doing the work.

Why does it matter for your business?

The pressure point in 2026 is procurement. Large customers face their own regulatory exposure under UK GDPR and the EU AI Act, and they have started pushing those questions down the supply chain. A firm without a coherent answer signs indemnities that absorb AI-harm liability, or loses the contract to a competitor with the answer ready. This is happening now across financial services, insurance, recruitment, healthcare and government contracting.

The regulatory baseline behind the procurement question has three layers. UK GDPR applies the moment any AI system processes personal data, with the right to meaningful information about solely automated significant decisions under Article 22, a DPIA requirement on high-risk processing, and ICO maximum fines of the higher of £20m or 4 per cent of turnover. The EU AI Act binds you if your system or its outputs touch EU residents, with deployer obligations from 2 August 2026 and fines up to 3 per cent of turnover for high-risk system breaches and up to 7 per cent for prohibited practices. Sector regulators add their own requirements. The FCA expects vendor due diligence and accountability for outsourced AI decisions. The MHRA requires validation, testing and ongoing monitoring for AI in healthcare. The ICO’s 2024 to 2027 strategic plan names AI governance as an enforcement priority.
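For a concrete sense of how the “higher of £20m or 4 per cent of turnover” cap works in practice, here is a small illustrative calculation. The turnover figures are hypothetical and this is a sketch of the cap formula only, not legal advice on how the ICO would set an actual penalty.

```python
def ukgdpr_max_fine(annual_turnover_gbp: float) -> float:
    """Maximum ICO fine under UK GDPR: the higher of £20m
    or 4 per cent of annual turnover."""
    return max(20_000_000.0, 0.04 * annual_turnover_gbp)

# For a £5m-turnover SME, 4 per cent is only £200k,
# so the £20m floor is the operative cap.
sme_cap = ukgdpr_max_fine(5_000_000)

# For a £1bn-turnover enterprise, 4 per cent (£40m)
# exceeds the floor, so the percentage applies.
enterprise_cap = ukgdpr_max_fine(1_000_000_000)

print(sme_cap, enterprise_cap)
```

The point the arithmetic makes plain: for almost every SME the £20m floor, not the percentage, is the theoretical exposure, which is exactly why large customers push the risk down the supply chain rather than carry it themselves.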

The honest position is that the biggest cost of doing nothing tends to be the cumulative loss of business to competitors with better governance practices, well before any regulator gets involved.

Where will you actually meet it?

You will meet responsible AI in four places. Vendor due diligence questionnaires asking for an inventory, bias-testing evidence and a named lead. ICO and FCA guidance translating UK GDPR principles into specific AI obligations. Vendor responsible-AI claims from Microsoft, IBM and Google. And ISO/IEC 42001:2023, the first auditable AI management systems standard, accepted as third-party evidence of practice.

Behind those four touchpoints sits the five-pillar consensus, the same shape across every major framework.

- Fairness and bias mitigation: the system should not make systematically different decisions about people based on protected characteristics, with outcome testing across demographic groups before deployment.
- Transparency and explainability: people affected should know the system exists and understand the main factors, with proportionate transparency rather than perfect interpretability.
- Accountability and governance: one named person owns the inventory, decisions are documented, and complaints have a route.
- Privacy and security: lawful basis, data minimisation, encryption, restricted access and audit logs.
- Reliability and safety: regular retesting against fresh data, monitoring for model drift, clear documentation of capabilities and limitations, and rapid rollback if problems are discovered.

The pillars tell you why responsible AI exists. They do not tell you what to do on Monday morning when the questionnaire arrives.

When to act and when certification justifies its cost

Act now on a proportionate five-step starter, and reserve full ISO/IEC 42001 certification for the situations where customers specifically ask for it. A £1m to £10m firm can put the starter in place in three to six months, mostly through staff time, at around £10,000 to £30,000 if external support is brought in. The certificate sits on top of the starter, not in place of it.

The five steps are a documented AI use inventory, a one-page AI policy, a named AI lead, a vendor due-diligence checklist, and a transparency notice with a complaint route.

- The inventory captures every system using machine learning, automated decision-making or algorithmic prediction, with purpose, data, decisions affected, vendor and deployment date.
- The policy commits to fairness, transparency and accountability, with a rule that humans review consequential decisions and a published route for affected individuals to request human review.
- The named lead has authority to approve or reject AI deployments, owns the inventory, and handles complaints.
- The vendor checklist runs eight to ten questions covering purpose, data, bias testing, transparency, security, SLAs, liability and audit rights against any new AI system.
- The transparency notice goes out wherever individuals are subject to an AI decision, with a 30-day response window aligning with UK GDPR data subject rights.
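If it helps to picture the inventory as structured data rather than a spreadsheet, here is a minimal illustrative sketch in Python. The field names, example systems and vendor names are assumptions for illustration, not a prescribed schema; any spreadsheet with the same columns does the same job.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row in the AI use inventory: the fields the
    procurement questionnaires described above ask about."""
    name: str
    purpose: str                # what the system is for
    data_used: str              # categories of data it processes
    decisions_affected: str     # who or what its outputs influence
    vendor: str                 # supplier, or "in-house"
    deployed: date              # deployment date
    bias_tested: bool = False   # is there bias-testing evidence on file?

# Hypothetical example entries, mirroring the systems in the
# managing director's back-of-envelope count.
inventory = [
    AISystemRecord("Recruitment screener", "CV shortlisting",
                   "candidate personal data", "hiring decisions",
                   "ExampleVendor Ltd", date(2025, 3, 1)),
    AISystemRecord("Website chatbot", "customer enquiries",
                   "enquiry text", "information given to visitors",
                   "ExampleBot Inc", date(2024, 11, 12)),
]

# A trivial report for the named AI lead: which systems still
# lack bias-testing evidence from their vendor?
needs_follow_up = [r.name for r in inventory if not r.bias_tested]
print(needs_follow_up)
```

The value of keeping the inventory in one structured place, whatever the tool, is that questions like “which systems lack bias-testing evidence?” become a one-line filter rather than an eight-day scramble.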

ISO/IEC 42001 certification is worth the additional cost in three specific situations. Your customers ask for it in writing. You operate in a regulated sector where the certificate is becoming the procurement default. Your turnover is above £10m and the audit fee, around £10,000 to £15,000 for a small firm with one or two AI systems, is a small share of the contracts the certificate unlocks. Outside those situations the certificate often costs more than the business value it returns.

A handful of adjacent terms keep appearing in procurement questionnaires. AI bias is the fairness pillar in practice, the systematic error in outcomes across demographic groups. An AI audit trail is the accountability pillar, the record of what the system did and on whose data. Explainable AI is the transparency pillar, the techniques that let you describe why the system reached a given decision.

The regulatory neighbours matter too. The EU AI Act sets the risk-based regulatory frame for any system or output touching the EU, with deployer obligations from 2 August 2026. UK GDPR Article 22 sets the right to meaningful information about solely automated significant decisions. Human-in-the-loop oversight is the operational practice that makes the reliability and accountability pillars real, the gate where a person reviews any AI output that significantly affects another person.

Responsible AI is the umbrella under which all of them sit. The reason to know the umbrella term is that procurement questions, regulator guidance and customer contracts now use it as the heading. If you have a credible answer to “describe your responsible AI practice”, the rest of the conversation gets a great deal shorter.

Sources

- Information Commissioner's Office (2025). Guidance on AI and data protection. The canonical UK regulatory frame for SMEs and the source for accountability and Article 22 obligations. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
- International Organization for Standardization (2023). ISO/IEC 42001:2023, Artificial intelligence management systems. The first auditable AI management systems standard, published December 2023. https://www.iso.org/standard/81230.html
- European Commission (2024). EU Artificial Intelligence Act. The risk-based regulation with deployer obligations from 2 August 2026 and fines up to 7 per cent of turnover for prohibited practices. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
- Microsoft (2024). Microsoft Responsible AI Standard. The six-principle vendor-leader reference, organised around fairness, accountability, transparency, privacy, security and safety. https://www.microsoft.com/en-us/ai/principles-and-approach
- IBM (2024). What is AI ethics? The three-pillar industry-leader frame around fairness, explanation and robustness. https://www.ibm.com/topics/ai-ethics
- OECD (2019, updated 2024). OECD AI Principles. The high-level UK and international policy anchor referenced by UK regulators. https://oecd.ai/en/ai-principles
- National Institute of Standards and Technology (2023). AI Risk Management Framework. The US-origin but internationally referenced governance baseline. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
- Buolamwini, J. and Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. The foundational study on bias in commercial AI systems. https://proceedings.mlr.press/v81/buolamwini18a.html
- Mitchell, M. et al. (2019). Model Cards for Model Reporting. The documentation pattern referenced by Microsoft, Google and ISO/IEC 42001. https://arxiv.org/abs/1810.03993
- Financial Conduct Authority (2024). AI Update. Sector regulator position on AI governance, vendor due diligence and accountability for outsourced AI decisions. https://www.fca.org.uk/publications/corporate-documents/ai-update

Frequently asked questions

We are a 30-person services firm with no formal AI programme. What is the minimum we should have in place if a procurement questionnaire arrives next month?

Four artefacts cover the majority of the questions. A one-page list of every AI system in actual use, including ChatGPT and any vendor tools with AI inside them. A one-page AI policy signed off by a director and published. A named individual accountable for AI governance, even part-time. A short vendor due-diligence checklist you have run against your top two or three AI vendors. None of these requires technical expertise. They require an afternoon of honest documentation.

Do we need ISO/IEC 42001 certification or is a written policy enough?

For most SMEs a written policy with documented practice is enough for the 2026 procurement questions. ISO/IEC 42001:2023 is the first auditable AI management systems standard and is worth pursuing if you sell into regulated industries, your customers specifically ask for it, or you turn over £10m or more. Audit fees for a small firm with one or two AI systems sit around £10,000 to £15,000. Below that scale, the certificate often costs more than the business value it returns.

We use ChatGPT and a few vendor tools. Are we even in scope for any of this?

Yes. UK GDPR applies the moment any AI system uses personal data, which includes drafting client correspondence in ChatGPT and any vendor tool that screens, scores or classifies people. The EU AI Act binds you if your AI system or its outputs touch EU residents, with high-risk-system fines up to 3 per cent of turnover. Sector regulators add their own requirements. Scope is not the question. The question is whether you can demonstrate proportionate practice when asked.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
