A managing director I spoke with runs a 50-person UK financial services consultancy. He was bidding for a £180,000 contract with a top-30 UK accountancy firm. The procurement questionnaire arrived on a Tuesday. Question 17 read: "Describe your responsible AI governance practice; provide your AI system inventory, your bias-testing protocols, the named individual accountable for AI governance, your transparency notice for affected individuals, and evidence of vendor due diligence on any AI systems used in delivery."
He sat at his desk and counted, on the back of an envelope, six AI systems quietly in use across his firm. A recruitment screener his HR lead trialled last year. A contract review tool the legal function had bought. ChatGPT for client correspondence. A credit-scoring API embedded in one of their advisory products. A document classification tool. A chatbot on the website. He had no policy, no named lead, no inventory and no idea whether the recruitment vendor had done bias testing. He had eight days to respond. He needed to know what regulators actually require, what counts as a credible answer, and what a realistic three-to-six-month roadmap looks like to make this answer easy next time.
The good news is that the answer to all three questions is the same answer, and it is not a six-figure governance programme.
What is responsible AI?
Responsible AI is the documented set of practices that makes your AI systems fair, transparent, accountable, privacy-respecting and reliable, and lets you demonstrate it to a regulator, a customer or an affected individual. The Information Commissioner’s Office frames it in accountability terms. You must be able to show you considered the impacts, mitigated the harms, and kept a documented decision trail.
The term gets confused with three adjacent ideas. AI ethics is the philosophical inquiry into fairness, moral responsibility and the rights of people affected by automated decisions. AI governance is the organisational structure layer: the committees, ethics reviews and architectural sign-offs that larger enterprises run. AI safety is the frontier-risk research domain concerned with the alignment of more capable systems. Responsible AI sits in the middle: practitioner-friendly, operational, and proportionate to the size of the firm doing the work.
Why does it matter for your business?
The pressure point in 2026 is procurement. Large customers face their own regulatory exposure under UK GDPR and the EU AI Act, and they have started pushing those questions down the supply chain. A firm without a coherent answer signs indemnities that absorb AI-harm liability, or loses the contract to a competitor with the answer ready. This is happening now across financial services, insurance, recruitment, healthcare and government contracting.
The regulatory baseline behind the procurement question has three layers. UK GDPR applies the moment any AI system processes personal data, with the right to meaningful information about solely automated significant decisions under Article 22, a DPIA requirement for high-risk processing, and ICO maximum fines of the higher of £17.5m or 4 per cent of global turnover. The EU AI Act binds you if your system or its outputs touch EU residents, with deployer obligations from 2 August 2026 and fines up to 3 per cent of turnover for high-risk system breaches and up to 7 per cent for prohibited practices. Sector regulators add their own requirements. The FCA expects vendor due diligence and accountability for outsourced AI decisions. The MHRA requires validation, testing and ongoing monitoring for AI in healthcare. The ICO’s 2024 to 2027 strategic plan names AI governance as an enforcement priority.
The honest position is that the biggest cost of doing nothing tends to be the cumulative loss of business to competitors with better governance practices, well before any regulator gets involved.
Where will you actually meet it?
You will meet responsible AI in four places. Vendor due diligence questionnaires asking for an inventory, bias-testing evidence and a named lead. ICO and FCA guidance translating UK GDPR principles into specific AI obligations. Vendor responsible-AI claims from Microsoft, IBM and Google. And ISO/IEC 42001:2023, the first auditable AI management systems standard, accepted as third-party evidence of practice.
Behind those four touchpoints sits the five-pillar consensus, the same shape across every major framework. Fairness and bias mitigation: the system should not make systematically different decisions about people based on protected characteristics, and outcomes are tested across demographic groups before deployment. Transparency and explainability: people affected should know the system exists and understand the main factors behind a decision, with proportionate transparency rather than perfect interpretability. Accountability and governance: one named person owns the inventory, decisions are documented, and complaints have a route. Privacy and security: lawful basis, data minimisation, encryption, restricted access and audit logs. Reliability and safety: regular retesting against fresh data, monitoring for model drift, clear documentation of capability and limitation, and rapid rollback if problems are discovered.
The pillars tell you why responsible AI exists. They do not tell you what to do on Monday morning when the questionnaire arrives.
When to act and when certification justifies its cost
Act now on a proportionate five-step starter, and reserve full ISO/IEC 42001 certification for the situations where customers specifically ask for it. A £1m to £10m firm can put the starter in place in three to six months, mostly through staff time, at around £10,000 to £30,000 if external support is brought in. The certificate sits on top of the starter, not in place of it.
The five steps are a documented AI use inventory, a one-page AI policy, a named AI lead, a vendor due-diligence checklist, and a transparency notice with a complaint route. The inventory captures every system using machine learning, automated decision-making or algorithmic prediction, recording its purpose, the data it uses, the decisions it affects, the vendor and the deployment date. The policy commits to fairness, transparency and accountability, with a rule that humans review consequential decisions and a published route for affected individuals to request human review. The named lead has authority to approve or reject AI deployments, owns the inventory and handles complaints. The vendor checklist runs eight to ten questions, covering purpose, data, bias testing, transparency, security, SLAs, liability and audit rights, against any new AI system. The transparency notice goes out wherever individuals are subject to an AI decision, with a 30-day response window that aligns with UK GDPR data subject rights.
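The inventory step is simple enough to keep in a spreadsheet, but its shape is worth pinning down. As a minimal sketch, the record below mirrors the fields named above (purpose, data, decisions affected, vendor, deployment date) plus a flag for whether bias-testing evidence is on file; all field names, vendor names and dates here are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row of an AI use inventory (illustrative field names)."""
    name: str
    purpose: str
    data_used: str            # categories of data the system processes
    decisions_affected: str   # who or what the output influences
    vendor: str
    deployed: date
    bias_tested: bool = False  # vendor bias-testing evidence on file?

def missing_evidence(inventory):
    """Names of systems with no bias-testing evidence on file."""
    return [r.name for r in inventory if not r.bias_tested]

# Hypothetical entries, echoing the kinds of systems the MD counted.
inventory = [
    AISystemRecord("Recruitment screener", "CV shortlisting",
                   "candidate CVs", "hiring decisions",
                   "ExampleVendor Ltd", date(2025, 3, 1)),
    AISystemRecord("Website chatbot", "customer enquiries",
                   "chat transcripts", "none (informational)",
                   "ExampleBot Inc", date(2024, 11, 12),
                   bias_tested=True),
]

print(missing_evidence(inventory))  # → ['Recruitment screener']
```

A structure like this makes the gaps visible at a glance, which is exactly what Question 17 is probing for: the query over `bias_tested` is the list of vendors still owing due-diligence answers.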
ISO/IEC 42001 certification is worth the additional cost in three specific situations. Your customers ask for it in writing. You operate in a regulated sector where the certificate is becoming the procurement default. Your turnover is above £10m and the audit fee, around £10,000 to £15,000 for a small firm with one or two AI systems, is a small share of the contracts the certificate unlocks. Outside those situations the certificate often costs more than the business value it returns.
Related concepts
A handful of adjacent terms keep appearing in procurement questionnaires. AI bias is the fairness pillar in practice, the systematic error in outcomes across demographic groups. An AI audit trail is the accountability pillar, the record of what the system did and on whose data. Explainable AI is the transparency pillar, the techniques that let you describe why the system reached a given decision.
The regulatory neighbours matter too. The EU AI Act sets the risk-based regulatory frame for any system or output touching the EU, with deployer obligations from 2 August 2026. UK GDPR Article 22 sets the right to meaningful information about solely automated significant decisions. Human-in-the-loop oversight is the operational practice that makes the reliability and accountability pillars real, the gate where a person reviews any AI output that significantly affects another person.
Responsible AI is the umbrella under which all of them sit. The reason to know the umbrella term is that procurement questions, regulator guidance and customer contracts now use it as the heading. If you have a credible answer to “describe your responsible AI practice”, the rest of the conversation gets a great deal shorter.