The veto check: when not to act on an AI recommendation, no matter how confident it sounds

TL;DR

Some AI recommendations belong in a category where the owner should decline to act on them regardless of how sound the analysis looks. Four situations trigger an automatic veto check: high-stakes irreversible decisions, decisions affecting individuals, decisions in regulated or fiduciary contexts, and decisions outside delegated AI authority. The check is a thirty-second gate that runs before analytical review, not after.

Key takeaways

- The veto check is a separate move from challenging the analysis; it runs first and asks whether this category of decision should be made with AI input at all.
- Four situations trigger an automatic veto regardless of recommendation quality: irreversible decisions, decisions affecting individuals, regulated or fiduciary work, and decisions outside the owner's documented AI authority.
- The check works as a thirty-second gate applied before the recommendation's reasoning is read, because reading well-written analysis first biases the reviewer toward accepting it.
- A veto does not always mean rejection; it usually means reframing the underlying question in human terms, with the AI analysis demoted to a data point rather than a verdict.
- The discipline only scales when it is embedded in documented process rather than concentrated in the owner's intuition, so team members can apply it when the owner is absent.

An owner sits with a recommendation her AI tool produced ten minutes ago, advising her to let a long-serving employee go. The numbers stack up. The reasoning is measured. The output cites cost-to-output ratios, productivity trends, and a fair-process script for the conversation. She reads it twice. The recommendation does not feel wrong. It feels too easy to act on, and she cannot work out whether that feeling is wisdom or hesitation.

This post is for owners in that moment. The discipline that helps is to recognise that some decisions belong in a category where analysis quality is the second question, not the first. A veto check runs before the analysis is read. It takes thirty seconds, rests on principle rather than intuition, and protects an owner from credible-sounding recommendations that should never be actioned, however well they read.

Why is a veto check a separate move from challenging the analysis?

The veto check asks a different question from the analytical review. The analytical review asks whether the recommendation is sound. The veto asks whether this category of decision should be made on the back of an AI recommendation at all. A recommendation can be analytically clean and still sit in a category where the owner should decline to act regardless of how the analysis reads.

The reason for the sequence is accountability. In employment law, fiduciary practice, regulated decisions, and bounded delegations of AI authority, the law and the rules of the profession impose a non-delegable duty on the human decision-maker. That duty does not move to a vendor, an algorithm, or a confidence score. The owner remains accountable regardless of what the AI produced. The veto check is the discipline that recognises which decisions sit in that accountability class before any analysis is consulted. A perfectly defensible recommendation in the wrong category is a liability, not a help.

Which four situations should always trigger the veto check?

Four situations trigger an automatic veto regardless of how confident the recommendation sounds. The first is high-stakes irreversible decisions, the one-way doors that cannot be undone cheaply. The second is decisions affecting individuals where employment, livelihood, benefits, or access to opportunity are in play. The third is regulated or fiduciary work where a non-delegable duty of care applies. The fourth is decisions outside the owner’s documented AI authority.

The first three categories are anchored in law and professional standards. The EU AI Act classifies employment AI systems as high-risk and requires documented human oversight. The Information Commissioner’s Office warns that human review must be meaningful rather than rubber-stamping; otherwise the decision is treated as automated, regardless of how many people signed off. A 2025 University of Washington study found human reviewers followed biased AI hiring recommendations roughly 90 per cent of the time even when they registered the bias. A Hangzhou intermediate court in 2026 ruled that a Chinese company could not dismiss a worker on AI-led grounds without independent just-cause analysis. The fourth category is governance, not law. The owner sets the boundary of where AI may recommend and where humans must reason independently, then enforces it as a rule rather than reopening the question each time.

How does the thirty-second check actually run?

The check is four questions, asked in order, applied before the recommendation’s reasoning is read. Is this decision difficult or impossible to reverse? Does it affect an individual’s employment, livelihood, benefits, or access to opportunity? Does it sit in a regulated or fiduciary context? Does it fall within the AI authority the owner has documented? Any answer pointing to veto territory routes the recommendation to reframe.

The order matters because of how readers process AI output. Reading a well-written analysis first creates a halo effect: the reviewer mentally accepts the recommendation while telling themselves they are being critical. A category-level check avoids that trap by running before the rhetoric is encountered. In practice, a hiring manager faced with an AI recommendation to reject a candidate runs the four questions in under a minute. If the recommendation is shortlist ranking within delegated authority, analytical review proceeds. If the recommendation is a final hiring call affecting a protected characteristic, the veto fires and the recommendation is reframed before its reasoning is given weight.
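For owners who want the gate wired into a workflow rather than held as a habit, here is a minimal sketch of how it might look in code. Everything in it is an assumption for illustration: the DecisionContext fields, the route labels, and the idea of encoding the four questions as booleans are invented for this example, not taken from any particular tool.

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    # Hypothetical fields: each maps to one of the four questions,
    # answered from the decision itself, not from the AI's reasoning.
    hard_to_reverse: bool          # Q1: is this a one-way door?
    affects_individual: bool       # Q2: employment, livelihood, benefits, opportunity?
    regulated_or_fiduciary: bool   # Q3: non-delegable duty of care?
    within_ai_authority: bool      # Q4: inside the documented delegation?

def veto_check(ctx: DecisionContext) -> str:
    """Run the four questions in order, before the recommendation is read.

    Any answer in veto territory routes the item to reframe;
    only a clean pass proceeds to analytical review.
    """
    if ctx.hard_to_reverse or ctx.affects_individual or ctx.regulated_or_fiduciary:
        return "reframe"
    if not ctx.within_ai_authority:
        return "reframe"
    return "analytical_review"

# The hiring example from the text: shortlist ranking within delegated
# authority passes through; a final call affecting an individual does not.
shortlist = DecisionContext(False, False, False, True)
final_call = DecisionContext(False, True, True, True)
assert veto_check(shortlist) == "analytical_review"
assert veto_check(final_call) == "reframe"
```

The point of the sketch is the sequencing: the function never sees the recommendation's reasoning, only the category of the decision, which is exactly what keeps the halo effect out of the gate.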

What does it actually look like when the veto fires?

A fired veto does not mean the recommendation is rejected; it means it is reframed. The owner removes the AI output from the position of recommending and restates the underlying decision question in human terms. Take the dismissal example. The owner asks the question independently: does this employee’s performance, conduct, or cost-to-value justify dismissal under our policy and legal obligations? She reaches a position on her own.

Reframing usually surfaces factors the algorithm did not weigh. The Hangzhou court found that while productivity-to-cost ratios were poor, the company had not demonstrated that continued employment was impossible, nor offered fair process. A reframe surfaces those gaps because it forces the question into the legal and ethical frame the law actually uses. Sometimes the reframe demotes the AI output to a data point inside a wider option set. A recommendation against promoting a candidate based on pattern matching against past promotions becomes one input among three or four, with the owner applying independent judgment to which path is fair and fits the strategy. The AI analysis still informs; it just no longer recommends. Reframing restores the human as the decision-maker, with analysis in a supporting role.
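The demotion is easiest to see as a change in data shape. In this hypothetical sketch (the field names and findings are invented for illustration), the AI output moves from the recommendation slot to one unprivileged entry in the owner's input set:

```python
# Before the reframe: the AI output occupies the recommendation slot.
before = {"recommendation": "dismiss", "source": "ai_tool"}

# After the reframe: the owner restates the question in human terms,
# and the AI analysis becomes one input among several, with no verdict attached.
after = {
    "question": ("Does this employee's performance, conduct, or cost-to-value "
                 "justify dismissal under our policy and legal obligations?"),
    "inputs": [
        {"source": "ai_tool", "finding": "poor cost-to-output ratio"},
        {"source": "line_manager", "finding": "recent performance context"},
        {"source": "hr_policy", "finding": "fair-process requirements"},
    ],
    "decided_by": "owner",
}
```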

How does the veto check survive the owner not being in every conversation?

The check works at owner level only while the owner personally reviews every AI-influenced decision, and that stage does not last long. To scale, the discipline has to move from personal habit into documented process. Three elements make it stick: a published delegation policy listing what is in scope for AI recommendations and what is not, a four-question checklist embedded in the review workflow, and a trained team that knows what reframing looks like in practice.
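As an illustration of the first element, a delegation policy can live as a small machine-readable file the review workflow consults. The category names and scope lists below are invented assumptions, not a recommended taxonomy:

```python
# A hypothetical published delegation policy: which decision categories the
# AI may recommend on, and which must be reasoned through by a human.
DELEGATION_POLICY = {
    "in_scope": {
        "shortlist_ranking",
        "supplier_quote_comparison",
        "draft_content_review",
    },
    "veto_categories": {
        "dismissal",           # affects an individual's livelihood
        "final_hiring_call",   # affects access to opportunity
        "regulated_advice",    # non-delegable fiduciary duty
    },
}

def route(decision_category: str) -> str:
    """Route per the published policy, not per the reviewer's intuition."""
    if decision_category in DELEGATION_POLICY["veto_categories"]:
        return "reframe"
    if decision_category in DELEGATION_POLICY["in_scope"]:
        return "analytical_review"
    # Anything unlisted falls outside documented AI authority: veto by default.
    return "reframe"
```

The default branch carries the fourth veto category: anything the policy has not named is, by definition, outside delegated AI authority, so the safe route is reframe rather than execution.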

The shift from intuition to process is what protects the firm when the owner is on holiday, on a flight, or absorbed in a different decision. Deloitte’s 2026 research on AI decision-making found that organisations performing best on AI governance establish clear decision rights and revisit them regularly as the firm’s use of AI changes. Singapore’s 2026 Model AI Governance Framework for Agentic AI treats least-privilege access and human approval checkpoints on sensitive actions as baseline governance. Veto authority is documented, applied at category level, and exercised at workflow speed by whoever is in the seat. That is what makes the discipline survive the owner not being in the room.

A veto check is the discipline that makes responsible AI adoption possible at SME scale. Many AI recommendations can be evaluated on their analytical merit and acted on if they hold up. The few that should not be acted on regardless of merit are the ones where careers, livelihoods, regulated duties, and reversibility are in play, and where the cost of a wrong call is asymmetric. Naming those categories in advance, and running the check at category level before the analysis is read, separates owners who use AI deliberately from owners who find themselves explaining to a regulator why a confident-sounding output got the better of them.

If you would like to think through where the veto categories should sit in your firm, book a conversation.

Sources

- Information Commissioner's Office (2024). Guidance on AI and data protection, on the meaningful-review requirement for AI-assisted decisions affecting individuals. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/how-do-we-ensure-individual-rights-in-our-ai-systems/
- European Commission (2024). EU AI Act Annex III, classifying employment AI systems as high-risk and requiring documented human oversight. https://artificialintelligenceact.eu/annex/3/
- European Commission (2024). EU AI Act Article 15, on accuracy and human-oversight obligations for high-risk AI systems. https://artificialintelligenceact.eu/article/15/
- University of Washington News (2025). People mirror AI systems' hiring biases even when they detect them, on why human review alone is not a sufficient safeguard. https://www.washington.edu/news/2025/11/10/people-mirror-ai-systems-hiring-biases-study-finds/
- HR Dive (2025). Human recruiters are willing to accept AI biases, summarising follow-up work on bias mirroring in hiring panels. https://www.hrdive.com/news/human-recruiters-perfectly-willing-accept-ai-biases/805585/
- El País (2026). A Chinese court sets limits on the dismissal of a worker replaced by AI, the Hangzhou intermediate court ruling on AI-driven termination. https://english.elpais.com/economy-and-business/2026-05-07/a-chinese-court-sets-limits-on-the-dismissal-of-a-worker-replaced-by-ai.html
- Manatt Phelps and Phillips (2024). Reconciling AI opacity and advisers' fiduciary duties, on non-delegable duties of care in regulated advice. https://www.manatt.com/how-to-reconcile-ai-opacity-and-advisers-fiduciary-duties
- MIT Sloan Management Review (2024). What humans lose when we let AI decide, on the distinction between algorithmic reckoning and human judgment in consequential decisions. https://sloanreview.mit.edu/article/what-humans-lose-when-we-let-ai-decide/
- Deloitte Insights (2026). Decision-making with AI, on organisations that establish clear decision rights and revisit them as their AI use matures. https://www.deloitte.com/us/en/insights/topics/talent/human-capital-trends/2026/decision-making-with-ai.html
- Robins Kaplan (2025). Navigating AI and fiduciary duties, ten key questions for boards and fiduciaries about non-delegable AI-related responsibilities. https://www.robinskaplan.com/newsroom/insights/navigating-ai-and-fiduciary-duties-10-key-questions-for-your-organization

Frequently asked questions

How is this different from just disagreeing with an AI recommendation?

Disagreement is analytical: it argues the recommendation is wrong on its merits. The veto check is structural: it says some decision categories should not be made off an AI recommendation regardless of merit. The recommendation could be perfectly reasoned and still belong in a category where human judgment must be applied independently. The two moves work in sequence. Run the veto check first to decide whether the analysis is even the right thing to review.

Doesn't a strict veto on people decisions just slow the team down?

In practice it adds seconds, not minutes, because the four questions are pre-named and applied at category level. The slower outcome is when an owner skips the check, acts on a credible-sounding recommendation, and finds themselves in an employment tribunal or a regulator's inbox. The University of Washington research on hiring bias shows reviewers follow biased AI recommendations roughly 90 per cent of the time even when they register the bias, so the check is the safeguard, not the friction.

How do I embed the veto check across a team without it depending on me being present?

Three things make it stick. Publish a written delegation policy that names which decisions are in scope for AI recommendations and which are not. Add a four-question checklist to the workflow where recommendations are reviewed, so any flag routes the item to a designated reviewer rather than to automatic execution. Train the team on what reframing looks like in practice, so when a flag fires they restate the underlying question in human terms instead of arguing with the recommendation.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
