An owner sits with a recommendation her AI tool produced ten minutes ago, advising her to let a long-serving employee go. The numbers stack up. The reasoning is measured. The output cites cost-to-output ratios, productivity trends, and a fair-process script for the conversation. She reads it twice. The recommendation does not feel wrong. It feels too easy to act on, and she cannot work out whether that feeling is wisdom or hesitation.
This post is for owners in that moment. The discipline that helps is to recognise that some decisions belong in a category where the quality of the analysis is the second question, not the first. A veto check is what runs before the analysis is read. It takes thirty seconds, rests on principle rather than intuition, and protects an owner from credible-sounding recommendations that should never be actioned, however well they read.
Why is a veto check a separate move from challenging the analysis?
The veto check asks a different question from the analytical review. The analytical review asks whether the recommendation is sound. The veto asks whether this category of decision should be made on the back of an AI recommendation at all. A recommendation can be analytically clean and still sit in a category where the owner should decline to act regardless of how the analysis reads.
The reason for the sequence is accountability. In employment law, fiduciary practice, regulated decisions, and bounded delegations of AI authority, the law and the rules of the profession impose a non-delegable duty on the human decision-maker. That duty does not move to a vendor, an algorithm, or a confidence score. The owner remains accountable regardless of what the AI produced. The veto check is the discipline that recognises which decisions sit in that accountability class before any analysis is consulted. A perfectly defensible recommendation in the wrong category is a liability, not a help.
Which four situations should always trigger the veto check?
Four situations trigger an automatic veto regardless of how confident the recommendation sounds. The first is high-stakes irreversible decisions, the one-way doors that cannot be undone cheaply. The second is decisions affecting individuals where employment, livelihood, benefits, or access to opportunity are in play. The third is regulated or fiduciary work where a non-delegable duty of care applies. The fourth is decisions outside the owner’s documented AI authority.
The first three categories are anchored in law and professional standards. The EU AI Act classifies employment AI systems as high-risk and requires documented human oversight. The Information Commissioner’s Office warns that human review must be meaningful, not rubber-stamping, otherwise the decision is treated as automated regardless of how many people signed off. A 2025 University of Washington study found human reviewers followed biased AI hiring recommendations roughly 90 per cent of the time even when they registered the bias. A Hangzhou intermediate court in 2026 ruled that a Chinese company could not dismiss a worker on AI-led grounds without independent just-cause analysis. The fourth category is governance, not law. The owner sets the boundary of where AI may recommend and where humans must reason independently, then enforces it as a rule rather than reopening the question each time.
How does the thirty-second check actually run?
The check is four questions, asked in order, applied before the recommendation’s reasoning is read. Is this decision difficult or impossible to reverse? Does it affect an individual’s employment, livelihood, benefits, or access to opportunity? Does it sit in a regulated or fiduciary context? Does it fall within the AI authority the owner has documented? Any answer pointing to veto territory routes the recommendation to reframe.
The order matters because of how readers process AI output. Reading a well-written analysis first creates a halo effect: the reviewer mentally accepts the recommendation while telling themselves they are being critical. A category-level check avoids that trap by running before the rhetoric is encountered. In practice, a hiring manager faced with an AI recommendation to reject a candidate runs the four questions in under a minute. If the recommendation is a shortlist ranking within delegated authority, analytical review proceeds. If it is a final hiring call affecting a protected characteristic, the veto fires and the recommendation is reframed before its reasoning is given any weight.
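To make the routing concrete, here is a minimal sketch of the four questions expressed as a category-level rule. The Decision fields and the veto_check function are hypothetical names chosen for illustration, not part of any tool; the point the sketch makes is that the check reads only the category of the decision and never consults the recommendation's reasoning.

```python
from dataclasses import dataclass

# Hypothetical sketch of the thirty-second veto check. Field and function names
# are illustrative placeholders, not a real product or library.

@dataclass
class Decision:
    hard_to_reverse: bool         # Q1: difficult or impossible to reverse?
    affects_individual: bool      # Q2: employment, livelihood, benefits, or opportunity in play?
    regulated_or_fiduciary: bool  # Q3: does a non-delegable duty of care apply?
    within_ai_authority: bool     # Q4: inside the documented AI delegation?

def veto_check(decision: Decision) -> str:
    """Category-level check, run before the AI's reasoning is read."""
    veto = (
        decision.hard_to_reverse
        or decision.affects_individual
        or decision.regulated_or_fiduciary
        or not decision.within_ai_authority
    )
    # The veto fires on category alone; the content of the recommendation is never consulted.
    return "reframe" if veto else "analytical_review"

# A final hiring call affecting an individual is vetoed even inside delegated authority.
print(veto_check(Decision(hard_to_reverse=False, affects_individual=True,
                          regulated_or_fiduciary=False, within_ai_authority=True)))
# -> reframe
```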
What does it actually look like when the veto fires?
A fired veto does not mean the recommendation is rejected; it means it is reframed. The owner removes the AI output from the position of recommending and restates the underlying decision question in human terms. Take the dismissal example. The owner asks the question independently: does this employee's performance, conduct, or cost-to-value justify dismissal under our policy and legal obligations? She then reaches a position on her own.
Reframing usually surfaces factors the algorithm did not weigh. The Hangzhou court found that while productivity-to-cost ratios were poor, the company had not demonstrated that continued employment was impossible, nor had it offered fair process. A reframe surfaces those gaps because it forces the question into the legal and ethical frame the law actually uses. Sometimes the reframe demotes the AI output to a data point inside a wider option set. A recommendation against promoting a candidate, based on pattern matching against past promotions, becomes one input among three or four, with the owner applying independent judgment to decide which path is fair and fits the strategy. The AI analysis still informs; it just no longer recommends. Reframing restores the human as the decision-maker, with the analysis in a supporting role.
How does the veto check survive the owner not being in every conversation?
The check works at owner level only for as long as the owner personally reviews every AI-influenced decision, and that phase does not last long. To scale, the discipline has to move from personal habit into documented process. Three elements make it stick: a published delegation policy listing what is in scope for AI recommendations and what is not, a four-question checklist embedded in the review workflow, and a team trained to recognise what reframing looks like in practice.
The shift from intuition to process is what protects the firm when the owner is on holiday, on a flight, or absorbed in a different decision. Deloitte's 2026 research on AI decision-making found that organisations performing best on AI governance establish clear decision rights and revisit them regularly as the firm's use of AI changes. Singapore's 2026 Model AI Governance Framework for Agentic AI treats least-privilege access and human approval checkpoints on sensitive actions as baseline governance. Veto authority is documented, applied at category level, and exercised at workflow speed by whoever is in the seat. That is what makes the discipline survive the owner not being in the room.
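As a sketch of what the documented boundary might look like, the mapping below lists example decision categories on each side of the line. The category names and the in_scope helper are hypothetical illustrations, not a prescribed taxonomy; the real policy would use the firm's own categories. The useful design choice is the default: anything not yet listed falls on the veto side until the owner deliberately adds it.

```python
# Hypothetical delegation policy: which decision categories AI may recommend on.
# Category names are illustrative placeholders, not a prescribed taxonomy.
DELEGATION_POLICY = {
    "shortlist_ranking": True,        # in scope: AI may recommend, analytical review applies
    "supplier_quote_comparison": True,
    "final_hiring_decision": False,   # out of scope: humans must reason independently
    "dismissal": False,
    "regulated_client_advice": False,
}

def in_scope(category: str) -> bool:
    """Unknown or unlisted categories default to the veto side of the line."""
    return DELEGATION_POLICY.get(category, False)

print(in_scope("shortlist_ranking"))    # True
print(in_scope("dismissal"))            # False
print(in_scope("novel_decision_type"))  # False: default-deny until the owner adds it
```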
A veto check is the discipline that makes responsible AI adoption possible at SME scale. Many AI recommendations can be evaluated on their analytical merit and acted on if they hold up. The few that should not be acted on regardless of merit are the ones where careers, livelihoods, regulated duties, and reversibility are in play, and where the cost of a wrong call is asymmetric. Naming those categories in advance, and running the check at category level before the analysis is read, separates owners who use AI deliberately from owners who find themselves explaining to a regulator why a confident-sounding output got the better of them.
If you would like to think through where the veto categories should sit in your firm, book a conversation.



