She is twenty-four hours away from signing a £600k partnership commitment, and her gut is not entirely sure. Her lawyer has redlined the contract. Her accountant has pressure-tested the numbers. Her board chair has nodded the deal through. The pitch deck from the partner is good. The strategic logic is good. Everything in the room is pointing at sign.
The only thing left in the room that has not lined up behind the deal is her. And the AI on her laptop, if she chooses to use it that way.
What is a pre-mortem with AI?
A pre-mortem is a deliberately uncomfortable thinking move. You imagine the commitment has been signed, twelve months have passed, and the project has failed badly. From that future point you write the history of why. Done with AI as the second voice, you draft a one-page failure narrative in minutes, then push for the failure mode you are least prepared for. Three turns, two hours, one honest risk register.
The methodology is older than the AI part. Deborah Mitchell, Edward Russo and Nancy Pennington published the founding research in 1989, showing that imagining a future event as a completed certainty generates around 30 per cent more causal reasons than treating the same event as a probability. Gary Klein turned that finding into a working protocol, written up in Harvard Business Review in 2007. Daniel Kahneman has called it the single most useful debiasing tool he knows for organisational decisions. Amy Edmondson’s research on psychological safety explains why it works in a room: a structured exercise that legitimises dissent breaks the consensus pressure that suppresses doubt at every other commitment-stage meeting.
What AI adds is speed and a second perspective that does not share your optimism. A founder with 24 hours has no time to convene a workshop. The model gives you something close enough, fast enough, if you frame the prompts correctly.
Why does this matter for your business?
Because the decisions that hurt owner-operators most are the ones the room agreed on. Sydney Finkelstein’s Tuck research on executive failure walks through case after case where the formal risk process existed and was followed, and the organisation still committed to the wrong move because momentum and consensus overrode it. Iridium, AOL/Time Warner, and Patisserie Valerie all had blue-chip advisers on every side of the table. None of that prevented the commitment from being signed.
The mechanism is not stupidity. “Advisors all approved” feels like risk has been handled, when what has actually happened is that everyone in the room shares a financial or social interest in the deal closing. Your lawyer bills more if it signs. Your corporate finance adviser bills more if it signs. Your board may have promised this number to investors. The pre-mortem inserts a voice with no skin in the deal, asking the only question the room is structurally bad at asking.
For an owner-operator of a UK services SME, the asymmetry is sharper still. A FTSE 250 board can absorb a bad partnership and survive. A 12-person consultancy that signs a five-year exclusive committing 30 per cent of revenue to one partner cannot. The cost of getting one of these wrong is years of capital, attention, and reversibility you no longer have. The cost of the pre-mortem is two hours.
Where will you actually meet a pre-mortem with AI?
In four quietly recurring places, and one obvious one. The obvious one is the major commitment itself: the partnership signing, the equity round, the lease on a second office, the long-term technology integration. The four quieter places are where the discipline compounds, because using it small means the muscle is already warm when the big call lands.
The first quiet place is the irreversible hire. Senior hires above £80k carry roughly the same one-way-door character as a small partnership. Run the pre-mortem on what twelve months of underperformance looks like and what you will have failed to ask at interview. The second is the new offer or pricing change before it goes to market. The third is any technology platform commitment longer than a year, where lock-in costs only become visible once you have trained the team on it. The fourth is the strategic pivot, the one you have decided on in your head but have not said out loud yet. The pre-mortem is the structured way of saying it out loud to yourself.
The prompt sequence is the same in all five. State the commitment with specificity, including financial scale, term length, and what you are giving up to do it. Ask the model to write the failure narrative as if it has already happened. In the second turn, ask explicitly for the failure mode it thinks you are least prepared for, by name. In the third turn, ask the model to argue against its own list and identify which of its own scenarios are weakest. The third turn is what stops the exercise becoming theatre. The companion piece, personal post-mortems with AI, runs the same machinery after the fact, on a project that has actually finished. They are the same lineage from opposite ends.
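If it helps to see the three turns pinned down, here is a minimal sketch of the sequence as plain prompt templates. The function name, parameters, and exact wording are all illustrative assumptions, not a fixed script; the structure of the three turns is the point, and the prompts are meant to be sent to whatever AI assistant you use, one at a time, reading each answer before sending the next.

```python
# Illustrative sketch of the three-turn pre-mortem prompt sequence.
# Function name, parameters, and wording are assumptions; adapt freely.

def premortem_prompts(commitment: str, scale: str, term: str,
                      opportunity_cost: str) -> list[str]:
    """Return the three prompts, in order, for a three-turn pre-mortem."""
    # Turn 1: state the commitment with specificity, then ask for the
    # failure narrative written as if the failure has already happened.
    turn_1 = (
        f"Assume it is twelve months from now and this commitment has "
        f"failed badly: {commitment}. Financial scale: {scale}. "
        f"Term: {term}. What we gave up to do it: {opportunity_cost}. "
        "Write a one-page history of why it failed, as if the failure "
        "has already happened."
    )
    # Turn 2: ask explicitly for the failure mode you are least
    # prepared for, by name.
    turn_2 = (
        "Of the failure modes in your narrative, name the single one you "
        "think I am least prepared for, and explain why that one "
        "specifically."
    )
    # Turn 3: make the model argue against its own list, which is what
    # stops the exercise becoming theatre.
    turn_3 = (
        "Now argue against your own list. Which of your scenarios are "
        "weakest, most generic, or least likely to apply to this "
        "specific situation?"
    )
    return [turn_1, turn_2, turn_3]

# Example using the £600k partnership from the opening (details invented).
prompts = premortem_prompts(
    commitment="a five-year exclusive partnership with one delivery partner",
    scale="£600k committed, roughly 30 per cent of revenue",
    term="five years, with an exclusivity clause",
    opportunity_cost="freedom to work with that partner's competitors",
)
for p in prompts:
    print(p, end="\n\n")
```

The value of writing it down like this is only that it forces the specificity the first turn depends on: if you cannot fill in the scale, term, and opportunity-cost slots, you are not ready to run the exercise.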
When should you ask for one and when should you ignore it?
Ask for one whenever the decision is a one-way door, the kind that closes behind you. Multi-year exclusivity, equity dilution, a senior hire, a debt covenant, technology lock-in, public position-taking. Jeff Bezos's one-way door versus two-way door test holds well here. If reversing the decision means breaking a contract, paying out a settlement, or losing a year, run one. If you can change your mind in a week without writing anyone a cheque, just decide and adjust.
Ignore the pre-mortem when the decision is genuinely two-way and the cost of the deliberation exceeds the cost of recovery. Ignore it when you are pre-committed for non-business reasons (a family obligation, a moral one, a reputational one) and the analysis is rationalisation theatre. Ignore it when you have already run one this week on the same call and are reaching for it a second time to delay signing. The discipline is a tool, not a stalling pattern.
The structural risk worth flagging is overconfidence in the AI’s output. Ethan Mollick’s writing on language models is sharp here: AI output optimises for plausibility and coherence, not accuracy. A failure narrative that reads beautifully may be generic and miss the actual risk in your situation. Harvard Business School research on AI-assisted entrepreneurship found strong operators got 10 to 15 per cent better outcomes with AI help, while weaker operators got worse outcomes, because they could not tell which AI suggestions to keep and which to discard. The pre-mortem only works if you bring the same critical eye to the AI’s failure list that you would to a junior consultant’s.
Related concepts and where to read next
The closest sibling is the personal post-mortem, the same machinery applied after a project has run rather than before it commits. Klein’s 2007 HBR write-up is still the cleanest read on the methodology lineage. Kahneman’s chapter in Thinking, Fast and Slow on the planning fallacy gives you the cognitive-bias backdrop, and Amy Edmondson’s work on psychological safety explains the team dynamics that pre-mortems formalise.
Within the cluster, the natural next step is What would happen if I didn’t stress-test this?, which lives in the Eliminate quadrant of the EAD-Do framework and asks the cheaper version of the same question: should this commitment exist at all, before we even pre-mortem it. The cluster opener, AI for your own work, not just your business, sits one level up.
If your week has a £600k decision in it, the cheapest two hours you will spend this month are the two you spend on the pre-mortem. If it has a £20k decision in it, run a 30-minute version. The discipline scales. The cost of skipping it does not.
If you would like a peer to walk through the pre-mortem with you on a current decision, book a conversation.



