Pre-mortems with AI before you commit

[Image: a founder reading a long block of text on her laptop at her kitchen table in the evening, a printed term sheet and pencil notes beside her]
TL;DR

A pre-mortem asks you to imagine the project failed twelve months from now and write the history of why. The 1989 research by Mitchell, Russo and Pennington, later turned into a working protocol by Gary Klein, showed this prospective-hindsight framing produces around 30 per cent more reasons for a given outcome than forward forecasting. AI is unusually good at this because it does not share your optimism or your advisors' alignment incentives. Three prompts, two hours, one honest risk register.

Key takeaways

- A pre-mortem is the cheapest insurance policy left when you are inside 24 hours of a major commitment. The 1989 Mitchell, Russo and Pennington research and Klein's 2007 HBR write-up both show that imagining failure as a completed fact surfaces around 30 per cent more causal explanations than forward-looking risk lists.
- AI sits well in this seat because it does not share your optimism, your advisors' alignment incentives, or your team's reluctance to disagree with the founder in the room. Used as a structured devil's advocate, it generates failure narratives in minutes rather than days.
- The prompt sequence has three turns, not one. Turn one is the failure narrative. Turn two asks for the failure mode you are least prepared for. Turn three forces the AI to argue against its own list. Single-turn pre-mortems produce generic risks; the third turn is where the work happens.
- The judgement step is yours. Sort the AI's failure modes into three buckets: real risks (mitigate before signing), paranoia (let go), and walk-away triggers (kill criteria written down before the meeting). Without kill criteria in writing you will sign.
- This is not a SWOT replacement so much as a SWOT antidote. SWOTs are tidy, balanced, and consensus-friendly. Pre-mortems are uneven, uncomfortable, and honest. The discomfort is the signal, not the bug.

She is twenty-four hours away from signing a £600k partnership commitment, and her gut is not entirely sure. Her lawyer has redlined the contract. Her accountant has pressure-tested the numbers. Her board chair has nodded the deal through. The pitch deck from the partner is good. The strategic logic is good. Everything in the room is pointing at sign.

The only thing left in the room that has not lined up behind the deal is her. And the AI on her laptop, if she chooses to use it that way.

What is a pre-mortem with AI?

A pre-mortem is a deliberately uncomfortable thinking move. You imagine the commitment has been signed, twelve months have passed, and the project has failed badly. From that future point you write the history of why. Done with AI as the second voice, you draft a one-page failure narrative in minutes, then push for the failure mode you are least prepared for. Three turns, two hours, one honest risk register.

The methodology is older than the AI part. Deborah Mitchell, Edward Russo and Nancy Pennington published the founding research in 1989, showing that imagining a future event as a completed certainty generates around 30 per cent more causal reasons than treating the same event as a probability. Gary Klein turned that finding into a working protocol, written up in Harvard Business Review in 2007. Daniel Kahneman has called it the single most useful debiasing tool he knows for organisational decisions. Amy Edmondson’s research on psychological safety explains why it works in a room: a structured exercise that legitimises dissent breaks the consensus pressure that suppresses doubt at every other commitment-stage meeting.

What AI adds is speed and a second perspective that does not share your optimism. A founder with 24 hours has no time to convene a workshop. The model gives you something close enough, fast enough, if you frame the prompts correctly.

Why does this matter for your business?

Because the decisions that hurt owner-operators most are the ones the room agreed on. Sydney Finkelstein’s Tuck research on executive failure walks through case after case where the formal risk process existed and was followed, and the organisation still committed to the wrong move because momentum and consensus overrode it. Iridium, AOL/Time Warner, and Patisserie Valerie all had blue-chip advisers on every side of the table. None of that prevented the commitment from being signed.

The mechanism is not stupidity. “Advisors all approved” feels like risk has been handled, when what has actually happened is that everyone in the room shares a financial or social interest in the deal closing. Your lawyer bills more if it signs. Your corporate finance adviser bills more if it signs. Your board may have promised this number to investors. The pre-mortem inserts a voice with no skin in the deal, asking the only question the room is structurally bad at asking.

For an owner-operator of a UK services SME, the asymmetry is sharper still. A FTSE 250 board can absorb a bad partnership and survive. A 12-person consultancy that signs a five-year exclusive committing 30 per cent of revenue to one partner cannot. The cost of getting one of these wrong is years of capital, attention, and reversibility you no longer have. The cost of the pre-mortem is two hours.

Where will you actually meet a pre-mortem with AI?

In four quietly recurring places, and one obvious one. The obvious one is the major commitment itself: the partnership signing, the equity round, the lease on a second office, the long-term technology integration. The four quieter places are where the discipline compounds, because using it small means the muscle is already warm when the big call lands.

The first quiet place is the irreversible hire. Senior hires above £80k carry roughly the same one-way-door character as a small partnership. Run the pre-mortem on what twelve months of underperformance looks like and what you will have failed to ask at interview. The second is the new offer or pricing change before it goes to market. The third is any technology platform commitment longer than a year, where lock-in costs only become visible once you have trained the team on it. The fourth is the strategic pivot, the one where you have decided in your head but have not said out loud yet. The pre-mortem is the structured way of saying it out loud to yourself.

The prompt sequence is the same in all five. State the commitment with specificity, including financial scale, term length, and what you are giving up to do it. Ask the model to write the failure narrative as if it has already happened. In the second turn, ask explicitly for the failure mode it thinks you are least prepared for, by name. In the third turn, ask the model to argue against its own list and identify which of its own scenarios are weakest. The third turn is what stops the exercise becoming theatre. The companion piece, personal post-mortems with AI, runs the same machinery after the fact, on a project that has actually finished. They are the same lineage from opposite ends.
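The three-turn sequence above can be sketched as a small helper that builds the prompts in order. This is a minimal, illustrative version: the function name and the exact wording of each prompt are this sketch's own, not a canonical template from Klein's protocol, and you would paste each turn into your AI tool one at a time, reading the reply before sending the next.

```python
# A minimal sketch of the three-turn pre-mortem prompt sequence.
# The function name and prompt wording are illustrative, not canonical.

def premortem_prompts(commitment: str) -> list[str]:
    """Build the three prompts in order; each turn responds to the model's
    previous reply, so send them one at a time in a single conversation."""
    return [
        # Turn 1: frame failure as a completed fact, not a probability.
        "It is twelve months from now and this commitment has failed badly: "
        f"{commitment} Write the one-page history of why it failed.",
        # Turn 2: force a single, named gap rather than a generic list.
        "Of the failure modes above, which single one am I least prepared "
        "for, and why? Name it explicitly.",
        # Turn 3: the adversarial pass that stops the exercise being theatre.
        "Now argue against your own list. Which of your scenarios are "
        "weakest or most generic, and which genuinely fit this situation?",
    ]

prompts = premortem_prompts(
    "a five-year exclusive partnership committing 30 per cent of revenue, "
    "with a £600k minimum spend and a 12-month notice period to exit."
)
for i, p in enumerate(prompts, 1):
    print(f"Turn {i}: {p[:60]}...")
```

Note that the commitment statement carries the specifics (scale, term, exit terms); a vague commitment in turn one is what produces the generic risks the article warns about.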

When should you ask for one and when should you ignore it?

Ask for one whenever the decision is one-way through a door that closes behind you. Multi-year exclusivity, equity dilution, senior hire, debt covenant, technology lock-in, public position-taking. Jeff Bezos’s one-way door versus two-way door test holds well here. If reversing the decision means breaking a contract, paying out a settlement, or losing a year, run one. If you can change your mind in a week without writing anyone a cheque, just decide and adjust.

Ignore the pre-mortem when the decision is genuinely two-way and the cost of the deliberation exceeds the cost of recovery. Ignore it when you are pre-committed for non-business reasons (a family obligation, a moral one, a reputational one) and the analysis is rationalisation theatre. Ignore it when you have already run one this week on the same call and are reaching for it a second time to delay signing. The discipline is a tool, not a stalling pattern.

The structural risk worth flagging is overconfidence in the AI’s output. Ethan Mollick’s writing on language models is sharp here: AI output optimises for plausibility and coherence, not accuracy. A failure narrative that reads beautifully may be generic and miss the actual risk in your situation. Harvard Business School research on AI-assisted entrepreneurship found strong operators got 10 to 15 per cent better outcomes with AI help, while weaker operators got worse outcomes, because they could not tell which AI suggestions to keep and which to discard. The pre-mortem only works if you bring the same critical eye to the AI’s failure list that you would to a junior consultant’s.

The closest sibling is the personal post-mortem, the same machinery applied after a project has run rather than before it commits. Klein’s 2007 HBR write-up is still the cleanest read on the methodology lineage. Kahneman’s chapter in Thinking, Fast and Slow on the planning fallacy gives you the cognitive-bias backdrop, and Amy Edmondson’s work on psychological safety explains the team dynamics that pre-mortems formalise.

Within the cluster, the natural next step is What would happen if I didn’t stress-test this?, which lives in the Eliminate quadrant of the EAD-Do framework and asks the cheaper version of the same question: should this commitment exist at all, before we even pre-mortem it. The cluster opener, AI for your own work, not just your business, sits one level up.

If your week has a £600k decision in it, the cheapest two hours you will spend this month are the two you spend on the pre-mortem. If it has a £20k decision in it, run a 30-minute version. The discipline scales. The cost of skipping it does not.

If you would like a peer to walk through the pre-mortem with you on a current decision, book a conversation.

Sources

- Klein, Gary (2007). "Performing a Project Premortem", Harvard Business Review. The canonical write-up of the methodology, including the 30-minute team protocol, the silent-generation phase, and the "imagine the project has failed" framing. Cited as the methodology lineage source. https://hbr.org/2007/09/performing-a-project-premortem
- Mitchell, Deborah, Russo, J. Edward, and Pennington, Nancy (1989). "Back to the future: Temporal perspective in the explanation of events", Journal of Behavioral Decision Making. The seminal prospective-hindsight study showing imagined-as-certain outcomes generate around 30 per cent more causal explanations. Cited as the empirical foundation underneath Klein's protocol. https://onlinelibrary.wiley.com/doi/10.1002/bdm.3960020103
- Veinott, Beth, Klein, Gary, and Wiggins, Sterling (2010). "Evaluating the effectiveness of the PreMortem technique on plan confidence", Proceedings of the 7th International ISCRAM Conference. Compared pre-mortem against Pros/Cons and Cons-only methods, finding roughly twice the overconfidence reduction. Cited as the comparative-evidence anchor. https://idl.iscram.org/files/veinott/2010/1147_Veinott_etal2010.pdf
- Kahneman, Daniel (2011). Thinking, Fast and Slow, Farrar, Straus and Giroux. Endorses Klein's pre-mortem as the single most useful debiasing tool for organisational decisions, framed against the planning fallacy and inside-view bias. Cited as the senior-academic endorsement and the planning-fallacy reference. https://us.macmillan.com/books/9780374533557/thinkingfastandslow
- Edmondson, Amy (2018). The Fearless Organization, Wiley. Documents how psychological safety determines whether dissent is voiced or suppressed in commitment-stage decisions, the mechanism pre-mortems formalise. Cited as the team-dynamics anchor. https://www.wiley.com/en-gb/The+Fearless+Organization%3A+Creating+Psychological+Safety+in+the+Workplace+for+Learning%2C+Innovation%2C+and+Growth-p-9781119477242
- Foster, Wade at Zapier (2025). Lenny's Newsletter "How I AI" episode, on structuring AI-augmented workflows where the human stays in an actively adversarial role rather than passively accepting model output. Cited as the practitioner precedent for AI-as-dissenter discipline. https://www.lennysnewsletter.com/p/zapiers-ceo-shares-his-personal-ai-stack
- Mollick, Ethan (2024). "Thinking like an AI", One Useful Thing. Argues that AI output optimises for coherence and plausibility rather than accuracy, which is why human filtering of AI risk lists is the load-bearing step in any pre-mortem. Cited as the AI-limitation evidence. https://www.oneusefulthing.org/p/thinking-like-an-ai
- Finkelstein, Sydney (2003). Why Smart Executives Fail, Portfolio. The Iridium and AOL/Time Warner case studies showing how organisational momentum overrides risk processes at the commitment gate. Cited as the case-evidence anchor for "advisors all approved" failures. https://mba.tuck.dartmouth.edu/pages/faculty/syd.finkelstein/articles/Iridium.pdf
- Corporate Governance Institute (2018). Patisserie Valerie case study on auditor and board governance failure, including the £13m executive share sales preceding the collapse. Cited as the UK-context reference for consensus-bias governance failure. https://www.thecorporategovernanceinstitute.com/insights/case-studies/what-was-the-patisserie-valerie-controversy/
- KPMG UK (2025). Due diligence services for UK SME and mid-market transactions, including the operational, financial, legal, technology, and HR dimensions of pre-commitment review. Cited as the UK-practitioner reference for what formal due diligence does and does not catch. https://kpmg.com/uk/en/services/deal-advisory/due-diligence-strategy.html

Frequently asked questions

Will my AI tool just tell me what I want to hear?

It will if you let it. Default chatbot behaviour leans agreeable, especially when the user has framed the situation positively. The fix is mechanical. Frame the prompt as a completed failure, not a risk assessment. Ask for the failure mode you are least prepared for, by name. Then ask the model to argue against its own list and find the weakest scenarios. Three turns is the minimum. One turn produces flattery dressed as analysis.

How long does this actually take when I am genuinely time-pressured?

Around two hours of focused time, which compresses comfortably into a 24-hour window before signing. Phase one is roughly 30 minutes to write down exactly what you are committing to (term length, exclusivity, minimum spend, exit clauses). Phase two is 60 minutes with the AI across the three turns. Phase three is 30 minutes consolidating the output into a written risk register and your kill criteria. If you have less than two hours, you have less than enough.

Is this not just a SWOT analysis with extra steps?

No, the framing is materially different. SWOT asks you to balance four quadrants, which encourages you to find a strength to offset every weakness and an opportunity to soften every threat. Pre-mortems start from "this failed" as a fact, then work backwards. Klein's lineage is the cognitive psychology of prospective hindsight, not strategic planning, and the Veinott team's 2010 research found pre-mortems reduced overconfidence about twice as much as Pros/Cons-style methods.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
