AI as a sparring partner for hard decisions

[Image: a founder at a kitchen table reading a long block of text on a laptop, with notes and a coffee mug beside her, in mid-morning light]
TL;DR

AI works as a sparring partner for hard founder decisions when it is set against you, not for you. State the decision, state your lean, ask AI to argue the strongest case for the opposite and find the assumption your lean rests on. It works for trade-offs, where weighing matters. It does not work for values, where the answer is yours alone. Treat the output as a hypothesis to test, not a verdict to accept.

Key takeaways

- The bottleneck on most stuck founder decisions is judgement, not data. More research at this stage compounds the problem; AI as a sparring partner relieves it.
- The sparring prompt is mechanical: state the decision, state your lean, ask the model to argue the strongest case for the opposite with specific examples of how the opposite has played out elsewhere.
- Round two finds the assumption you have not tested: ask the model to identify the single load-bearing belief your lean depends on, and what evidence would actually disconfirm it.
- Sparring works for trade-offs and stops at values. Trade-offs are weighable; values are not. Mistaking one for the other is the most common misuse of this practice.
- Write the final decision in one sentence, save it with the date, and note your confidence and the two things you might be wrong about. In six months that note is the most useful page in your week.

She has been turning the pricing decision over for nine days. There are three reasonable answers and a slight lean toward one of them, the one her gut keeps going back to even though she cannot quite say why. She has read two competitor sites, drafted a spreadsheet, and asked her partner over dinner. The lean has not moved. The decision has not moved either. What she actually needs is not another data point. She needs someone to argue the strongest case against the answer she is leaning toward, in writing, fast, without making it personal.

This is the part of founder decision-making that AI is unusually well-suited for, and it is a different use case from research, drafting, or summarisation. It is sparring. The job is not to give you the answer, it is to stress-test the one you are quietly already drifting toward. The discomfort of asking a model to argue against you is the practice, not a side effect to optimise away.

This post is part of the AI for your own work cluster, and sits next to pre-mortems with AI, the same lineage applied to a different stage. If you are using sparring to set a price specifically, pricing a new offer with AI is the worked example.

What is AI as a sparring partner?

AI as a sparring partner is the practice of using a large language model to argue the strongest case against a decision you are about to make. You give it the decision, your lean, and the context. You ask it to attack hard, with specific examples and named failure mechanisms. You read the counter-case, treat it as a hypothesis to test, and decide. The model plays the awkward voice nobody else will.

The practice rests on decision science that predates AI by decades. Daniel Kahneman’s work on System 1 and System 2 thinking shows that founders under operational load default to fast, intuitive judgements that feel certain and are often wrong. Gary Klein’s pre-mortem technique, published in Harvard Business Review in 2007, asks teams to imagine a project has already failed and reason backwards. Charlan Nemeth’s research on dissent shows that authentic disagreement improves decisions and performed disagreement often makes them worse. What AI offers is the rigorous counter-case on demand, at 6:30 on a Tuesday, when the people whose dissent would be authentic are not available.

Why does sparring beat more research at this stage?

The gap is judgement, not data. By the time a decision has been turning over for a week, more information rarely shifts it. Confirmation bias, in Raymond Nickerson’s comprehensive review, ensures new evidence gets read through the lens of the lean you already have. Motivated reasoning, in Ziva Kunda’s framing, means the brain is constructing justifications below conscious thought. A fourth competitor analysis interrupts neither. A forceful counter-argument with specific examples sometimes does.

Recent Harvard Business Review work on how leaders use AI for strategic advice flagged a related trap. AI tends to default to compromise recommendations, splitting the difference between options when the real question is which option to pick. The researchers landed on a simple discipline: ask the model to argue against your position, and require concrete examples before acting on what comes back. That is the sparring posture, and it is the use case that pays back hardest for the decisions that actually matter.

What does the sparring prompt actually look like?

It has three parts and they need to land in order. State the decision precisely, including the constraints. State your current lean and why you think you hold it. Ask the model to argue the strongest case for the opposite with specific examples and the mechanisms that would make those examples relevant to your situation. The specificity is what separates a useful counter-case from a generic risk register.

A worked example. A services-firm founder in South London is choosing between expanding into commercial work over eighteen months or putting the same capital into digital scheduling for the existing residential business. Her lean is expansion, because the margins look better and the contracts are longer. The sparring prompt: “I run a residential services firm with three teams. I am choosing between expanding into commercial work over eighteen months, or investing the capital in digital scheduling. My lean is expansion. Argue the strongest case against expansion, with five specific mechanisms by which similar firms have failed when making this move, and explain how each mechanism would actually play out in my situation.”

The model will produce a counter-case. Some of it will be generic. Some of it, if the prompt was specific, will land. The bit that lands is the one to investigate. Round two of the sparring is where the real work happens: ask the model to identify the single load-bearing assumption your lean depends on, and what evidence would actually disconfirm it. That question, "what would have to be true for me to be wrong", is often the one nobody around the table has asked aloud.

When does sparring work, and when should you not use it?

It works for trade-offs and it stops at values. Trade-offs are decisions where two reasonable options can be weighed on common axes: expansion versus consolidation, hire versus contract, raise prices versus protect volume. The model can compare, stress-test, and rank. Roger Martin’s integrative-thinking framework calls this holding two opposing ideas in productive tension, and the AI partner keeps the tension live rather than collapsing it too early into a binary.

A values question is a different shape: a line you have decided you will not cross. Richard Rumelt’s discipline of articulating the diagnosis before choosing the policy is the upstream move that keeps the two apart. Whether to take on a client whose work conflicts with your stated principles is not something to spar over. The model will produce a fluent argument either way, because text is what it makes, and the fluency will be persuasive. Reach for a coach, a partner, or a long walk for that kind of question instead.

A second flag: the MIT Sloan research on AI-supported decision-making found that the tool amplifies your existing decision style rather than overriding it. If your style at 11 p.m. on a Sunday is to confirm what you already feel, the model will perform that confirmation with great fluency. The sparring prompt earns its keep in the morning, after coffee, with the decision written out longhand on a single page first.

What should you do with the output once you have it?

Treat it as a hypothesis to test, not a verdict to accept. Pick the strongest argument the model produced and spend thirty minutes checking the example it cited. If the example holds up under research, the argument is real and you have to weigh it. If the example dissolves, the model was performing dissent rather than carrying it. Tobi Lütke at Shopify frames the posture as refusing to accept the first working solution.

Once you have decided, write the final decision in one sentence with the date, your confidence, and the two things you might be wrong about. Philip Tetlock’s forecasting research shows that writing predictions down before the outcome is known is the single habit that reliably improves judgement over time. In six months, when the result is in, that note tells you, specifically, which of your instincts are calibrated and which are not. That is the input you cannot get from any model.

If you would like to talk through where the sparring practice fits in your decision rhythm, and where it does not, book a conversation.

Sources

- Daniel Kahneman (2011). "Thinking, Fast and Slow". Cited as the foundation for the System 1 / System 2 framing under the post's claim that founders default to intuition under operational load. https://dn790002.ca.archive.org/0/items/DanielKahnemanThinkingFastAndSlow/Daniel%20Kahneman-Thinking,%20Fast%20and%20Slow%20%20.pdf
- Amos Tversky and Daniel Kahneman (1974). "Judgment under Uncertainty: Heuristics and Biases", Science 185(4157). The original paper on representativeness, availability, and anchoring, the predictable distortions sparring is designed to surface. https://www.science.org/doi/10.1126/science.185.4157.1124
- Daniel Kahneman and Gary Klein (2009). "Conditions for Intuitive Expertise: A Failure to Disagree", American Psychologist. The reference for when intuitive judgement can be trusted; low-regularity strategic choices fail the test. https://www.semanticscholar.org/paper/Conditions-for-intuitive-expertise%3A-a-failure-to-Kahneman-Klein/f1a5fb0c4b9703b3213bc3bd2dfe1f79ee35d511
- Gary Klein (2007). "Performing a Project Premortem", Harvard Business Review. The pre-mortem technique cited as the working ancestor of the sparring practice. https://hbr.org/2007/09/performing-a-project-premortem
- Charlan Nemeth (2012). "The Psychological Basis of Quality Decision Making", IRLE working paper. The reference under the claim that authentic dissent improves decisions and that performed dissent often does not. https://irle.berkeley.edu/wp-content/uploads/2012/08/The-Psychological-Basis-of-Quality-Decision-Making.pdf
- Irving Janis (1972). "Victims of Groupthink". The reference under the post's argument that cohesive teams degrade strategic judgement, the gap AI sparring is positioned to fill. https://www.afirstlook.com/docs/groupthink.pdf
- Roger Martin (2007). "How Successful Leaders Think", Harvard Business Review. The integrative-thinking reference for holding two opposing options in productive tension rather than collapsing into binary choice. https://hbr.org/2007/06/how-successful-leaders-think
- Richard Rumelt (2011). "Good Strategy, Bad Strategy". The diagnosis-policy-action kernel cited under the discipline of stating the decision precisely before sparring begins. https://therightquestions.co/book-review-of-a-top-book-on-strategy/
- MIT Sloan Management Review (2023). "The Human Factor in AI-Based Decision Making". The reference under the claim that AI amplifies an operator's existing decision style rather than overriding it. https://sloanreview.mit.edu/article/the-human-factor-in-ai-based-decision-making/
- First Round Review (2025). "Inside Shopify's AI Operating System with Tobi Lutke". Cited for the operator-side practice of stress-testing the first working solution rather than accepting it. https://www.firstround.com/ai/shopify

Frequently asked questions

Does this not just produce a confident counter-argument that talks me out of a good decision?

It can, if you treat the output as a verdict. It is a hypothesis. Large language models generate fluent, plausible-sounding counter-cases that may or may not be factually correct; the Harvard Business Review's 2025 work on AI advice flags this directly. The fix is to ask for specific examples, then check the strongest one. If the example holds up under thirty minutes of research, the argument is real. If it dissolves, you have learned the model was performing dissent rather than carrying it.

Why not just ask a trusted advisor or my board?

Do both. The research on group decision-making is unflattering: Irving Janis on groupthink and Charlan Nemeth on dissent both show that cohesive teams make worse decisions when nobody plays the awkward role. AI fills a different gap: it is available at 6:30 on a Tuesday, it argues hard without ego, and it does not need protecting from you. Use it to surface the case before you bring it to the people whose answer you actually need.

What kinds of decisions does this not help with?

Anything that is genuinely a values question: what kind of business you want, who you are willing to disappoint, what you will not trade for growth. Sparring requires options that can be weighed against each other. A values question does not have weighable sides; it has a line you either cross or do not. Asking AI to argue the strongest case for crossing it produces text, not insight. Reach for a friend, a coach, a partner, or a quiet weekend instead.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
