She has been turning the pricing decision over for nine days. There are three reasonable answers and a slight lean toward one of them, the one her gut keeps going back to even though she cannot quite say why. She has read two competitor sites, drafted a spreadsheet, and asked her partner over dinner. The lean has not moved. The decision has not moved either. What she actually needs is not another data point. She needs someone to argue the strongest case against the answer she is leaning toward, in writing, fast, without making it personal.
This is the part of founder decision-making that AI is unusually well-suited for, and it is a different use case from research, drafting, or summarisation. It is sparring. The job is not to give you the answer, it is to stress-test the one you are quietly already drifting toward. The discomfort of asking a model to argue against you is the practice, not a side effect to optimise away.
This post is part of the AI for your own work cluster, and sits next to pre-mortems with AI, the same lineage applied to a different stage. If you are using sparring to set a price specifically, pricing a new offer with AI is the worked example.
What is AI as a sparring partner?
AI as a sparring partner is the practice of using a large language model to argue the strongest case against a decision you are about to make. You give it the decision, your lean, and the context. You ask it to attack hard, with specific examples and named failure mechanisms. You read the counter-case, treat it as a hypothesis to test, and decide. The model plays the awkward voice nobody else will.
The practice rests on decision science that predates AI by decades. Daniel Kahneman’s work on System 1 and System 2 thinking shows that founders under operational load default to fast, intuitive judgements that feel certain and are often wrong. Gary Klein’s pre-mortem technique, published in Harvard Business Review in 2007, asks teams to imagine a project has already failed and reason backwards. Charlan Nemeth’s research on dissent shows that authentic disagreement improves decisions and performed disagreement often makes them worse. What AI offers is the rigorous counter-case on demand, at 6:30 on a Tuesday, when the people whose dissent would be authentic are not available.
Why does sparring beat more research at this stage?
The gap is judgement, not data. By the time a decision has been turning over for a week, more information rarely shifts it. Confirmation bias, in Raymond Nickerson’s comprehensive review, ensures new evidence gets read through the lens of the lean you already have. Motivated reasoning, in Ziva Kunda’s framing, means the brain is constructing justifications below conscious thought. A fourth competitor analysis interrupts neither. A forceful counter-argument with specific examples sometimes does.
Recent Harvard Business Review work on how leaders use AI for strategic advice flagged a related trap. AI tends to default to compromise recommendations, splitting the difference between options when the real question is which option to pick. The researchers landed on a simple discipline, ask the model to argue against your position and require concrete examples before acting on what comes back. That is the sparring posture, and it is the use case that pays back hardest for the decisions that actually matter.
What does the sparring prompt actually look like?
It has three parts and they need to land in order. State the decision precisely, including the constraints. State your current lean and why you think you hold it. Ask the model to argue the strongest case for the opposite with specific examples and the mechanisms that would make those examples relevant to your situation. The specificity is what separates a useful counter-case from a generic risk register.
A worked example. A services-firm founder in South London is choosing between expanding into commercial work over eighteen months or putting the same capital into digital scheduling for the existing residential business. Her lean is expansion, because the margins look better and the contracts are longer. The sparring prompt: “I run a residential services firm with three teams. I am choosing between expanding into commercial work over eighteen months, or investing the capital in digital scheduling. My lean is expansion. Argue the strongest case against expansion, with five specific mechanisms by which similar firms have failed when making this move, and explain how each mechanism would actually play out in my situation.”
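For founders who run this loop often enough to want a reusable template, the three-part structure can be captured in a few lines. This is an illustrative sketch only, the function and field names are mine, and the resulting string is what you would paste into whichever model you use:

```python
def sparring_prompt(decision: str, lean: str, counter_request: str) -> str:
    """Assemble the three-part sparring prompt in order:
    the decision with its constraints, the current lean and why,
    then the demand for a specific, mechanism-level counter-case."""
    return "\n\n".join([
        f"The decision, including constraints: {decision}",
        f"My current lean, and why I think I hold it: {lean}",
        f"Argue the strongest case against my lean. {counter_request}",
    ])

prompt = sparring_prompt(
    decision=("I run a residential services firm with three teams. I am "
              "choosing between expanding into commercial work over eighteen "
              "months, or investing the capital in digital scheduling."),
    lean="Expansion, because the margins look better and the contracts are longer.",
    counter_request=("Give five specific mechanisms by which similar firms have "
                     "failed when making this move, and explain how each would "
                     "play out in my situation."),
)
print(prompt)
```

The point of the template is the ordering, the model sees the decision and the lean before the instruction to attack, which keeps the counter-case anchored to your actual situation rather than a generic risk list.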
The model will produce a counter-case. Some of it will be generic. Some of it, if the prompt was specific, will land. The bit that lands is the one to investigate. Round two of the sparring is where the real work happens, ask the model to identify the single load-bearing assumption your lean depends on, and what evidence would actually disconfirm it. That question, “what would have to be true for me to be wrong”, is often the one nobody around the table has asked aloud.
When does sparring work, and when should you not use it?
It works for trade-offs and it stops at values. Trade-offs are decisions where two reasonable options can be weighed on common axes, expansion versus consolidation, hire versus contract, raise prices versus protect volume. The model can compare, stress-test, and rank. Roger Martin’s integrative-thinking framework calls this holding two opposing ideas in productive tension, and the AI partner keeps the tension live rather than collapsing it too early into a binary.
A values question is a different shape, a line you have decided you will not cross. Richard Rumelt’s discipline of articulating the diagnosis before choosing the policy is the upstream move that keeps the two apart. Whether to take on a client whose work conflicts with your stated principles is not something to spar over. The model will produce a fluent argument either way, because text is what it makes, and the fluency will be persuasive. Reach for a coach, a partner, or a long walk for that kind of question instead.
A second flag, the MIT Sloan research on AI-supported decision-making found that the tool amplifies your existing decision style rather than overriding it. If your style at 11 p.m. on a Sunday is to confirm what you already feel, the model will perform that confirmation with great fluency. The sparring prompt earns its keep in the morning, after coffee, with the decision written out longhand on a single page first.
What should you do with the output once you have it?
Treat it as a hypothesis to test, not a verdict to accept. Pick the strongest argument the model produced and spend thirty minutes checking the example it cited. If the example holds up under research, the argument is real and you have to weigh it. If the example dissolves, the model was performing dissent rather than carrying it. Tobi Lütke at Shopify frames the posture as refusing to accept the first working solution.
Once you have decided, write the final decision in one sentence with the date, your confidence, and the two things you might be wrong about. Philip Tetlock’s forecasting research shows that writing predictions down before the outcome is known is the single habit that reliably improves judgement over time. In six months, when the result is in, that note tells you, specifically, which of your instincts are calibrated and which are not. That is the input you cannot get from any model.
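If you keep these notes as a running log, the record is small enough to sketch in full. A minimal illustration, with field names of my own choosing, of what Tetlock’s write-it-down habit asks you to capture:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionNote:
    decision: str                    # the final decision, in one sentence
    confidence: float                # 0.0 to 1.0, stated before the outcome is known
    might_be_wrong_about: list[str]  # the two things you might be wrong about
    logged_on: date = field(default_factory=date.today)

note = DecisionNote(
    decision="Hold residential focus; revisit commercial expansion in Q3.",
    confidence=0.65,
    might_be_wrong_about=[
        "commercial margins may widen faster than modelled",
        "scheduling software may not lift utilisation as hoped",
    ],
)
```

The date and the confidence figure are the fields that matter in six months, they are what let you compare the prediction you actually made against the outcome, rather than the prediction you remember making.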
If you would like to talk through where the sparring practice fits in your decision rhythm, and where it does not, book a conversation.