Strategy work, with AI as the second brain in the room

TL;DR

Strategy is the discipline of staying with a hard question long enough to see the deeper question underneath it. AI as a second brain in the room extends your stamina for that work, surfaces contradictions in your plan, and lets you stress-test alternative framings before you commit. The decisions are still yours. AI is the stamina, not the judgement.

Key takeaways

- Strategy work fails most often when the founder runs out of stamina before they reach the deeper question. AI as a patient second voice lets you go three or four iterations further before you tap out, which is usually where the real diagnosis lives.
- The setup is mechanical, not magical: paste the working plan, paste the constraint set, paste your three biggest unknowns. Ask the model to surface contradictions and propose three alternative framings. Then pull on the most interesting thread.
- Roger Martin's "playing to win" choice cascade and Richard Rumelt's "crux" both treat strategy as a small set of hard choices. AI is useful inside those choices for synthesis, perspective and devil's advocacy. It is not useful for making the choice itself.
- Ethan Mollick's "Co-Intelligence" frame applies cleanly: treat the model as co-worker and coach, not as oracle. The cyborg or centaur pattern beats the self-automator every time on strategy work, because the founder still has to live with the decision.
- The boundary is the point: AI is your stamina, not your judgement. The moment you let the model decide what to commit to, you have outsourced the only part of the job that cannot be outsourced.

She is in the same coffee shop she always uses for this. Same window seat. Same notebook with the same hand-drawn boxes and arrows, the same three question marks she has not been able to answer for a month. The next-year plan is on her laptop in front of her. She has just hit the wall she always hits at the second espresso, where the obvious moves run out and the harder question underneath them refuses to come into focus. This is the moment AI on your desk is genuinely useful for, and almost no one is using it for that yet.

What is “second brain” strategy work?

It is using a frontier AI model as a patient sparring partner while you wrestle with a real strategic question. The model holds your plan, your constraints and your unknowns in working memory while you do the thinking. It surfaces contradictions you have stopped seeing, proposes framings you would not reach alone, and lets you go three or four iterations further before you tap out.

The shorthand “second brain” comes from Tiago Forte’s work on personal knowledge systems. What is new in 2026 is that the second brain can now talk back. It can hold the documents you have already written about the business, read them in seconds, and use them as the substrate for a back-and-forth conversation about what the next year actually has to look like. That is a different category of tool from a notes app, and it is largely sitting unused on the desks of the founders who would benefit most.

Why does it matter for your business?

Strategy is mostly the discipline of staying with a hard question long enough to see the deeper question underneath it. Roger Martin’s choice cascade and Richard Rumelt’s “crux” both make the same point in different language: the work is a small set of hard choices, and the diagnosis of which choice actually matters is where plans fail. Founders run out of stamina before they reach the diagnosis.

The plan ends up being a list of activities rather than a position. Michael Porter’s 1996 Harvard Business Review essay made the same point about the previous generation of management tools: operational effectiveness is not strategy, and the gap between the two is where competitive position is won or lost. AI follows the same rule. Used for surface productivity, it makes you slightly faster at activities that may or may not have been worth doing. Used as a second brain on the strategic question, it lets you reach the diagnosis your competitors have given up on.

This shows up in the data. McKinsey’s 2025 State of AI survey reports that roughly 88 percent of organisations are using AI in some form, only around 1 percent describe their rollout as mature, and only about 6 percent report meaningful financial returns. The breadth is there. The depth is not.

Where will you actually meet it?

The setup is mechanical. Paste the working plan into the chat window. Paste the constraint set: cash, capacity, current commitments, the things that cannot move in the next twelve months. Paste your three biggest unknowns, written as questions you would actually ask out loud if you had a smarter peer in the room.

Then ask the model to do three things in sequence. Surface the contradictions in the plan. Propose three alternative framings of the strategic question. Flag the assumption likeliest to be wrong. The output of that first pass is rarely the answer. It is the prompt for the conversation that follows.
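That first-pass setup can be sketched as a small prompt builder. Everything here is illustrative: the function name, section labels and exact wording are assumptions about one way to structure the paste-in, not a fixed recipe.

```python
# Sketch of the first-pass prompt described above. Function name, section
# labels and wording are illustrative assumptions, not a prescribed format.

def build_first_pass_prompt(plan: str, constraints: list[str], unknowns: list[str]) -> str:
    """Assemble the working plan, constraint set and unknowns into one prompt
    that asks for contradictions, reframings and the weakest assumption."""
    constraint_block = "\n".join(f"- {c}" for c in constraints)
    unknown_block = "\n".join(f"- {q}" for q in unknowns)
    return (
        "Here is my working plan for the next twelve months:\n"
        f"{plan}\n\n"
        "These constraints cannot move:\n"
        f"{constraint_block}\n\n"
        "My biggest unknowns, written as I would ask them out loud:\n"
        f"{unknown_block}\n\n"
        "Do three things in sequence:\n"
        "1. Surface the contradictions in the plan.\n"
        "2. Propose three alternative framings of the strategic question.\n"
        "3. Flag the assumption likeliest to be wrong.\n"
    )

# Hypothetical example inputs, for illustration only.
prompt = build_first_pass_prompt(
    plan="Move upmarket over the next year while keeping the existing SMB base.",
    constraints=["18 months of runway", "no new hires before Q3"],
    unknowns=["Will current customers tolerate a price rise?"],
)
print(prompt)
```

The point of writing it down this way is that the structure, not the phrasing, does the work: plan, hard constraints, real unknowns, then the three asks in sequence.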

Now pull on a thread. Pick the alternative framing that makes you the most uncomfortable and ask the model to extend it three steps. What does the first quarter look like under that framing? What does the team have to be able to do that it cannot do today? What customer segment does it cost you, and is that cost survivable? You are mentally simulating the option, not committing to it. Simon Wardley’s mapping work is genuinely useful here as a second pass once a framing has earned a closer look, because it forces you to lay out the value chain components and ask which are evolving and which are stuck.

The pattern Ethan Mollick describes in “Co-Intelligence” applies cleanly. You are operating as what the BCG and MIT Sloan research calls a centaur or cyborg: structured back-and-forth, model as co-thinker, you holding the line on what gets committed. The self-automator pattern, where the founder asks the model to “write me a strategy”, consistently produces worse outcomes than working without the model at all.

When to ask versus when to ignore

Ask when the question is genuinely hard and you have already been around it twice without breakthrough. The second-brain pattern earns its place on the messy diagnostic work, the choice between two viable directions, the pricing call that has been open for six weeks. It is wasted on rhetorical questions you already know the answer to.

It is worse than wasted on questions you should be asking a customer or a board member instead of a model. AI gives you a confident-sounding response on any question you put to it, including the ones where the right move is to pick up the phone. A useful early discipline is to write the question down and ask whether you would learn more from one good customer conversation than from any amount of model output. If the answer is the customer, do that first.

Ignore the pattern entirely on the commitment itself. The failure mode is the founder who runs the second-brain conversation, gets to a clean three-option synthesis, and then asks the model “which one should I pick?”. Martin’s frame is the protective discipline: strategy is choice, and choice has to be made by the person who will be living inside the consequences for the next three years.

Daniel Kahneman’s work on noise gives the technical reason for the same boundary. A model can reduce variance in your own judgement by enforcing structured analysis. It cannot make the value-laden trade-off for you, and pretending it can is how good founders end up with strategies that read well and feel borrowed. The honest version of the rule is short. AI is your stamina, not your judgement. It lets you go further before you give up. The decisions are still yours.

The natural pairing for this post is pre-mortems with AI, which is the discipline of stress-testing the option you have chosen before you commit to it. The two reads work as a sequence. Second-brain to pick the option, pre-mortem to harden it.

The sibling Do-quadrant post on AI as sparring partner for hard decisions covers the same conversational pattern at the level of single-question decisions rather than full strategic plans. For the cluster-level frame, the AI for your own work pillar sets out why personal practice tends to move ahead of organisational deployment for owner-operators.

If you would like to talk through how this kind of strategy work would land in your own week, book a conversation.

Sources

- Martin, Roger (2010-2024). Strategic Choice Cascade and "Playing to Win" body of work, archived at rogerlmartin.com and summarised at thedecisionstack.com. Cited as the framework anchor for the "strategy is choice, not planning" frame in the body. https://www.thedecisionstack.com/roger-martin-is-right-strategy-is-broken-heres-how-to-fix-it/
- Rumelt, Richard (2022). "The Crux: How Leaders Become Strategists" and the McKinsey Quarterly interview "Strategy's strategist". Cited as the source for the "crux" diagnostic frame and for the distinction between bad strategy and real strategic work. https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/strategys-strategist-an-interview-with-richard-rumelt
- Mollick, Ethan (2024). "Co-Intelligence: Living and Working with AI" (Penguin, 2024) and the One Useful Thing Substack. Cited as the source for the co-worker/coach framing and for the cyborg vs centaur vs self-automator distinction in human-AI collaboration. https://www.penguinrandomhouse.com/books/741805/co-intelligence-by-ethan-mollick/
- Wardley, Simon (2025). Wardley mapping resources at wardleymaps.com, including the strategy loop and the situational-awareness cycle. Cited as the practical mapping technique referenced in the "where will you actually meet it" section. https://www.wardleymaps.com/resources
- Porter, Michael (1996). "What Is Strategy?" Harvard Business Review, November-December 1996. Cited as the foundational distinction between operational effectiveness and strategy proper, which underpins why "AI for productivity" is not the same as "AI for strategy". https://hbr.org/1996/11/what-is-strategy
- Kahneman, Daniel, Sibony, Olivier and Sunstein, Cass (2021). "Noise: A Flaw in Human Judgment" (Little, Brown Spark). Cited as the evidence base for the noise problem in founder judgement and why structured workflows reduce variability in strategic choices. https://www.littlebrown.com/titles/daniel-kahneman/noise/9780316451390/
- Forte, Tiago (2022-2024). "Building a Second Brain" book and the PARA method at buildingasecondbrain.com. Cited as the lineage source for the "second brain" framing used in the post title and the organising principle behind a personal AI strategy stack. https://www.buildingasecondbrain.com/para
- McKinsey (2025). "The state of AI" global survey. Cited for the finding that roughly 1 percent of organisations describe their AI rollout as mature and roughly 6 percent report meaningful financial returns, used in the body to anchor the "depth not breadth" point. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- Lütke, Tobi at Shopify (2025). First Round Review write-up of Shopify's internal AI adoption playbook, including the "context engineering" framing for working effectively with frontier models. Cited as the practitioner anchor for the prompt-quality argument in the FAQ. https://www.firstround.com/ai/shopify

Frequently asked questions

Can AI actually do strategy, or is it just a glorified note-taker?

Neither. Treated well, AI is a sparring partner with infinite patience and a wide reading list. It surfaces contradictions in your plan, generates alternative framings you might not reach alone, and lets you mentally simulate the consequences of each option before you commit. The choice itself is still yours, and has to be. Roger Martin's point holds: strategy is choice, and choice cannot be delegated to a model that does not have to live with the result.

Which model should I use for this kind of work?

Any of the frontier reasoning models works for the second-brain pattern: Claude, ChatGPT, or Gemini in their current top-tier versions. The choice of model matters less than the quality of context you paste in. A weaker model with the working plan, the constraint set, and three real unknowns in front of it will outperform a stronger model with a vague prompt every time. Context engineering, in Tobi Lütke's phrase, is the variable that moves.

What if the AI just tells me what I want to hear?

Frontier models do flatter, especially on strategy work where the founder has obvious skin in the game. The fix is in the prompt: ask explicitly for the strongest case against your current plan, the assumption most likely to be wrong, and what a sceptical board member would push back on. If you only ask "is this a good plan", you will get sycophancy. If you ask "what are three ways this plan could fail in the first quarter", you get usable pressure.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
