She is in the same coffee shop she always uses for this. Same window seat. Same notebook with the same hand-drawn boxes and arrows, the same three question marks she has not been able to answer for a month. The next-year plan is on her laptop in front of her. She has just hit the wall she always hits at the second espresso, where the obvious moves run out and the harder question underneath them refuses to come into focus. This is the moment the AI on your desk is genuinely useful for, and almost no one is using it that way yet.
What is “second brain” strategy work?
It is using a frontier AI model as a patient sparring partner while you wrestle with a real strategic question. The model holds your plan, your constraints and your unknowns in working memory while you do the thinking. It surfaces contradictions you have stopped seeing, proposes framings you would not reach alone, and lets you go three or four iterations further before you tap out.
The shorthand “second brain” comes from Tiago Forte’s work on personal knowledge systems. What is new in 2026 is that the second brain can now talk back. It can hold the documents you have already written about the business, read them in seconds, and use them as the substrate for a back-and-forth conversation about what the next year actually has to look like. That is a different category of tool from a notes app, and it is largely sitting unused on the desks of the founders who would benefit most.
Why does it matter for your business?
Strategy is mostly the discipline of staying with a hard question long enough to see the deeper question underneath it. Roger Martin’s choice cascade and Richard Rumelt’s “crux” both make the same point in different language: the work is a small set of hard choices, and the diagnosis of which choice actually matters is where plans fail. Founders run out of stamina before they reach the diagnosis.
The plan ends up being a list of activities rather than a position. Michael Porter’s 1996 Harvard Business Review essay made the same point about the previous generation of management tools: operational effectiveness is not strategy, and the gap between the two is where competitive position is won or lost. AI follows the same rule. Used for surface productivity, it makes you slightly faster at activities that may or may not have been worth doing. Used as a second brain on the strategic question, it lets you reach the diagnosis your competitors have given up on.
This shows up in the data. McKinsey’s 2025 State of AI survey reports that roughly 88 percent of organisations are using AI in some form, only around 1 percent describe their rollout as mature, and only about 6 percent report meaningful financial returns. The breadth is there. The depth is not.
Where will you actually meet it?
The setup is mechanical. Paste the working plan into the chat window. Paste the constraint set: cash, capacity, current commitments, the things that cannot move in the next twelve months. Paste your three biggest unknowns, written as questions you would actually ask out loud if you had a smarter peer in the room.
Then ask the model to do three things in sequence. Surface the contradictions in the plan. Propose three alternative framings of the strategic question. Flag the assumption likeliest to be wrong. The output of that first pass is rarely the answer. It is the prompt for the conversation that follows.
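If you run this setup more than once, it is worth capturing as a reusable template rather than retyping it each session. A minimal sketch in Python follows; the function name and the exact wording are illustrative, not a fixed script, and the plan, constraints and unknowns in the example are placeholders for your own.

```python
def first_pass_prompt(plan: str, constraints: str, unknowns: list[str]) -> str:
    """Assemble the opening prompt for a second-brain strategy session.

    Structure follows the setup above: working plan, immovable constraints,
    biggest unknowns, then the three sequenced asks.
    """
    unknown_lines = "\n".join(f"- {q}" for q in unknowns)
    return (
        "Here is my working plan for the next twelve months:\n"
        f"{plan}\n\n"
        "Here is the constraint set, the things that cannot move:\n"
        f"{constraints}\n\n"
        "Here are my biggest unknowns, written as real questions:\n"
        f"{unknown_lines}\n\n"
        "Do three things in sequence:\n"
        "1. Surface the contradictions in the plan.\n"
        "2. Propose three alternative framings of the strategic question.\n"
        "3. Flag the assumption likeliest to be wrong."
    )

# Placeholder inputs, for illustration only.
prompt = first_pass_prompt(
    plan="Double revenue by moving upmarket into mid-size accounts.",
    constraints="Eighteen months of runway; no new hires before Q3.",
    unknowns=["Is our churn a pricing problem or a product problem?"],
)
print(prompt)
```

The point of the template is discipline, not automation: it forces you to write the constraints and unknowns down before the conversation starts, which is half the value of the exercise.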
Now pull on a thread. Pick the alternative framing that makes you the most uncomfortable and ask the model to extend it three steps. What does the first quarter look like under that framing? What does the team have to be able to do that it cannot do today? What customer segment does it cost you, and is that cost survivable? You are mentally simulating the option, not committing to it. Simon Wardley's mapping work earns its place here as a second pass once a framing has merited a closer look, because it forces you to lay out the value chain components and ask which are evolving and which are stuck.
The pattern Ethan Mollick describes in “Co-Intelligence” applies cleanly. You are operating as what the BCG and MIT Sloan research calls a centaur or cyborg: structured back-and-forth, model as co-thinker, you holding the line on what gets committed. The self-automator pattern, where the founder asks the model to “write me a strategy”, consistently produces worse outcomes than working without the model at all.
When to ask versus when to ignore
Ask when the question is genuinely hard and you have already been around it twice without breakthrough. The second-brain pattern earns its place on the messy diagnostic work, the choice between two viable directions, the pricing call that has been open for six weeks. It is wasted on rhetorical questions you already know the answer to.
It is worse than wasted on questions you should be asking a customer or a board member instead of a model. AI gives you a confident-sounding response on any question you put to it, including the ones where the right move is to pick up the phone. A useful early discipline is to write the question down and ask whether you would learn more from one good customer conversation than from any amount of model output. If the answer is the customer, do that first.
Ignore the pattern entirely on the commitment itself. The failure mode is the founder who runs the second-brain conversation, gets to a clean three-option synthesis, and then asks the model “which one should I pick?”. Martin’s frame is the protective discipline: strategy is choice, and choice has to be made by the person who will be living inside the consequences for the next three years.
Daniel Kahneman’s work on noise gives the technical reason for the same boundary. A model can reduce variance in your own judgement by enforcing structured analysis. It cannot make the value-laden trade-off for you, and pretending it can is how good founders end up with strategies that read well and feel borrowed. The honest version of the rule is short. AI is your stamina, not your judgement. It lets you go further before you give up. The decisions are still yours.
Related concepts on this site
The natural pairing for this post is pre-mortems with AI, which is the discipline of stress-testing the option you have chosen before you commit to it. The two reads work as a sequence. Second-brain to pick the option, pre-mortem to harden it.
The sibling Do-quadrant post on AI as sparring partner for hard decisions covers the same conversational pattern at the level of single-question decisions rather than full strategic plans. For the cluster-level frame, the AI for your own work pillar sets out why personal practice tends to move ahead of organisational deployment for owner-operators.
If you would like to talk through how this kind of strategy work would land in your own week, book a conversation.