A founder I spoke with last month had typed a variation of the same prompt seventy-three times. “Summarise this client call into next steps and open questions, in the format I usually use.” Each time, slightly different wording. Each time, slightly different output. None of the seventy-three saved anywhere. She knew this was wasteful. She also knew she was going to do it again on Tuesday.
This is the single most common AI failure pattern I see among owner-operators. The complaint usually sounds like “I use AI all the time, but I rebuild the same prompt every week from scratch”, rather than “I do not know how to use AI”. The fix is the most boring artefact in the toolkit: a markdown file holding fifteen to thirty prompts, sorted and called by name. Boring sounds like a feature once you have lived with it for three months.
What is a standing prompt library?
A standing prompt library is a personal file of fifteen to thirty named prompts that cover your recurring asks: summary of a client call, draft of a weekly update, decision frame for a hire, comparison between two proposals. Each one is written once, refined over time, called by name. It is a working tool, like the formula bar in your spreadsheet or the saved searches in your inbox.
Anthropic’s engineering team calls the wider discipline context engineering, which they define as architecting the full information state an AI model needs to perform a task well. A personal prompt library is context engineering at the founder’s scale. The recurring tasks of a services firm are stable: a client onboarding pack, a project status update, a sales-call follow-up, a board paper, a weekly review. Each one has a shape that does not change much from week to week. Once you have written the prompt that produces a good version, there is no reason to write it again from memory. The friction of rebuilding it every time is what kills the daily AI habit. The library removes that friction.
Why does this matter for your business?
Because the alternative is the seventy-three-times pattern. A founder who reaches for AI on every recurring task but never saves the prompt pays the same setup cost every time: deciding what to ask, phrasing it, supplying context, iterating to a usable answer. That setup tax is invisible per task and substantial per month. A library moves the cost from per-use to one-off.
For a services firm in the £1m to £10m turnover band, where the founder’s attention is the binding constraint on growth, this matters more than another seat of another tool. A library does not need approval, procurement, or a vendor evaluation. You build it on a Sunday afternoon with the tools you already have, and within a fortnight you are saving an hour a week on writing tasks that used to take three. Wade Foster, the CEO of Zapier, has spoken on Lenny Rachitsky’s podcast about the operating discipline of treating context as a managed asset rather than a daily reinvention. Tobias Lütke, the founder of Shopify, has gone further, formalising what he calls context engineering for humans as a Shopify-wide practice. Both point at the same insight at different scales. The compounding sits in the artefact, not the tool.
What goes in the library, and how do you name it?
Five categories cover the typical owner-operator: client communication, internal operations, hiring and team, strategy and planning, and knowledge work. Two to five prompts per category, totalling fifteen to thirty. The distribution flexes to the firm. A consultancy weights toward client communication. A product business weights toward strategy and operations. The frequency-of-use test decides what earns a slot: three repeats in a month and the task wants a standing prompt.
The naming convention is where many founders falter. The discipline is verb-object-format. Verb names the action: draft, analyse, summarise, critique, compare, plan. Object names what is being acted on: client-call, proposal, dashboard, job-description. Format names the output shape: markdown, table, bullets, prose. So draft-client-update-markdown, summarise-call-bullets, compare-proposals-table, critique-offer-prose. The convention is load-bearing rather than decorative. Joshua Bloch, the architect of the Java Collections API, makes the case in his canonical talk on API design: names get written once and read a thousand times, so optimise for the reader. Your future self at 7am on a Monday is the reader. Make the name something a tired version of you can recall without thinking.
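In practice the file can be as plain as a markdown document with one heading per prompt. A minimal sketch, assuming a flat file rather than a Notion database; the metadata lines and the prompt wording here are illustrative, one layout choice among many:

```markdown
# Prompt library

## summarise-call-bullets
Last used: 2024-06-03 · v3
Summarise the client call transcript below into two sections:
"Next steps" (owner and deadline per item) and "Open questions".
Output as markdown bullets, no preamble.

## compare-proposals-table
Last used: 2024-05-20 · v1
Compare the two proposals below in a markdown table: rows for
price, scope, timeline, and risks; one column per proposal;
a final row with a one-line recommendation.
```

The verb-object-format names double as headings, so a text search on the verb alone surfaces every prompt of that type.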
Where should the library actually live?
Wherever you already open every day, with a recall path under ten seconds. Notion is the common default: a database with columns for name, category, prompt text, last-used date, and version notes. Obsidian or a markdown folder in your editor works for founders who already live in plain text and want version control via Git. Text expanders like Espanso or Raycast Snippets give sub-second invocation by typing a short trigger.
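For the text-expander route, the configuration is a short YAML file. A minimal Espanso sketch, assuming a trigger string of your own choosing (`:sumcall` here is illustrative, and the prompt text is abridged):

```yaml
# ~/.config/espanso/match/base.yml
matches:
  - trigger: ":sumcall"
    replace: |
      Summarise the client call transcript below into two sections:
      next steps (owner and deadline per item) and open questions.
      Output as markdown bullets, no preamble.
```

Typing the trigger anywhere then expands to the full prompt, which keeps invocation comfortably under the ten-second bar.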
The wrong answer is the familiar one: a Notion page buried three levels deep that you set up enthusiastically in January and have not opened since March. That is a prompt graveyard, dressed as infrastructure. The test is mechanical. Can you, right now, recall the name of three prompts in your library and call them in under thirty seconds? If not, the library does not exist as infrastructure yet, no matter how many prompts are saved in the file. Sibling posts in this cluster cover the related habits that share this rule, including the five workflows AI runs every Monday and briefing AI like a contractor, where the same standing-prompt discipline drives one-shot delegations.
How do you keep it alive?
The maintenance ritual is a thirty-minute slot once a week, on the same day. One prompt added or refined per week, one prune per month, archived rather than deleted. The cadence is deliberate. Weekly is frequent enough to compound, slow enough to be sustainable. Toyota Kata uses the same loop: identify the gap, define the target, take one step.
Adding a prompt is triggered by noticing you have done a similar task three times this week without a standing prompt for it. Refining is triggered by mediocre output from a prompt you used recently: read the output, ask what is missing, edit the prompt, save the new version. Pruning is the monthly pass: any prompt unused in eight weeks moves to an archive section, not deleted, recoverable later if circumstances change. This keeps the active library inside the fifteen-to-thirty range, which keeps it recallable. Without the prune, libraries drift to fifty prompts of which twenty matter, and recall collapses. The pillar piece in this cluster, AI for your own work, not just your business, and the framework explainer, the EAD-Do framework, recast for AI, set the wider context: standing prompts sit firmly in the Automate quadrant, where you turn recurring asks into infrastructure rather than reinventing them every week.
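The prune rule is mechanical enough to script. A sketch in Python, assuming the library is represented as name and last-used-date pairs; the helper name and the sample data are illustrative, not part of any tool mentioned above:

```python
from datetime import date, timedelta

# The prune threshold from the monthly pass: eight weeks unused.
STALE_AFTER = timedelta(weeks=8)

def split_active_archive(prompts, today):
    """Partition prompts into active and archive lists.

    `prompts` is a list of (name, last_used: date) pairs; any prompt
    unused for eight weeks or more moves to the archive, not deleted.
    """
    active, archive = [], []
    for name, last_used in prompts:
        if today - last_used >= STALE_AFTER:
            archive.append(name)
        else:
            active.append(name)
    return active, archive

library = [
    ("summarise-call-bullets", date(2024, 6, 3)),   # used last week
    ("compare-proposals-table", date(2024, 3, 1)),  # untouched for months
]
active, archive = split_active_archive(library, today=date(2024, 6, 10))
# active  → ["summarise-call-bullets"]
# archive → ["compare-proposals-table"]
```

Archiving rather than deleting preserves the recoverable-later property; the archive list just moves to a separate section of the same file.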
If you want to start, do it small. Pick the three tasks you do most often that take more than twenty minutes each. Write a minimal prompt for each. Use them for a fortnight, refine based on what you noticed. Add two more. After twelve weeks of weekly maintenance you have a library of fifteen prompts, each one earned through actual use. That is the artefact. It is boring. It is also the highest-yield thing you will build with AI this year.