Standing prompts, the personal prompt library that runs your routine work

TL;DR

A personal prompt library is a markdown file of fifteen to thirty standing prompts that handle the founder's recurring asks: summary, briefing, draft response, decision frame, comparison. Each prompt has a verb-object-format name, lives somewhere you actually open, and gets one refinement per week with a monthly prune. Built this way, the library stops being a content collection and starts being infrastructure.

Key takeaways

- The highest-yield AI artefact for an owner-operator is 15 to 30 standing prompts in a markdown file, not a fancier tool subscription.
- Use a verb-object-format naming convention (`draft-client-update-markdown`, `analyse-pipeline-table`) so prompts are callable from memory rather than buried in a database nobody opens.
- The 15 to 30 range maps to the cognitive limits Miller documented in 1956: small enough to recall, large enough to cover the recurring work of a services firm.
- Maintenance is the discipline that separates a working library from a graveyard. One refinement per week, one prune per month, archived not deleted.
- Stop reinventing the same prompt every Monday. Once a prompt earns its place in the library, the time it saves compounds across every reuse.

A founder I spoke with last month had typed a variation of the same prompt seventy-three times. “Summarise this client call into next steps and open questions, in the format I usually use.” Each time, slightly different wording. Each time, slightly different output. None of the seventy-three saved anywhere. She knew this was wasteful. She also knew she was going to do it again on Tuesday.

This is the single most common AI failure pattern I see among owner-operators. The complaint usually sounds like “I use AI all the time, but I rebuild the same prompt every week from scratch”, rather than “I do not know how to use AI”. The fix is the most boring artefact in the toolkit: a markdown file holding fifteen to thirty prompts, named, sorted, called by name. Boring sounds like a feature once you have lived with it for three months.

What is a standing prompt library?

A standing prompt library is a personal file of fifteen to thirty named prompts that cover your recurring asks: summary of a client call, draft of a weekly update, decision frame for a hire, comparison between two proposals. Each one is written once, refined over time, called by name. It is a working tool, like the formula bar in your spreadsheet or the saved searches in your inbox.

Anthropic’s engineering team calls the wider discipline context engineering, which they define as architecting the full information state an AI model needs to perform a task well. A personal prompt library is context engineering at the founder’s scale. The recurring tasks of a services firm are stable: a client onboarding pack, a project status update, a sales-call follow-up, a board paper, a weekly review. Each one has a shape that does not change much from week to week. Once you have written the prompt that produces a good version, there is no reason to write it again from memory. The friction of rebuilding it every time is what kills the daily AI habit. The library removes that friction.

Why does this matter for your business?

Because the alternative is the seventy-three-times pattern. A founder who reaches for AI on every recurring task but never saves the prompt pays the same setup cost every time: deciding what to ask, phrasing it, supplying context, iterating to a usable answer. That setup tax is invisible per task and substantial per month. A library moves the cost from per-use to one-off.

For a services firm in the £1m to £10m turnover band, where the founder’s attention is the binding constraint on growth, this matters more than another seat of another tool. A library does not need approval, procurement, or a vendor evaluation. You build it on a Sunday afternoon with the tools you already have, and within a fortnight you are saving an hour a week on writing tasks that used to take three. Wade Foster, the CEO of Zapier, has spoken on Lenny Rachitsky’s podcast about the operating discipline of treating context as a managed asset rather than a daily reinvention. Tobias Lütke, the founder of Shopify, has gone further, formalising what he calls context engineering for humans as a Shopify-wide practice. Both point at the same insight at different scales. The compounding sits in the artefact, not the tool.

What goes in the library, and how do you name it?

Five categories cover the typical owner-operator: client communication, internal operations, hiring and team, strategy and planning, and knowledge work. Two to five prompts per category, totalling 15 to 30. The distribution flexes to the firm. A consultancy weights toward client communication. A product business weights toward strategy and operations. The frequency-of-use test decides what earns a slot: three repeats in a month and the task wants a standing prompt.

The naming convention is where many founders falter. The discipline is verb-object-format. Verb names the action: draft, analyse, summarise, critique, compare, plan. Object names what is being acted on: client-call, proposal, dashboard, job-description. Format names the output shape: markdown, table, bullets, prose. So `draft-client-update-markdown`, `summarise-call-bullets`, `compare-proposals-table`, `critique-offer-prose`. The convention is load-bearing rather than decorative. Joshua Bloch, the architect of the Java Collections API, makes the case in his canonical talk on API design: names get written once and read a thousand times, so optimise for the reader. Your future self at 7am on a Monday is the reader. Make the name something a tired version of you can recall without thinking.
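Put together, the categories and the naming convention yield a file like this minimal sketch. The names, categories, and `last-used` field are illustrative, not a required schema:

```markdown
# Prompt library

## draft-client-update-markdown
category: client communication
last-used: 2026-01-12
Draft this week's client update in markdown. Sections: progress,
blockers, next steps. Notes to work from follow below.

## summarise-call-bullets
category: client communication
last-used: 2026-01-09
Summarise the pasted call transcript into next steps and open
questions, as bullets. Flag anything that needs a decision from me.

## compare-proposals-table
category: strategy and planning
last-used: 2025-12-20
Compare the two pasted proposals in a table: cost, scope, risk,
timeline. End with a one-line recommendation.
```

A flat file like this is deliberately low-tech: one heading per prompt, one line of metadata, then the prompt text itself, ready to copy.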

Where should the library actually live?

Wherever you already open every day, with a recall path under ten seconds. Notion is the common default: a database with columns for name, category, prompt text, last-used date, and version notes. Obsidian or a markdown folder in your editor works for founders who already live in plain text and want version control via Git. Text expanders like Espanso or Raycast Snippets give sub-second invocation by typing a short trigger.
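For the text-expander route, an Espanso match file gives each prompt a short trigger you type anywhere. A minimal sketch, with triggers and prompt text that are illustrative rather than any default:

```yaml
# ~/.config/espanso/match/prompts.yml
# Illustrative triggers: type ":sumcall" in any app and the prompt expands.
matches:
  - trigger: ":sumcall"
    replace: |
      Summarise the call transcript below into next steps and open
      questions, as bullets. Transcript:

  - trigger: ":dcu"
    replace: |
      Draft this week's client update in markdown. Sections: progress,
      blockers, next steps. Notes:
```

The trigger names themselves can follow the same verb-object discipline in abbreviated form, so recall stays cheap.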

The wrong answer is the familiar one: a Notion page buried three levels deep that you set up enthusiastically in January and have not opened since March. That is a prompt graveyard, dressed as infrastructure. The test is mechanical. Can you, right now, recall the name of three prompts in your library and call them in under thirty seconds? If not, the library does not exist as infrastructure yet, no matter how many prompts are saved in the file. Sibling posts in this cluster cover the related habits that share this rule, including the five workflows AI runs every Monday and briefing AI like a contractor, where the same standing-prompt discipline drives one-shot delegations.

How do you keep it alive?

The maintenance ritual is a thirty-minute slot once a week, on the same day. One prompt added or refined per week, one prune per month, archived rather than deleted. The cadence is deliberate. Weekly is frequent enough to compound, slow enough to be sustainable. Toyota Kata uses the same loop: identify the gap, define the target, take one step.

Adding a prompt is triggered by noticing you have done a similar task three times this week without a standing prompt for it. Refining is triggered by mediocre output from a prompt you used recently: read the output, ask what is missing, edit the prompt, save the new version. Pruning is the monthly pass: any prompt unused in eight weeks moves to an archive section, not deleted, recoverable later if circumstances change. This keeps the active library inside the 15 to 30 range, which keeps it recallable. Without the prune, libraries drift to fifty prompts of which twenty matter, and recall collapses. The pillar piece in this cluster, AI for your own work, not just your business, and the framework explainer, the EAD-Do framework, recast for AI, set the wider context: standing prompts sit firmly in the Automate quadrant, where you turn recurring asks into infrastructure rather than reinventing them every week.
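The monthly prune is mechanical enough to script. A minimal sketch, assuming a flat markdown library where each prompt sits under a `## name` heading followed by a `last-used: YYYY-MM-DD` line (the file layout and field name are illustrative conventions, not a standard):

```python
from datetime import date, timedelta

# Any prompt idle for eight weeks or more is an archive candidate.
STALE_AFTER = timedelta(weeks=8)

def stale_prompts(markdown_text: str, today: date) -> list[str]:
    """Return names of prompts whose last-used date is 8+ weeks ago."""
    stale = []
    current = None
    for line in markdown_text.splitlines():
        line = line.strip()
        if line.startswith("## "):
            current = line[3:]  # prompt name, e.g. draft-client-update-markdown
        elif line.startswith("last-used:") and current:
            used = date.fromisoformat(line.split(":", 1)[1].strip())
            if today - used >= STALE_AFTER:
                stale.append(current)
    return stale

library = """\
## draft-client-update-markdown
last-used: 2024-05-01

## summarise-call-bullets
last-used: 2024-06-20
"""

print(stale_prompts(library, today=date(2024, 6, 28)))
# → ['draft-client-update-markdown']
```

Run it during the monthly pass and move whatever it flags into the archive section by hand; the judgement call about whether a prompt deserves a reprieve stays with you.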

If you want to start, do it small. Pick the three tasks you do most often that take more than twenty minutes each. Write a minimal prompt for each. Use them for a fortnight, refine based on what you noticed. Add two more. After twelve weeks of weekly maintenance you have a library of fifteen prompts, each one earned through actual use. That is the artefact. It is boring. It is also the highest-yield thing you will build with AI this year.

Sources

- Anthropic Engineering (2024). Effective context engineering for AI agents. Vendor research on why standing context outperforms ad hoc prompting as agentic systems mature. https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
- Miller, G. A. (1956). The magical number seven, plus or minus two. Foundational cognitive psychology paper on working-memory limits, anchors the 15 to 30 prompt range and the chunking principle behind library categorisation. https://en.wikipedia.org/wiki/The_Magical_Number_Seven,_Plus_or_Minus_Two
- Lenny Rachitsky (2025). Zapier's CEO Wade Foster on his personal AI stack. Operator disclosure of how a founder running a public company structures personal AI use, including standing context patterns. https://www.lennysnewsletter.com/p/zapiers-ceo-shares-his-personal-ai-stack
- First Round Review (2025). Tobias Lütke on context engineering for humans at Shopify. Founder-led account of treating structured context as infrastructure rather than a content artefact. https://www.firstround.com/ai/shopify
- OpenAI Developer Documentation (2024). Prompt engineering guide. Vendor reference on iteration patterns and the case for refining prompts through use rather than designing them in one sitting. https://developers.openai.com/api/docs/guides/prompt-engineering
- Bloch, J. (2007). How to design a good API and why it matters. InfoQ presentation on naming as load-bearing infrastructure, applies directly to prompt-name discipline. https://www.infoq.com/presentations/effective-api-design/
- The Lean Post (2023). Improve continuously by mastering the lean kata. Toyota Kata as a structured weekly improvement loop, the analogue for the prompt-refinement ritual. https://www.lean.org/the-lean-post/articles/improve-continuously-by-mastering-the-lean-kata/
- Information Commissioner's Office (2024). UK GDPR guidance on artificial intelligence. UK regulator guidance on processing personal data through AI tools, relevant when prompts touch client information. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
- OWASP (2024). LLM01 Prompt Injection. Standards-body guidance on prompt-injection risk, relevant when standing prompts process untrusted input from clients or team members. https://genai.owasp.org/llmrisk/llm01-prompt-injection/
- Notion (2024). AI prompts database template. Vendor pattern for hosting a prompt library with category, status, and last-used fields built in. https://www.notion.com/templates/ai-prompts-database

Frequently asked questions

How many standing prompts should I actually have?

Aim for 15 to 30. Below 15 and the library is too sparse to cover a typical services founder's week. Above 30 and you stop remembering what is in there, which means you stop calling them by name and start typing fresh prompts again. The range maps to what cognitive psychology calls chunking: 5 to 7 categories, each holding 2 to 5 named prompts. Many founders settle around 18 to 22 once the library matures.

Where should I keep the library so I actually use it?

Anywhere you already open every day. A markdown file in Obsidian or your editor works for technically inclined founders. A Notion database with a "last used" column works for everyone else. A text expander like Espanso or TextExpander works if you want sub-second access. The wrong answer is a buried Notion page with no entry point. The library only earns its keep if you can call a prompt in under ten seconds.

What goes in the library and what does not?

Only the recurring asks. Summary of a client call, draft of a weekly update, decision frame for a hire, comparison of two proposals, briefing pack for a meeting. If you have done a similar task three times in a month, it earns a standing prompt. One-off creative or research prompts stay ad hoc. Quality over volume. A library of 20 sharp prompts beats 100 vague ones every time.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
