Why your prompt library is sitting in a Notion that nobody opens

A founder at his desk on a Friday afternoon, looking at a Notion page on his laptop, an unopened notebook nearby, a mug of coffee on the desk
TL;DR

The prompt library in a shared Notion is one of the most-attempted and least-effective AI moves in SMEs. Adoption stalls at 15 to 20 percent because the friction sequence (find, locate, customise, run, edit) kills the maths for the team. What works instead is workflow embedding (prompts inside the email tool, the CRM, the proposal tool) plus custom-trained GPTs deep enough to drop the edit ratio from 40 percent to 10 percent.

Key takeaways

- HBR research shows organisations that build generic prompt libraries without workflow integration plateau at 15 to 20 percent adoption. The friction sequence makes the time saving disappear for the team member.
- Categories where prompt patterns scale: board paper drafting, weekly KPI commentary, supplier escalation drafting, customer complaint triage, hiring scorecards, performance review scaffolding, decision-memo writing.
- Lenny Rachitsky's founder communication template (context, options, choice, learning, request) compresses a 45-minute draft to a 15-minute draft plus 15 minutes of refinement.
- Custom GPTs trained on a decade of voice, decisions, SOPs, and example outputs (Jill Wise's "Billie") drop the edit ratio from 40 percent to 10 percent. The lever is depth of training, not breadth of prompt library.
- The four conditions that turn a prompt into adoption: workflow integration, monthly calibration against real outputs, clear governance, team training. Without all four, adoption collapses inside three months.

A founder of a 35-person consultancy sits at his desk on a Friday afternoon, looking at the prompt library his head of marketing built three months ago. Twelve prompts in a Notion page: discovery-call summary, proposal opener, supplier escalation draft, weekly standup notes. They are good prompts. He uses them.

The team does not. The latest weekly check showed seven of the team have opened the Notion page once. Two have used a prompt. The rest are doing the same work the same way they did before the AI tools arrived. The Slack channel he set up to share prompts is six weeks dead. He is staring at a problem he cannot quite name.

Why does the prompt library fail as an artefact?

The friction sequence kills it. To use the library, a team member finds it, locates the right template, customises it, copies it into their AI tool, then edits the output. Five steps for a 15-minute saving. The maths does not work, so the team skips it. HBR research from 2026 reports a 15 to 20 percent adoption plateau in firms that build generic libraries without workflow integration.

The library sits there. The Slack reminders go unread. The founder concludes the team are not committed, when actually the team made a rational decision: the saving is not worth the friction.

Where do prompt patterns actually scale?

Repeatable, structured, rule-based founder work. Board paper drafting has a consistent shape: situation, financials, decisions required, risks, recommended action. Weekly KPI commentary requires interpreting the prior week’s metrics, flagging deviations, identifying priorities. Supplier escalation, customer complaint triage, hiring scorecards, performance review scaffolding, and decision-memo writing all follow similar patterns. These are the workloads where a structured prompt produces a usable first draft.

OpenAI’s 2025 enterprise data confirms the pattern: 85 percent of marketing and product users report faster campaign execution when they use structured prompts for specific repeating workflows.

The Lenny Rachitsky founder-memo example

Lenny Rachitsky’s published playbook on founder communication gives a worked template: context, options considered, choice made and why, what was learned, request for feedback. Written manually, a founder memo following this pattern takes about 45 minutes. With a prompt that includes the template, a few past memos from that founder as voice examples, and the decision data, the first draft takes 15 minutes. The founder spends a further 15 minutes refining tone and adding nuance.
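As a sketch, a prompt built on that template might look like the following. The wording and the bracketed placeholders are illustrative, not Rachitsky's published text:

```
You are drafting a founder memo in my voice. Follow this structure:
1. Context: the situation that prompted this decision.
2. Options considered: the two or three realistic alternatives.
3. Choice made and why: the decision and the reasoning behind it.
4. What was learned: anything this changed about how we operate.
5. Request for feedback: the specific input I want from the team.

Decision data: [paste the facts of the decision here]
Voice examples: [paste two or three past memos here]

Keep it under 400 words and match the tone of the voice examples.
```

The bracketed placeholders change per memo; the template and the voice examples stay constant, which is what makes the pattern repeatable.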

The total time drops from 45 to 30 minutes per memo. For a founder writing two memos a week, that is roughly half an hour a week back. For the team, the same template produces aligned communication when they write it themselves. The prompt is doing real work, in a real workflow, with measurable savings.

What moves adoption past 15 percent

Workflow integration. The single biggest lever. Superhuman and Shortwave both ship prompt libraries inside their email assistants. HubSpot Breeze ships templated AI inside the CRM. The prompt is one click from where the work happens. The team member is already in the tool, the prompt is already in the tool, the friction collapses.

This is what takes adoption from 15 percent to closer to 60 percent. Not a better prompt library; not a better Slack reminder. The location of the prompt in the workflow.

When to build a custom GPT instead

The lever here is depth of training rather than breadth of prompt library. A custom GPT trained on enough of the founder’s voice, decisions, and example outputs produces drafts that need 10 percent editing rather than 40 percent. Jill Wise’s “Billie” is the worked example: trained on a decade of client work, logic trees, SOPs, and outputs, it moves clients from raw thinking to a publishable narrative in one cycle.

Reforge’s AI Pivot course publishes prompts specific to CFO-level tasks: financial narrative storytelling, cash flow forecasting, scenario planning, QBR preparation. A founder using Reforge’s QBR structure against real financial data produces a first draft that is 70 to 80 percent publishable. Generic ChatGPT prompts on the same data produce 40 to 50 percent publishable. The difference is specificity. The Reforge prompt names tone, structure, the four financial pillars, what the board needs to hear, and includes example QBR narratives.

One well-trained custom GPT used heavily produces more leverage than twelve generic prompts in a Notion that nobody opens.

The four conditions for actual adoption

A prompt that turns into adoption needs four conditions.

- Workflow integration: the prompt sits inside the tool where the work happens.
- Regular calibration: someone reviews a sample of prompt outputs monthly to check the prompt is still doing what you want.
- Clear governance: the team knows when to use the prompt and when to skip it.
- Team training: someone walks through what the prompt is for and what good output looks like.

Without all four, adoption collapses inside three months. The library becomes “that thing Slack reminded us about.” The founder retains all the decision-making and time commitment because the team views the prompt as a suggestion, not a required step. The intended leverage never materialises.

What to do this week

Audit the prompts you have. Drop the ones nobody is using. For each remaining prompt, ask: where does the work it addresses actually happen? If the answer is email, move the prompt inside Superhuman, Shortwave, or the AI integration in Gmail or Outlook. If the answer is the CRM, move it inside HubSpot Breeze. Closer to the work, away from the Notion page.

If one or two prompts are doing the bulk of the founder’s work and the rest are noise, consider whether a custom GPT trained on the patterns is the right next move. Depth over breadth.

If you want a second pair of eyes on which patterns actually scale in your firm, book a conversation.

Sources

  • HBR research on prompt library adoption plateau (15 to 20 percent without workflow integration). Source.
  • Lenny Rachitsky founder communication playbook. Source.
  • Jill Wise on custom GPTs versus generic ChatGPT (10 percent versus 40 percent edit ratio). Source.
  • Reforge AI Pivot course and QBR prompt structure. Source.
  • Superhuman email assistant. Source.
  • Shortwave AI assistant. Source.
  • HubSpot Breeze AI for CRM and email. Source.
  • OpenAI 2025 enterprise data on adoption patterns at work. Source.
  • McKinsey & Company (2024). From Promise to Impact, How Companies Can Measure and Realise the Full Value of AI. Five-layer measurement framework for AI productivity vs leverage. Source.
  • Brynjolfsson, E., Li, D. and Raymond, L. (2023). Generative AI at Work, NBER Working Paper 31161. The 14 per cent average productivity gain and heterogeneity finding underpinning AI-as-leverage claims. Source.
  • Boston Consulting Group (2025). Are You Generating Value from AI, The Widening Gap. Future-built firms capture five times the revenue gains and three times the cost reductions of peers. Source.
  • MIT CISR (Woerner, Sebastian, Weill and Kaganer, 2025). Grow Enterprise AI Maturity for Bottom-Line Impact. Stage 3 enterprises achieve growth 11.3 percentage points above industry average. Source.

Frequently asked questions

Why does the team not use the prompt library I built?

The friction sequence kills it. Find the library, locate the relevant template, interpret how to customise it, copy to a tool, edit the output. Five steps for a 15-minute saving. The maths does not work, so people skip it. HBR research from 2026 reports adoption plateaus at 15 to 20 percent in firms that build generic libraries without workflow integration.

Where do prompt patterns actually scale?

Repeatable, structured, rule-based founder work: board paper drafting, weekly KPI commentary, supplier escalation, customer complaint triage, hiring scorecards, performance review scaffolding, decision-memo writing. These are the workloads where a good prompt earns its place. Lenny Rachitsky's founder communication template is the strongest published example: context, options considered, choice made and why, what was learned, request for feedback.

What moves adoption from 15 percent to something the team actually uses?

Workflow integration. Embed the prompt directly into the tool the team already uses, not a separate document. Superhuman and Shortwave ship prompt libraries inside their email assistants. HubSpot Breeze ships templated AI inside the CRM. The prompt is one click from where the work happens, not five clicks across three tools. That is what takes adoption from 15 percent to 60 percent.

When should I build a custom GPT instead of a prompt template?

When you need depth, not breadth. Jill Wise's 'Billie' is trained on a decade of client work, logic trees, SOPs, and example outputs. Edit ratio drops from 40 percent for generic GPTs to 10 percent for the custom one. Reforge's QBR prompt against real financial data produces 70 to 80 percent publishable first drafts versus 40 to 50 percent for generic prompts. The lever is depth of training.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
