A founder of a 35-person consultancy sits at his desk on a Friday afternoon, looking at the prompt library his head of marketing built three months ago. Twelve prompts in a Notion page: discovery-call summary, proposal opener, supplier escalation draft, weekly standup notes. They are good prompts. He uses them.
The team does not. The latest weekly check showed seven of the team have opened the Notion page once. Two have used a prompt. The rest are doing the same work the same way they did before the AI tools arrived. The Slack channel he set up to share prompts is six weeks dead. He is staring at a problem he cannot quite name.
Why does the prompt library fail as an artefact?
The friction sequence kills it. To use the library, a team member has to find it, locate the right template, customise it, copy it into their AI tool, then edit the output. Five steps for a 15-minute saving. The maths does not work, so the team skips it. HBR 2026 reports a 15 to 20 percent adoption plateau in firms that build generic libraries without workflow integration.
The library sits there. The Slack reminders go unread. The founder concludes the team are not committed, when actually the team made a rational decision: the saving is not worth the friction.
Where do prompt patterns actually scale?
Repeatable, structured, rule-based founder work. Board paper drafting has a consistent shape: situation, financials, decisions required, risks, recommended action. Weekly KPI commentary requires interpreting the prior week’s metrics, flagging deviations, identifying priorities. Supplier escalation, customer complaint triage, hiring scorecards, performance review scaffolding, and decision-memo writing all follow similar patterns. These are the workloads where a structured prompt produces a usable first draft.
OpenAI’s 2025 enterprise data confirms the pattern: 85 percent of marketing and product users report faster campaign execution when they use structured prompts for specific repeating workflows.
The Lenny Rachitsky founder-memo example
Lenny Rachitsky’s published playbook on founder communication gives a worked template: context, options considered, choice made and why, what was learned, request for feedback. Written manually, a founder memo following this pattern takes about 45 minutes. With a prompt that includes the template, a few of that founder’s past memos as voice examples, and the decision data, the first draft takes 15 minutes. The founder spends another 15 minutes refining tone and adding nuance.
The total time drops from 45 to 30 minutes per memo. For a founder writing two memos a week, that is roughly half an hour a week back. For the team, the same template produces aligned communication when they write their own memos. The prompt is doing real work, in a real workflow, with measurable savings.
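The arithmetic above can be sketched as a quick back-of-envelope calculation. The figures are the article's worked numbers for one founder, not universal constants:

```python
# Back-of-envelope time saving per founder, using the article's numbers:
# 45 minutes to write a memo manually vs 15-minute AI draft + 15-minute refine.
manual_minutes = 45                                 # memo written by hand
draft_minutes = 15                                  # AI first draft from template + voice examples
refine_minutes = 15                                 # founder refining tone and nuance
prompted_minutes = draft_minutes + refine_minutes   # 30 minutes per memo

memos_per_week = 2
saved_per_memo = manual_minutes - prompted_minutes
saved_per_week = memos_per_week * saved_per_memo

print(f"Saved per memo: {saved_per_memo} min")      # 15 min
print(f"Saved per week: {saved_per_week} min")      # 30 min, i.e. half an hour
```

Scaling the same numbers to a 35-person firm, or to heavier memo volume, is a matter of changing the two input variables.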
What moves adoption past 15 percent
Workflow integration. The single biggest lever. Superhuman and Shortwave both ship prompt libraries inside their email assistants. HubSpot Breeze ships templated AI inside the CRM. The prompt is one click from where the work happens. The team member is already in the tool, the prompt is already in the tool, the friction collapses.
This is what takes adoption from 15 percent to closer to 60 percent. Not a better prompt library; not a better Slack reminder. The location of the prompt in the workflow.
When to build a custom GPT instead
The lever here is depth of training rather than breadth of prompt library. A custom GPT trained on enough of the founder’s voice, decisions, and example outputs produces drafts that need 10 percent editing rather than 40 percent. Jill Wise’s “Billie” is the worked example: trained on a decade of client work, logic trees, SOPs, and outputs, it moves clients from raw thinking to a publishable narrative in one cycle.
Reforge’s AI Pivot course publishes prompts specific to CFO-level tasks: financial narrative storytelling, cash flow forecasting, scenario planning, QBR preparation. A founder using Reforge’s QBR structure against real financial data produces a first draft that is 70 to 80 percent publishable. Generic ChatGPT prompts on the same data produce 40 to 50 percent publishable. The difference is specificity. The Reforge prompt names tone, structure, the four financial pillars, what the board needs to hear, and includes example QBR narratives.
One well-trained custom GPT used heavily produces more leverage than twelve generic prompts in a Notion page that nobody opens.
The four conditions for actual adoption
A prompt that turns into adoption needs four conditions. Workflow integration: the prompt sits inside the tool where the work happens. Regular calibration: someone reviews a sample of prompt outputs monthly to check the prompt is still doing what you want. Clear governance: the team knows when to use the prompt and when to skip it. Team training: someone walks through what the prompt is for and what good output looks like.
Without all four, adoption collapses inside three months. The library becomes “that thing Slack reminded us about.” The founder keeps carrying all the decision-making and the time cost, because the team treats the prompt as a suggestion rather than a required step. The intended leverage never materialises.
What to do this week
Audit the prompts you have. Drop the ones nobody is using. For each remaining prompt, ask: where does the work it addresses actually happen? If the answer is email, move the prompt inside Superhuman, Shortwave, or the AI integration in Gmail or Outlook. If the answer is the CRM, move it inside HubSpot Breeze. Closer to the work, away from the Notion page.
If one or two prompts are doing the bulk of the founder’s work and the rest are noise, consider whether a custom GPT trained on the patterns is the right next move. Depth over breadth.
If you want a second pair of eyes on which patterns actually scale in your firm, book a conversation.