A 12-person consulting firm deployed Cobl last quarter and watched proposal time drop from six hours to two. They expected the win rate to climb. It stayed flat at 65 percent. What changed: deals closed two days earlier on average, the sales lead's calendar opened up by twelve hours a month, and that twelve hours got reinvested into qualification calls. By month four the firm was running 30 percent more qualified deals through the same pipeline, and the win rate finally moved by two percentage points. The AI did not lift it. The redeployed time did.
This is the proposal AI pattern most owners do not see in the vendor pitch. The pitch implies AI writes better proposals and wins more deals. The data shows AI writes faster proposals and wins about the same. The real ROI sits one step removed from the proposal itself, and owners measuring the obvious metric miss it.
What does the time saving actually look like?
Manual proposal: 4 to 8 hours of senior practitioner time. Gathering client information, understanding requirements, drafting the proposal, reviewing, and editing. AI-assisted with tools like Cobl: 15 to 30 minutes for a first draft, plus 1 to 2 hours of customisation by a sales or senior practitioner to make the proposal client-ready. Net per-proposal time of 1.5 to 2.5 hours, a 60 to 70 percent reduction.
For firms with 10 to 15 proposals a month, this is 40 to 60 hours a month of senior staff time recovered. At £60 per hour for sales staff, that is £2,400 to £3,600 a month, £28,800 to £43,200 a year. Tool costs £100 to £300 a month. Net annual benefit £25,200 to £42,000. Payback in 1 to 2 weeks.
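The ROI arithmetic above can be reproduced as a short calculation. All figures are the ones quoted in this section, carried through as (low, high) pairs; nothing here is new data.

```python
# ROI sketch for proposal AI, using the figures quoted above.
# Ranges are carried as (low, high) pairs.

hours_saved_per_month = (40, 60)      # senior staff hours recovered
hourly_rate = 60                      # £/hour for sales staff
tool_cost_per_month = (100, 300)      # £/month subscription

monthly_saving = tuple(h * hourly_rate for h in hours_saved_per_month)
annual_saving = tuple(m * 12 for m in monthly_saving)

# Net benefit: the pessimistic case pairs the low saving with the high
# tool cost; the optimistic case pairs the high saving with the low cost.
net_annual = (
    annual_saving[0] - tool_cost_per_month[1] * 12,
    annual_saving[1] - tool_cost_per_month[0] * 12,
)

print(monthly_saving)  # (2400, 3600)
print(annual_saving)   # (28800, 43200)
print(net_annual)      # (25200, 42000)
```

Running the numbers this way makes the sensitivity visible: the result is dominated by the hours recovered, not the tool subscription, which is why payback arrives in weeks.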
The Open case (software firm using Cobl) shows the pattern at scale. RFP response time dropped 50 percent after deployment. Engineers spent their time contextualising AI-generated proposals instead of starting from scratch. Proposal turnaround dropped from 5 to 7 days to 2 to 3 days.
Why does the win rate not move?
Because AI does not generate better proposals. It generates faster proposals from the same template logic the firm was already using. Available case studies show win rates flat or marginally improved (1 to 3 percentage points) when the proposals are customised by senior sales after the AI first draft. Proposals sent without customisation lose deals.
The vendor narrative implies the AI itself is the conversion-rate lift. The data shows the lift sits in human customisation that the AI freed up time for. If the firm cuts the senior-review step to maximise the time saving, the win rate drops. If the firm keeps the review step and uses the saved time for qualification, the win rate slowly rises.
The honest measurement is not "win rate after deployment" versus "win rate before deployment." It is "time per proposal, deal-close cycle, and qualification capacity." All three of those move with proposal AI. The win rate is downstream of all of them and moves on a different timescale.
What is the indirect win-rate effect?
Faster proposal turnaround creates earlier client engagement. A proposal landing on Friday instead of Tuesday gets discussed at the client's Monday meeting instead of the following Monday. Deals close 5 to 7 days earlier on average. For a £5m revenue firm with 20 to 30 active deals, that is roughly £100,000 to £150,000 of working-capital relief: the benefit comes from cash arriving earlier, not from new deals being won.
The freed sales-leader time is the longer-tail effect. Twelve hours a month redeployed from drafting to qualification calls or competitive positioning sessions changes which deals enter the proposal stage at all. Better-qualified deals win at higher rates. After three to four months of redeployed time, qualification quality improves and the win rate slowly moves.
Owners measuring the wrong thing miss this. They see the win rate flat at month two and conclude the tool is overhyped. The win-rate move is a six-month effect downstream of a daily time-saving effect. The metric to track first is qualification capacity, not win rate.
What is the prereq trap?
Firms that have not documented their service offerings, pricing models, and team bios cannot deploy proposal AI effectively. The tool has nothing to work from. AI without templates and pricing structures produces generic, low-quality first drafts that take more time to fix than a manual draft takes to write.
The prereq work is 8 to 16 hours of sales and delivery leadership time. Listing all service types the firm offers and standard pricing for each. Documenting 3 to 5 standard proposal sections that can be templated (firm overview, team bios, project approach, timeline, pricing). Capturing client-specific information requirements (industry context, client size, specific challenges addressed).
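The prereq output can be captured as simple structured data. The field names and example entries below are hypothetical, sketching the minimum a proposal tool needs to work from; they do not come from any specific vendor's schema.

```python
# Hypothetical structure for the prereq documentation described above.
# Field names and example values are illustrative only.

service_catalogue = {
    "services": [
        {"name": "Process audit", "standard_price_gbp": 8_000},
        {"name": "Implementation support", "day_rate_gbp": 950},
    ],
    "template_sections": [
        "firm overview",
        "team bios",
        "project approach",
        "timeline",
        "pricing",
    ],
    "client_inputs_required": [
        "industry context",
        "client size",
        "specific challenges addressed",
    ],
}

# A generated first draft is only as good as this input: an empty or
# missing catalogue is what produces the generic drafts described above.
assert len(service_catalogue["template_sections"]) >= 3
```

Whatever format the chosen tool actually expects, the 8 to 16 hours of prereq work amounts to filling in a structure like this once, then keeping it current.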
Owners who skip this and buy the tool first see poor results in the first three proposals, blame the tool, and either spend the prereq time retroactively or abandon the deployment. Owners who do the prereq first see immediate value from proposal one.
Which tools fit a small sales team?
For 5 to 10 proposals a month, Cobl or Proposify at £50 to £200 per user per month is cost-effective if services and pricing are documented. The case study figures (12-person consulting firm, 6-week deployment, £150 a month, net annual benefit £9,900 to £15,780) sit at the lower end of the SME band.
CRM-native proposal generation (Salesforce, HubSpot) at £100 to £300 per user per month is a fit if the firm is already running the CRM at higher tiers. For firms not yet on a paid CRM, the bundling does not justify the upgrade.
For under 5 proposals a month, ChatGPT or Claude at £20 to £30 per user per month with disciplined senior sales review is often adequate. The first draft is generic but the customisation effort matches what a manual draft would require, and the firm avoids a monthly subscription it would not use enough.
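The tool-tier decision above reduces to a breakeven on proposal volume. The extra time a dedicated tool saves per proposal, relative to a general-purpose model, is an assumption in the sketch below, not a figure from any vendor; the subscription costs are the mid-points of the bands quoted above.

```python
# Breakeven sketch: dedicated proposal tool vs general-purpose LLM.
# ASSUMPTION: a dedicated tool saves an extra 0.5 hours per proposal
# over a general-purpose model (templating, CRM data pull, formatting).

extra_hours_saved_per_proposal = 0.5   # assumption, not a vendor figure
hourly_rate = 60                       # £/hour, as quoted above
dedicated_tool_cost = 150              # £/month, mid-band
llm_cost = 25                          # £/month, ChatGPT/Claude band

extra_cost = dedicated_tool_cost - llm_cost                              # £125/month
extra_value_per_proposal = extra_hours_saved_per_proposal * hourly_rate  # £30

breakeven_proposals = extra_cost / extra_value_per_proposal
print(round(breakeven_proposals, 1))  # ~4.2 proposals a month
```

Under these assumptions the dedicated tool pays for itself at roughly 4 to 5 proposals a month, which is consistent with the threshold in the guidance above; firms below it are better served by a general-purpose model plus disciplined review.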
Where do compliance and oversight sit?
Sales proposal automation does not typically trigger primary regulatory requirements, but several governance considerations matter. Proposals containing pricing or terms subject to FCA Consumer Duty must be reviewed by someone accountable for compliance before sending. Proposals containing claims about qualifications or experience in regulated sectors (financial advice, healthcare) must be accurate and verifiable: AI can hallucinate credentials or overstate capabilities, so human review before sending is non-negotiable.
For professional services firms subject to SRA, ICAEW, or other body rules, proposals must comply with professional standards on accuracy and non-misleading marketing, and AI drafts must be reviewed against those standards. The cost of compliance is small (15 to 30 minutes of senior review per proposal) and is already included in the time-saving maths above.
If you are deploying proposal AI and trying to work out which metric to measure, the win rate is the wrong one. The right ones are deal velocity, qualification capacity, and senior-time redeployment. Book a conversation.