A founder is reading a £12,000 roadmap proposal for the second time. The Statement of Work names three deliverables: an AI strategy workshop, a use case discovery exercise, and a prioritisation matrix. The proposal must be signed by Tuesday. She does not yet know whether the deliverable list is generous, fair, or thin. There is nothing in her file to compare it against, and her last consultant left her a deck that has been sitting in a folder for fourteen months.
Most SME owners are buying their first or second roadmap. The market does not give them a benchmark to read the proposal against. So they sign. Six weeks later a document arrives, and the question of whether it is what they paid for is uncomfortable to ask. This piece is the benchmark.
What you should expect for £5,000 to £15,000
A roadmap engagement at this price tier should run six to eight weeks and produce five named outputs. Anything less is either a workshop dressed up as a roadmap or a roadmap that has been compressed to fit a budget. The five outputs are business process interviews with named stakeholders, a data readiness assessment, a prioritised use case list with quantified business value, a recommended sequence with budget tiers, and a risk register. Each one carries information the next stage of the work depends on.
If the proposal in front of you names fewer outputs, or names them more vaguely, that is a signal. The consultant either has not done many of these or is hoping you will not press for specifics. Either way, the proposal needs amendment before the contract is signed, not after.
Why business process interviews with named stakeholders are non-negotiable
A roadmap is only as good as the operational ground truth it sits on. That ground truth comes from one place: structured interviews with the people inside the business who actually run the processes the AI work might touch. Operations leads, senior client managers, the person who runs invoicing, the person who handles the most painful onboarding cases. These conversations cannot be replaced by a survey or a workshop, and they cannot be conducted by a junior consultant whose first day is also yours.
If the proposal lists “stakeholder interviews” without naming who, when, and for how long, the engagement is at risk of producing recommendations that look sensible on paper and fall apart on contact with the people who would have to live with them. Names and durations should be in the Statement of Work. Six interviews of forty-five minutes each is reasonable for a small business. Four one-hour interviews is reasonable for a more complex one. Two thirty-minute “alignment calls” is not.
Why data readiness sits at the centre of the roadmap
The single most-cited cause of AI project failure in the published research is data quality. The RAND meta-analysis found that 80.3 per cent of AI projects fail to deliver expected business value, and the three failure patterns it identifies are data quality issues, organisational maturity gaps, and use case drift. A roadmap that does not assess data readiness is recommending use cases that may not be implementable when the time comes.
The data readiness assessment does not need to be exhaustive. It does need to surface the obvious: are the systems you would draw data from documented, are key fields complete, are there duplicate or inconsistent records that would degrade model accuracy, and is there a path to improving any of that inside the budget the roadmap implies. A consultant who skips this and goes straight to “what AI use cases sound exciting” is selling you a roadmap that may collapse at the implementation stage.
The scope-creep tell that should end the conversation
There is a documented failure pattern in roadmap engagements that is worth naming directly. The consultant takes a clear problem statement, like “improve sales forecast accuracy”, and over the course of the engagement repositions the work as an “AI platform evaluation” or a “data governance initiative”. Each scope expansion sounds defensible on its own. Collectively they widen a tight, deliverable-shaped engagement into a sprawling programme that produces a deck instead of a roadmap and quietly invoices another £10,000.
If this happens during a roadmap engagement, the same dynamic will play out twice as hard during a delivery engagement. Scope discipline at the roadmap stage is the single best predictor of scope discipline at delivery. A consultant who can hold a roadmap to its original shape, document scope changes formally, and re-estimate timeline and budget when scope shifts is the consultant you want for the next stage. A consultant who cannot is the one to step away from before more is committed.
What you should refuse to pay for
A roadmap is not a strategy workshop. A roadmap is not a vendor evaluation document where every recommendation funnels to the same platform. A roadmap is not a slide deck full of “AI opportunity” categories without quantified business value. A roadmap is not generic recommendations applicable to any business in your sector. If you are reading a draft that resembles any of those, the engagement has not produced what you bought.
The output should look specific to your business, your data, your processes, your people, and your priorities. It should be detailed enough that a competent delivery partner could pick it up and start work without re-running the discovery. It should give you a sequence and a price range, not a wish list.
If the roadmap you have just been delivered does not look like that, the next conversation is the one where you say so. If you are about to commission one and want to scope it tightly before signing, book a conversation.