Picture a founder I’ll call Lucy, looking at a consulting proposal that says “expected ROI 2x in 12 months.” The number sits in a clean blue box on page seven. There is no range. No confidence interval. No named risk. The consulting firm has done good work for peers in the same sector. Lucy also knows, from experience, that single-number ROI claims rarely survive contact with reality. The decision is whether to ask the awkward question now or sign and find out at month nine.
The gap between proposal-stage AI ROI claims and year-end reality is one of the most reliably damaging patterns in SME AI consulting. The pattern is structural; it is rarely about consultant dishonesty. Naming the structure makes it easier to handle the conversation at proposal stage rather than living with the gap at year-end.
What is the structural pattern that produces inflated proposal claims?
Three causes sit underneath the gap, in descending order of frequency. The first is aspirational overstatement. The consultant presents best-case scenarios as expected outcomes, rationalised on the grounds that “the client would not hire us with a pessimistic projection.” Most consultants doing this are not lying; they are presenting what they hope will happen and treating it as the central case. The HBR and management consulting ethics literature has documented this pattern for decades.
The second is anchoring bias. The high number gets stated first and reframes everything that follows. Once 2x has been spoken in the room, 1.4x feels disappointing even if 1.4x is genuinely the typical achievement for comparable deployments. The anchor sets the reference point and the actual outcome is judged against the anchor rather than against an independent standard.
The third is outright deception. The rarest form, but real: a consultant who knows the figures are inflated and presents them anyway. This clusters at the high-pressure end of the SME consulting market and is much less common than the first two. Most ROI overstatement falls into the aspirational and anchoring categories rather than active deception.
The effect on the client is similar regardless of which category the overstatement sits in. Disappointment at year-end. Loss of trust. Reluctance to commit to expansion of the AI investment. Damage that compounds across the consulting relationship.
What does the data say about proposal-vs-reality gaps?
The Standish Group CHAOS Report 2024-2025, which tracks technology project outcomes across thousands of projects globally, reports that approximately 35 percent of technology projects deliver expected benefits, 50 percent deliver substantially less, and 15 percent are essentially failures. The median shortfall against projected benefits is approximately 30 percent. Firms expecting 2x ROI typically achieve 1.4x in the originally stated timeframe.
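The 2x-to-1.4x mapping is back-of-envelope arithmetic: the claimed figure discounted by the roughly 30 percent median shortfall. A minimal sketch, assuming nothing beyond the figures above (the function name is mine):

```python
# Illustrative arithmetic only: discounting a proposal-stage point estimate
# by the roughly 30 percent median benefit shortfall described above.
def discounted_roi(claimed_roi: float, median_shortfall: float = 0.30) -> float:
    """Discount a single-number ROI claim by the typical benefit shortfall."""
    return claimed_roi * (1 - median_shortfall)

print(round(discounted_roi(2.0), 2))  # a 2x claim discounts to roughly 1.4x
```

The point is not precision; it is that a reader can apply the discount to any headline number in a proposal before treating it as a planning figure.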
This is not a recent finding. The CHAOS data has shown a similar distribution for years across software and technology projects. AI projects, though less extensively studied, appear to follow similar patterns or worse, likely because AI is newer, less well understood, and involves greater change management complexity than the average technology rollout.
The MIT NANDA failure analysis converges on the same place. Roughly 60 to 70 percent of technology projects fail to deliver expected returns, where “failure” means delivering less than 70 percent of projection. The distribution is heavily skewed: a minority of projects deliver exceptional returns, a plurality deliver moderate returns of 1.2x to 1.5x, and a substantial minority deliver poor returns of 0.5x to 1x.
What this means for an SME reading a proposal: the gap is not a sign that the consultant is bad. It is the expected outcome given how proposals are typically constructed. The defensive move is to interrogate the proposal at proposal stage, not to be disappointed at year-end.
What does a defensible proposal-stage claim actually look like?
A defensible claim has three components, drawn from how risk-aware analysts present any forecast. The first is a range rather than a point estimate. The second is a confidence level associated with the range. The third is named risks that could push the outcome toward the lower end. Together, these three components turn an aspirational figure into something the buyer can actually plan against.
A worked example. “Based on comparable deployments and your stated use cases, we expect ROI of 1.5x to 2.5x within 12 months, with 70 percent confidence that ROI will exceed 1.5x. The primary risks are adoption rate (if adoption falls below 60 percent of target users, ROI will be 1.2x to 1.5x) and behaviour change (if users redirect freed-up time to non-billable work, realised ROI will be roughly 20 percent lower than measured productivity impact).”
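A claim structured this way can be planned against mechanically. A minimal sketch in Python using only the figures from the example above — the base range, the 60 percent adoption threshold, and the 20 percent behaviour-change haircut; the function name and return shape are illustrative:

```python
# Planning sketch for the worked example: base range 1.5x-2.5x, with the
# two named risks (adoption below 60%, time redirected to non-billable work).
def planning_roi(adoption_rate: float, time_redirected_nonbillable: bool) -> tuple[float, float]:
    """Return a (low, high) ROI range under the two named risks."""
    low, high = 1.5, 2.5                   # base expected range
    if adoption_rate < 0.60:               # adoption risk
        low, high = 1.2, 1.5
    if time_redirected_nonbillable:        # behaviour-change risk
        low, high = low * 0.8, high * 0.8  # roughly 20% lower realised ROI
    return low, high

print(planning_roi(0.75, False))  # base case
print(planning_roi(0.50, True))   # both risks land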
This structure is honest because it makes uncertainty explicit. The buyer can read the range and calibrate expectations realistically. The buyer can read the named risks and identify which are most critical to manage during deployment. The buyer can plan for the lower end of the range as well as the upper end, which is what a serious financial decision requires.
The consultant who provides a range with confidence intervals is more credible than the consultant who provides a single point estimate. The range is harder to construct. It requires the consultant to have actually done the analysis carefully across their customer base, not just picked the figure that wins the engagement.
What about outcome-based pricing?
Some consulting practices are moving toward outcome-based pricing for AI engagements, where the consultant’s fee is adjusted to the results actually measured. This alignment of incentives is structural. It makes inflated proposal claims directly costly to the consultant; the consultant earns less if the promised ROI does not materialise. Where outcome-based pricing is applied genuinely, the proposal-stage claim tends to be much closer to the actual delivered ROI.
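Mechanically, the alignment can be pictured as a fee that scales with delivered-versus-promised ROI. The base fee, the 120 percent cap, and the function below are hypothetical, invented purely to show why inflating the promised number becomes costly:

```python
# Hypothetical outcome-based fee structure: the cap and figures are
# invented for illustration, not taken from any real engagement.
def outcome_based_fee(base_fee: float, actual_roi: float, promised_roi: float) -> float:
    """Scale the fee by delivered-vs-promised ROI, capped at 120% of base."""
    ratio = min(actual_roi / promised_roi, 1.2)
    return base_fee * ratio
```

Under this structure, promising 2x and delivering 1.4x cuts the fee to 70 percent of base, so a consultant who inflates the promised figure taxes their own revenue.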
The honest reality for SMEs in 2026: outcome-based pricing is not yet standard in the AI consulting market. Most engagements remain fixed-fee or time-and-materials, where the consultant’s revenue is decoupled from actual results. The structural fix is in slow motion; it will not arrive in time to protect the firm signing a proposal next quarter.
The discipline therefore has to come from the buyer for now. The buyer-side question worth asking explicitly: is your revenue dependent on the firm seeing real ROI, or is it decoupled? A consultant who answers honestly that their fee is fixed regardless of outcome has told the firm something useful about how to read the rest of the proposal.
What is the right question to ask at proposal time?
The right question to bring to a proposal-stage conversation has three parts. What is the realistic range for ROI? What would have to be true for us to land below the lower bound? And what is your stake, as the consultant, in the answer? Each part flushes out something specific. The first reveals whether the consultant can articulate a range rather than a point estimate. The second reveals whether the consultant has thought about risks and failure modes. The third reveals whether the consultant’s incentives are aligned with the firm’s actual outcome.
If the consultant can answer all three, the proposal is grounded. If they cannot, the proposal is selling the headline number without the analysis behind it. Either answer is informative: the firm signs with eyes open or walks away.
For Lucy at the proposal-reading moment, the action is concrete. Email the consulting firm and ask the three questions. If the answers come back clean, the engagement is worth pursuing. If they come back vague, the firm has identified a problem before signing rather than after.
If you are reading an AI consulting proposal and want to think through the right questions to ask before signing, book a conversation.