After the failed AI engagement, what's actually next

TL;DR

A meaningful share of small business AI buyers are second-time buyers, having lost $10,000 to $30,000 on a first engagement whose recommendations are now unused. Most failures are engagement-design failures, not consultant failures. The next move starts with a four-question diagnostic on the previous engagement, applied honestly, including to the next consultant under consideration.

Key takeaways

- A real share of AI buyers are second-time buyers. The twohundred.ai practitioner guide references SMEs commonly losing $10,000 to $30,000 on the first engagement before learning to buy differently.
- Most failed AI engagements aren't consultant failures, they're engagement-design failures: wrong scope, no measurable success criteria, premature tool selection, and no internal accountability for adoption.
- Run a four-question diagnostic on the previous engagement before deciding what's next. What problem was it supposed to solve? Was a measurable outcome agreed in writing? Did discovery come before tools? Who in the business owned adoption?
- When evaluating the next consultant, the practitioner filter is sharp: "Ask what specific number they will move for you. If they cannot name one in 30 seconds, end the call." Lived business experience, sector knowledge, and case-study specificity matter more than credentials.
- The next engagement does not have to look like the last one. A much smaller, more sharply scoped piece of work often outperforms a comprehensive engagement, because it forces the success criteria to be real.

The owner of a thirty-person services firm hired an AI consultant about fourteen months ago. The engagement ran for six weeks. There were two workshops, a discovery interview round, and a twenty-eight-slide deck at the end with a clear set of recommendations. Two specific tools were licensed off the back of it. The pilot was scoped, the team was briefed, and the firm started.

Today the deck is in a folder she has not opened since November. The pilot was never finished. One of the tool subscriptions is still being charged monthly. She has not told her board the engagement didn’t land. She has not told her partner about the size of the spend. She has been quietly carrying the private conclusion that the consultant was wrong, or that AI just does not really apply to her firm, or both. Most of the AI consulting content she now reads addresses first-time buyers, and she finds herself reading less of it.

This is the second-time buyer who is not in any of the marketing personas. The data implies the cohort is large; practitioner accounts state it directly. This piece is for that reader, and the route through is clearer than the public conversation makes it sound.

How big is this cohort, actually?

No public survey isolates “second-time SME AI buyers” as a category, but practitioner guides and adjacent survey patterns triangulate the size of the cohort. The twohundred.ai 2025 practitioner guide references the figure most directly: SMEs commonly losing “ten to thirty thousand on the wrong engagement first” before learning how to buy differently. Effica’s 2025 piece on AI implementation failure describes the same arc in narrative form, ending with the line about “a line item in the budget for software nobody touches”.

The Goldman Sachs 2026 survey finds that 14 percent of small businesses have AI fully integrated, while 93 percent of that integrated cohort report positive impact. The gap between the two figures is largely populated by founders who have started, run into one of the failure modes, and stopped. A meaningful share of those founders started by hiring someone, and the engagement is part of what stalled.

The pattern is more common than founders think and rarely talked about, because nobody volunteers a story that ends in a deck nobody opens. That silence makes each individual founder feel like the exception. She is not the exception; she is the median.

Was the consultant the problem, usually?

Sometimes, but not as often as it feels. Most failed AI engagements are engagement-design failures. The wrong problem was scoped. No measurable outcome was agreed in writing. Discovery was rushed and tool selection happened before the business questions had been answered. Nobody inside the business was named as accountable for adoption after the deck arrived. Any one of those is enough to stall a competent consultant’s work.

This matters because the obvious next move (“get a better consultant”) inherits the same engagement design unless the design itself is fixed. A different consultant doing the same flawed engagement produces the same flat dashboard twelve months later. The improvement is in how the engagement is shaped, not only in who is hired.

That framing is also how the founder gets out from under the private “AI doesn’t apply” voice. The technology was probably fine. The consultant was probably competent. The shape of the work was wrong. Naming that does not erase the spend, but it does turn the prior engagement from a verdict into a data point.

What’s the four-question diagnostic on the previous engagement?

Before deciding what’s next, sit with four questions about the previous engagement. They take an hour, ideally with whoever inside the business was closest to the work.

First, what problem was the engagement supposed to solve in business terms? The answer should be a sentence about a specific outcome the business needed, not a sentence about AI capabilities. If it comes out as “we wanted to figure out where AI could help”, the engagement was scoped wrong from the start.

Second, was a measurable outcome agreed in writing? Specifically: a number, a function, and a date. “Reduce our client report turnaround from twelve days to six within ninety days” is a real outcome. “Improve efficiency through AI” is not. If the contract or kick-off doc had no number, neither side could tell whether the work succeeded.

Third, did discovery come before tools? Engagements that select tools in the first two weeks tend to fail. Tools should be the second-to-last decision, after the business questions have been answered. If two specific tools were licensed before the discovery had been written down, the sequence was wrong.

Fourth, who inside the business was named as accountable for adoption after the engagement ended? An AI engagement without a named internal owner of adoption is, in practice, a deck-delivery exercise. Most stalled engagements fail this question. If three or four of the four answers come back fuzzy, the engagement design is the failure mode, not the consultant or the technology.
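For readers who like their checklists explicit, here is a minimal sketch of the diagnostic as a scoring exercise, written in Python. The four questions and the three-or-four-fuzzy rule come straight from the paragraphs above; everything else, the class name, the field names, the wording of the verdicts, is illustrative invention, not a real tool.

```python
from dataclasses import dataclass

@dataclass
class EngagementReview:
    """One flag per diagnostic question: True if the answer was clear, False if fuzzy."""
    problem_in_business_terms: bool   # Q1: a specific outcome, not "figure out where AI could help"
    written_measurable_outcome: bool  # Q2: a number, a function, and a date
    discovery_before_tools: bool      # Q3: tools chosen after the business questions were answered
    named_adoption_owner: bool        # Q4: someone inside the business owned adoption

def diagnose(review: EngagementReview) -> str:
    """Apply the decision rule: three or four fuzzy answers point at engagement design."""
    fuzzy = [
        review.problem_in_business_terms,
        review.written_measurable_outcome,
        review.discovery_before_tools,
        review.named_adoption_owner,
    ].count(False)
    if fuzzy >= 3:
        return "Engagement-design failure: fix the shape of the work before hiring again."
    if fuzzy > 0:
        return "Mixed picture: tighten the fuzzy areas before the next engagement."
    return "Design was sound: look at execution and adoption instead."

# The engagement in the opening story: vague scope, no written number,
# tools licensed before discovery, nobody named as adoption owner.
print(diagnose(EngagementReview(False, False, False, False)))
```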

What should the next engagement look like?

Smaller and sharper. The temptation after a failed engagement is to either give up entirely or commission an even larger comprehensive piece of work to “do it properly this time”. Both are wrong. The version that works is a tight three-to-six-week engagement with a single measurable outcome attached, run on a real piece of work the business is already doing.

Concretely, that looks like this. One business problem with a number attached. Reduce turnaround time on X by Y percent. Cut admin time on Z by half a day a week. Get one client report to a quality threshold without senior review. The engagement runs on that problem, and the deliverable is whether the number moved within the agreed window. If it did, the next piece of work is scoped from the result. If it didn’t, the diagnosis is now informed and inexpensive.
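To make “a number, a function, and a date” concrete, here is a small sketch of what that written outcome might look like if you forced it into code, with a single check that decides whether the engagement landed. The figures mirror the turnaround example above; the class, field names, and dates are hypothetical, not a template from any real contract.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class OutcomeAgreement:
    """The written success criterion: a number, a function, and a date."""
    metric: str        # what is measured, in business terms
    owner: str         # the named internal owner of adoption
    baseline: float    # where the number starts
    target: float      # where it must get to
    deadline: date     # the agreed window

    def landed(self, actual: float, measured_on: date) -> bool:
        """Did the number reach the target inside the window?
        Written for lower-is-better metrics (turnaround days, admin hours);
        flip the comparison for metrics where higher is better."""
        return measured_on <= self.deadline and actual <= self.target

# Hypothetical agreement mirroring the turnaround case in the text.
agreement = OutcomeAgreement(
    metric="client report turnaround (days)",
    owner="operations lead",
    baseline=12.0,
    target=6.0,
    deadline=date(2026, 6, 30),
)
print(agreement.landed(actual=5.5, measured_on=date(2026, 6, 15)))  # True: number moved in time
print(agreement.landed(actual=9.0, measured_on=date(2026, 6, 15)))  # False: moved, but not to target
```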

This shape forces the consultant to be useful or be wrong inside a short window. It also forces the founder to assign internal accountability for adoption from day one, because the success metric depends on it. The asymmetry is favourable: a well-scoped six-week engagement at five to fifteen thousand often outperforms a sprawling six-figure programme that produces a deck.

How should you evaluate the next consultant?

The practitioner filter from twohundred.ai is the sharpest single test. Ask the consultant what specific number they will move for you in the first ninety days. If they cannot name one in thirty seconds, end the call. Real consultants who do this work know the numbers they move and know which ones they don’t. The hesitation tells you everything.

A few other tests, also from the practitioner literature. Ask for two case studies that match your sector and size. Generic case studies are a soft signal of generic capability. Ask the consultant whether they would do a smaller, more sharply scoped first piece before any larger commitment. Anyone unwilling to scope down is selling rather than diagnosing. Ask who inside your business they would want as the named owner of adoption from day one. A consultant who shrugs at that question has not yet learned what makes engagements stick.

The honest application of all of this includes me. If you’re considering a conversation with me, run the same questions. The point of the framework is not to filter for the right answer; it is to filter for the right kind of conversation. A useful consultant welcomes those questions because they make the engagement that follows much more likely to land.

If you’d like to apply that diagnostic to your previous engagement, or to a possible next one, book a conversation.

Sources

  • twohundred.ai (2025). "How to Hire an AI Strategy Consultant Without Burning Cash", practitioner guide. SMEs commonly losing $10,000 to $30,000 on the wrong engagement first; "Ask what specific number [the consultant] will move for you... If they cannot name one in 30 seconds, end the call."
  • Effica/Novusbroker (2025). Practitioner observation on the typical failed-engagement pattern: "tools unused, the consultant's recommendations are collecting dust, and the business is running exactly the same way it was before, except now there's a line item in the budget for software nobody touches".
  • Goldman Sachs 10KSB Voices (2026). SME AI consulting challenges named as "insufficient technical skills, navigating a crowded marketplace of tools, and concerns regarding data privacy".
  • WSI (2025). Companies working with AI consultants are 2.5 times more likely to achieve sustainable AI success than those going it alone.
  • Launch Day Advisors (2025). When to hire an AI strategy consultant: three trigger criteria.
  • Whitehat SEO (2026). Evaluation framework: "One relevant case study is worth more than ten generic credentials."
  • Source Global Research (2025). The UK Consulting Market in 2025. Analysis of UK consulting fee benchmarks, day rates, and category sizing.
  • Boston Consulting Group (2025). Are You Generating Value from AI? The Widening Gap. 60 per cent of firms report almost no material value from AI investment, the asymmetric-risk backdrop for consulting choice.
  • ICAEW. Investment Appraisal, technical guidance for Chartered Accountants. UK reference for opportunity-cost framing in technology-investment decisions.
  • MIT NANDA (August 2025). 95 per cent of GenAI pilots fail to deliver ROI, with specification, not technology, cited as the primary failure cause.
  • Consultancy.uk. UK consulting industry fees and rates: public reference for UK consulting day-rate ranges by tier.

Frequently asked questions

Is it normal to have spent on an AI consultant and seen no return?

More common than the public conversation suggests. The twohundred.ai practitioner guide references SMEs commonly losing $10,000 to $30,000 on the first engagement before learning to buy differently. The pattern is rarely a bad consultant; it is usually engagement design that didn't include measurable outcomes or internal accountability for adoption.

How do I diagnose what went wrong with the last engagement?

Run four questions on it. What problem was the engagement supposed to solve in business terms? Was a measurable outcome agreed in writing? Did discovery come before tool selection? Who inside the business was accountable for adoption? If three or four come back fuzzy, the engagement design is the failure, not the consultant or the technology.

What should I ask the next consultant differently?

Ask what specific number they will move for you in the first ninety days. If they cannot name one in thirty seconds, end the call. Ask for two case studies that match your sector and size. Ask whether they would do a smaller, more sharply scoped first piece before any larger commitment. Anyone unwilling to scope down is selling rather than diagnosing.

Does the next engagement have to be another big project?

No. A small, sharply scoped piece of work, three to six weeks, with a single measurable outcome, often outperforms a larger comprehensive engagement. The smaller scope forces the success criteria to be real, gets a quick proof point, and lets the founder evaluate the consultant's work before committing more.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
