The 73 percent figure, decomposed: where most first AI engagements fail

[Image: a founder at a desk reading a printed report covered in pencilled notes, a closed laptop and coffee cup beside it]
TL;DR

The 73 percent UK SME AI failure rate is real but flattened. MIT NANDA, S&P Global, and the budget-allocation data together show where the failures cluster: about 95 percent of generative AI pilots stall because of an organisational gap, not model quality, with more than half of generative AI budgets pointed at sales and marketing while the biggest measurable ROI sits in back-office automation.

Key takeaways

- 73 percent of UK SMEs do not see ROI in year one. About 80 percent of UK and US AI projects fail to deliver business value, roughly twice the rate for traditional IT.
- MIT NANDA: approximately 5 percent of generative AI pilots achieve rapid revenue acceleration. The remainder stall because of an organisational learning gap, not model quality.
- S&P Global: 42 percent of companies abandoned most AI initiatives in 2025, up from 17 percent the year before. The first-engagement failure cohort is accelerating.
- More than half of generative AI budgets go to sales and marketing. The biggest measurable ROI sits in back-office automation, which is systematically under-resourced.
- The 27 percent who succeed start with a defined business problem, partner externally (vendor-purchased deployments succeed 67 percent of the time, internal builds at one-third that rate), and start small.

A founder reads the “95 percent of AI projects fail” headline for the third time this month. She has just spent £30,000 on an engagement that quietly stopped delivering. She is uncertain whether to feel less alone or more discouraged. The headline figure is useless to her because it does not tell her where her own engagement fell off, which means it does not tell her what to fix.

The data on AI engagement failure is real. It is also flattened by the way it is usually quoted. Underneath the headline number sits a much more specific picture: a few well-defined places where most failures concentrate, and a small set of conditions that distinguish the engagements that did deliver. For a founder staring at a stalled programme, the breakdown is the useful read.

Why do the headline failure rates vary so widely?

The numbers depend on which cohort you measure and what you mean by success. 73 percent of UK SMEs that attempt AI implementation do not see meaningful ROI within the first year. Around 80 percent of UK and US AI projects fail to deliver business value, roughly twice the rate for traditional IT. MIT NANDA’s deeper analysis found about 5 percent of generative AI pilots achieve rapid revenue acceleration.

The wider abandonment trend reinforces the picture. S&P Global Market Intelligence found that 42 percent of companies abandoned most of their AI initiatives in 2025, up from just 17 percent the year before. The cohort of first-engagement failures is accelerating. The market is moving faster than learning cycles, and the gap between the marketing narrative and the operational reality is widening.

What the variation in figures actually tells you is something more useful than the headline noise. The answer to “did AI fail in my organisation” depends on what you set out to measure. If success was “the tool went live”, the data looks better. If success was measurable financial impact within twelve months, it looks much worse. The first diagnostic move is to ask which definition you were betting on.

Where does the failure actually concentrate?

MIT NANDA’s research is the most useful single source on this question. The headline finding gets misquoted as “95 percent of AI fails”, which sounds like a verdict on the technology. The actual finding is more specific. About 95 percent of generative AI pilots stall because of an organisational gap. The authors call this the “GenAI Divide” and attribute it to the learning gap between the tools and the organisations using them.

Generic tools like ChatGPT excel for individuals precisely because of their flexibility. They stall in enterprise use because they do not learn from or adapt to workflows. They never embed into the operational context. A pilot that worked in week one on one team’s enthusiasm fails in month six because the tool never accumulated anything the organisation could not reproduce by hand if it had to.

The implication for a stalled engagement is direct. The question to ask is whether the engagement built any actual learning into the operational workflow, or whether it stopped at “the team got access to the tool”. Most failed first engagements stop at access. The technology was probably fine. The implementation never crossed the divide.

Why does half the GenAI budget go to the wrong departments?

The MIT data reveals a misallocation pattern that explains a lot of stalled engagements. More than half of generative AI budgets go to sales and marketing tools. The biggest measurable ROI sits in back-office automation: cutting business process outsourcing, agency costs, repetitive admin work, internal data extraction. That is the function where the financial returns concentrate, and it is systematically under-resourced in first engagements.

The pattern is a failure of prioritisation. Sales and marketing tools are easier to get budget approval for, easier to demo, and easier for vendors to sell. The harder, less glamorous work of automating contract review, onboarding flows, financial reporting, or client report drafting takes longer to land and demands more internal change-management work. So it gets deferred.

For an SME owner reading their own first-engagement post-mortem, the question to ask is whether the budget went where the ROI actually sits, or where the budget was easiest to allocate. Those are often different places. A second engagement that lands the same way will produce the same flat result.

What do the 27 percent who succeed do differently?

The succeeding minority are not lucky. The patterns are documented across multiple sources and they are concrete. Successful organisations start with a defined business problem and clear success metrics. The technology choice follows from that. They invest in data infrastructure as prerequisite work. They give line managers decision authority instead of centralising in an AI lab. They treat the first deployment as a learning iteration.

Two pieces of data sharpen the picture further. MIT NANDA found that vendor-purchased deployments and external partnerships succeed about 67 percent of the time, while internal builds succeed at one-third the rate. The instinct to “build something proprietary so we own it” is statistically the worst-performing path. The Slack Workforce Index from Salesforce adds the human dimension: daily AI users are 81 percent more job-satisfied, 64 percent more productive, and 246 percent more likely to feel connected at work. Adoption follows fit.

Together these say something simple. The 27 percent succeed because they pick a real problem, partner externally, start small, and let adoption emerge from genuine usefulness. The rest of the recovery arc, including the diagnostic, the consolidation, and the structural posture shifts, is downstream of those choices.

How should you read the data on your own engagement?

The honest read of your own engagement starts with “where in the breakdown am I”. Did the engagement stop at access without crossing the learning divide? Did the budget go to the easier-to-approve function rather than where the ROI sits? Did internal building win out over external partnership? Was there a defined business problem, or did the project go searching for one? Each answer points at a specific repair.

The 73 percent failure rate is real. The 27 percent who succeed do so deliberately, and the conditions that separate them from the rest are documented. The next post in this cluster walks through the six-dimension diagnostic audit that surfaces which of these conditions applied to your first engagement, and which need to be addressed before a second one is structured. The parent piece on the second-time buyer’s situation is the on-ramp if you want the engagement-design diagnostic first.

If you would like to apply that diagnostic to a stalled engagement, book a conversation.

Sources

- Future Business Academy 2025/26: 73 percent UK SME AI failure rate. https://futurebusinessacademy.com/why-smes-are-failing-at-ai/
- Nexer 2026: 80 percent of UK AI projects fail; S&P Global 42 percent abandonment in 2025, up from 17 percent in 2024. https://nexergroup.com/uk/2026/01/20/why-80-of-uk-business-ai-projects-fail/
- MIT NANDA via Fortune 2025: 95 percent stall, 5 percent rapid revenue, the GenAI Divide. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
- MIT NANDA via SoftwareSeni: vendor-purchased deployments succeed 67 percent of the time, internal builds at one-third the rate. https://www.softwareseni.com/why-95-percent-of-enterprise-ai-projects-fail-mit-research-breakdown-and-implementation-reality-check/
- Salesforce / Slack Workforce Index 2025: daily AI users 81 percent more satisfied, 64 percent more productive, 246 percent more connected. https://www.salesforce.com/news/stories/daily-ai-workforce-use-growth/
- IBM Think 2025: most enterprise AI projects stall before scale. https://www.ibm.com/think/insights/why-most-enterprise-ai-projects-stall-before-scale

Frequently asked questions

Is the 73 percent AI failure rate accurate?

73 percent of UK SMEs that attempt AI implementation do not see meaningful ROI within the first year, per Future Business Academy's 2025 analysis. The figure is real but cohort-dependent. The US comparable, drawing on RAND-cited research, is around 80 percent of AI projects failing to deliver business value, roughly twice the rate for traditional IT. The exact number depends on what you mean by success.

Why do most generative AI pilots stall?

MIT NANDA's analysis of 150 leadership interviews and 300 public deployments found that approximately 95 percent of generative AI pilots stall, with the cause sitting in the learning gap between tools and organisations. Generic tools work for individuals because of flexibility, and stall in enterprise use because they never embed into the operational context. The technology is rarely the failure point.

Where does the AI ROI actually sit for SMEs?

Back-office automation: cutting business process outsourcing, agency costs, repetitive admin work, internal data extraction. MIT NANDA data shows that more than half of generative AI budgets go to sales and marketing tools, while the biggest measurable ROI sits in back-office work, which is systematically under-resourced in first engagements.

What separates the 27 percent who succeed?

Four things: a defined business problem with measurable success metrics; data infrastructure invested in as prerequisite work; external vendor partnership (which succeeds about 67 percent of the time, with internal builds at one-third the rate); and starting small with line managers given decision authority instead of an AI lab. Adoption follows fit.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
