A founder asks his accountant a direct question after the failed first engagement. “Honestly, if we do this again, what are the odds it actually works?” The accountant is good at numbers but does not have an answer. Neither does the founder, so he goes looking for the data himself. He finds it scattered across three bodies of research that do not usually sit together: project management, digital transformation, and change management. Once they do sit together, the answer is more useful than the typical “follow these best practices” response.
The literature on second-attempt success is rich, if you know where to look: Standish Group’s CHAOS Report on IT projects, McKinsey’s research on stalled digital transformations, Prosci’s data on change-management ROI, MIT NANDA on AI specifically, and the Slack Workforce Index on adoption-fit. Each tells part of the story. Read together, they say something specific and actionable about what the second engagement needs to do differently to land.
What does the CHAOS Report say about project size and recovery?
The Standish Group CHAOS Report has tracked IT project outcomes since 1994. The most recent data shows 31 percent of projects successful (on time, on budget, full scope), 50 percent challenged (late, over budget, or incomplete scope), and 19 percent failed outright (cancelled before completion). The split that matters more is by size.
Small projects achieve around 90 percent success. Large projects achieve under 10 percent. The size variable is more predictive than the methodology variable.
For an SME staring at a failed first engagement, this is the single most actionable finding in the literature. The first engagement was probably scoped large because it felt comprehensive, complete, the right way to do it. The data suggests that exact instinct is the structural reason the engagement failed. Recovery is possible, but it typically requires reducing scope to deliver 60 to 70 percent of what was originally planned, extending timelines, and increasing investment in change management, communication, and testing.
The historical trend is also informative. In 1994, only 16 percent of software projects were delivered on time and on budget; 31 percent were cancelled outright. By 2020, the success rate had climbed to 31 percent. Methodology has improved. Project size has not shrunk fast enough. The insight worth taking is to succeed by going smaller.
What does McKinsey find about stalled digital transformations?
McKinsey’s research on stalled transformations finds that more than 70 percent of organisations pursuing digital change report that progress has slowed or stalled at some point. The more useful finding is what makes that stall recoverable. McKinsey attributes more than 60 percent of the cited stall causes to factors organisations can control in the near to medium term.
The instinct that “the market shifted” or “regulators changed the goalposts” is usually wrong. The bigger reasons are internal: insufficient resourcing, lack of clarity on the strategy, and poor strategy quality from the outset.
When organisations successfully restart stalled programmes, what do they do differently? McKinsey’s research highlights a small set of interventions. Building a more rigorous financial model is one; the full version is an ongoing model, updated frequently as assumptions prove correct or incorrect. The first-engagement version was usually a one-shot business case at the outset, which is why it stopped being useful by month four. Partnering with the operations function is another, treating ops as a co-owner of the redesign rather than a stakeholder to be informed. Replacing the transformation leader is sometimes cited, but McKinsey treats this with scepticism, noting that leadership churn often creates more disruption than it resolves.
For the SME founder reading this after a failed engagement, the read is direct. The reasons it stalled are mostly inside the organisation. The work to fix those reasons is mostly inside the organisation as well. External partners help. The decisions stay with the firm.
What does Prosci’s data say about change management ROI?
Prosci’s research base is one of the largest in the field, drawing on more than 8,000 data points from organisations across sectors. The headline finding is direct. Excellent change management correlates with 80 percent of projects meeting or exceeding objectives. Poor change management correlates with 14 percent. The difference is not training spend alone. It is governance, communication clarity, stakeholder engagement, and explicit management of the human experience of change.
That gap, 80 percent against 14 percent, is the single largest swing in the literature. It dwarfs methodology choices, vendor selection, and technical decisions. An SME conducting a second engagement should expect to allocate 30 to 40 percent of total programme cost to change management. Not as an add-on. As a core part of the programme. Most failed first engagements allocated single-digit percentages: on a hypothetical £100,000 programme, that is under £10,000 where £30,000 to £40,000 was needed. The disparity is the explanation.
This is also the cleanest argument against the “we already did the change management” reflex. The first engagement may have included a kickoff session, a vendor demo, a help-desk channel. That is not change management in the Prosci sense. It is the introduction. Real change management is sustained, structured, and explicitly funded.
What does MIT NANDA add specifically about AI?
MIT NANDA’s analysis of 150 leadership interviews and 300 public AI deployments adds two findings that matter for the AI-specific case. First, AI deployments using vendor-purchased tools and external partnerships succeed about 67 percent of the time, while internal builds succeed at one-third that rate. The instinct to build something proprietary “so we own it” is statistically the worst-performing option. Second, success concentrates in deployments that picked a focused use case rather than attempting a broad transformation.
The implication is sharp. For an SME with a failed first engagement, going back to the build-something-internal route is statistically worse than partnering externally. The MIT data also undercuts the “we should consolidate vendors and pick one big platform” reflex. The 67 percent figure is for purchased tools deployed with external partnership, not for one-size-fits-all platform plays.
The Slack Workforce Index from Salesforce adds the human-fit data. Daily AI users are 81 percent more satisfied with their jobs, 64 percent more productive, and 246 percent more likely to feel connected at work. When AI genuinely helps people do better work, adoption follows. When it does not, no amount of mandate or rollout will move the curve.
How should you read these together for your own decision?
Six findings, read together. Small projects succeed at around 90 percent. Large projects at under 10 percent. Most stall causes are organisationally controllable. Excellent change management drives 80 percent project success against 14 percent for poor. Vendor-purchased and partnership-led AI deployments outperform internal builds three-to-one. Daily users of well-fit AI are dramatically more engaged. The minority who succeed are not lucky. They are deliberately small, externally partnered, change-management-heavy, and adoption-focused.
The implication for the design of a second engagement is direct. Scope smaller than the first. Partner with an external vendor rather than building internally. Allocate 30 to 40 percent of programme cost to change management. Pick a use case where adoption-fit is observable, not just possible. Build governance and communication discipline into the structure from day one. None of these are guarantees. Together they shift the odds from the headline failure rate to something far friendlier.
The next post in the cluster is the counter-frame: when not to try a second time, covering the four conditions where retrying is the wrong move. A companion post on the four posture shifts translates the data here into the structural decisions that go into the second engagement’s statement of work.
If you would like to walk through how this data applies to a specific stalled engagement, book a conversation.