What the data actually says about second-engagement success

[Image: a founder at a desk reviewing three printed reports side by side, cross-referencing numbers in an open notebook]
TL;DR

The literature on IT project recovery, digital transformation revival, and change-management effectiveness converges on the same insight: second engagements succeed when the organisation deliberately addresses what went wrong the first time, scales smaller, and invests in change management at 30 to 40 percent of project cost. CHAOS data: small projects succeed 90 percent of the time, large projects under 10 percent. Prosci: excellent change management drives 80 percent project objective achievement, against 14 percent with poor change management.

Key takeaways

- Standish CHAOS Report: 31 percent of IT projects succeed, 50 percent are challenged, 19 percent fail outright. Small projects succeed at around 90 percent. Large projects succeed at under 10 percent.
- McKinsey: more than 70 percent of digital transformations report stalled progress at some point. More than 60 percent of stall causes are factors the organisation can control in the near to medium term.
- Prosci research (8,000+ data points): excellent change management correlates with 80 percent project objective achievement. Poor change management correlates with 14 percent.
- MIT NANDA: AI deployments using vendor-purchased tools and external partnerships succeed about 67 percent of the time. Internal builds succeed at one-third the rate.
- Slack Workforce Index: daily AI users are 81 percent more job-satisfied, 64 percent more productive, 246 percent more likely to feel connected. Adoption follows fit.
- The 27 percent who succeed at AI deployment do so deliberately. They start with a defined business problem, partner externally, scale smaller, and invest disproportionately in change management.

A founder asks his accountant a direct question after the failed first engagement. “Honestly, if we do this again, what are the odds it actually works?” The accountant is good with numbers but does not have an answer. Neither does the founder. He starts looking for the data and finds it scattered across three bodies of research that do not usually sit together: project management, digital transformation, and change management. Once they do sit together, the answer is more useful than the typical “follow these best practices” response.

The literature on second-attempt success is rich, if you know where to look: Standish Group’s CHAOS Report on IT projects, McKinsey’s research on stalled digital transformations, Prosci’s data on change-management ROI, MIT NANDA on AI specifically, and the Slack Workforce Index on adoption fit. Each gives part of the picture. Read together, they say something specific and actionable about what the second engagement needs to do differently to land.

What does the CHAOS Report say about project size and recovery?

The Standish Group CHAOS Report has tracked IT project outcomes since 1994. The most recent data shows 31 percent of projects successful (on time, on budget, full scope), 50 percent challenged (late, over budget, or incomplete scope), and 19 percent failed outright (cancelled before completion). The split that matters more is by size.

Small projects achieve around 90 percent success. Large projects achieve under 10 percent. The size variable is more predictive than the methodology variable.

For an SME staring at a failed first engagement, this is the single most actionable number in the literature. The first engagement was probably scoped large because it felt comprehensive, complete, the right way to do it. The data suggests that exact instinct is the structural reason the engagement failed. Recovery is possible. Recovery typically requires reducing scope to deliver 60 to 70 percent of what was originally planned, extending timelines, and increasing investment in change management, communication, and testing.

The historical trend is also informative. In 1994, only 16 percent of software projects were delivered on time and on budget; 31 percent were cancelled outright. By 2020, the success rate had climbed to 31 percent. Methodology has improved. Project size has not shrunk fast enough. The succeed-by-going-smaller insight is the one to take.

What does McKinsey find about stalled digital transformations?

McKinsey’s research on stalled transformations finds that more than 70 percent of organisations pursuing digital change report progress has slowed or stalled at some point. The more useful finding is what makes that stall recoverable. McKinsey identifies that more than 60 percent of the cited stall causes are factors organisations can control in the near to medium term.

The instinct that “the market shifted” or “regulators changed the goalposts” is usually wrong. The bigger reasons are internal: resourcing, lack of clarity on the strategy, poor strategy quality from the outset.

When organisations successfully restart stalled programmes, what do they do differently? McKinsey’s research highlights a small set of interventions. Building a more rigorous financial model is one: an ongoing model, updated frequently as assumptions prove correct or incorrect. The first-engagement version was usually a one-shot business case at the outset, which is why it stopped being useful by month four. Partnering with the operations function is another, treating ops as a co-owner of the redesign rather than a stakeholder to be informed. Replacing the transformation leader is sometimes cited, but McKinsey treats it with scepticism, noting that leadership churn often creates more disruption than it resolves.

For the SME founder reading this after a failed engagement, the read is direct. The reasons it stalled are mostly inside the organisation. The work to fix those reasons is mostly inside the organisation as well. External partners help. The decisions stay with the firm.

What does Prosci’s data say about change management ROI?

Prosci’s research base is one of the largest in the field, drawing on more than 8,000 data points from organisations across sectors. The headline finding is direct. Excellent change management correlates with 80 percent of projects meeting or exceeding objectives. Poor change management correlates with 14 percent. The difference is not training spend alone. It is governance, communication clarity, stakeholder engagement, and explicit management of the human experience of change.

That gap, 80 percent against 14 percent, is the single largest swing in the literature. It dwarfs methodology choices, vendor selection, technical decisions. An SME conducting a second engagement should expect to allocate 30 to 40 percent of total programme cost to change management. Not as an add-on. As a core part of the programme. Most failed first engagements allocated single-digit percentages. The disparity is the explanation.
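To make the allocation concrete, here is a minimal sketch of the budget arithmetic. The 30 to 40 percent band comes from the guidance above; the programme cost and the 5 percent first-engagement figure are hypothetical illustrations, not benchmarks.

```python
# Sketch: change-management budget band for a second engagement.
# The 30-40 percent band is the guidance above; the 100,000 programme
# cost and 5 percent first-attempt allocation are made-up illustrations.

def change_management_band(total_cost: float) -> tuple[float, float]:
    """Return the (low, high) change-management allocation."""
    return (total_cost * 0.30, total_cost * 0.40)

low, high = change_management_band(100_000)
print(f"Recommended: {low:,.0f} to {high:,.0f} of a 100,000 programme")

# A failed first engagement that allocated, say, 5 percent spent 5,000:
first_attempt = 100_000 * 0.05
print(f"Typical first attempt: {first_attempt:,.0f}")
```

The point of the arithmetic is the gap: a single-digit-percentage allocation is a fraction of the band the Prosci data supports, which is usually the explanation for the failure, not an incidental detail.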

This is also the cleanest argument against the “we already did the change management” reflex. The first engagement may have included a kickoff session, a vendor demo, a help-desk channel. That is not change management in the Prosci sense. It is the introduction. Real change management is sustained, structured, and explicitly funded.

What does MIT NANDA add specifically about AI?

MIT NANDA’s analysis of 150 leadership interviews and 300 public AI deployments adds two findings that matter for the AI-specific case. First, AI deployments using vendor-purchased tools and external partnerships succeed about 67 percent of the time, while internal builds succeed at one-third the rate. The instinct to build something proprietary “so we own it” is statistically the worst-performing option. Second, success concentrates in deployments that picked a focused use case rather than attempting a broad transformation.

The implication is sharp. For an SME with a failed first engagement, going back to the build-something-internal route is statistically worse than partnering externally. The MIT data also undercuts the “we should consolidate vendors and pick one big platform” reflex. The 67 percent number is for purchased tools embedded with external partnership, not for one-size-fits-all platform plays.

The Slack Workforce Index from Salesforce adds the human-fit data. Daily AI users are 81 percent more satisfied with their jobs, 64 percent more productive, and 246 percent more likely to feel connected at work. When AI genuinely helps people do better work, adoption follows. When it does not, no amount of mandate or rollout will move the curve.

How should you read these together for your own decision?

Six findings, read together. Small projects succeed at around 90 percent. Large projects under 10 percent. Most stall causes are organisationally controllable. Excellent change management drives 80 percent project success against 14 percent for poor. Vendor-purchased and partnership-led AI deployments outperform internal builds three-to-one. Daily users of well-fit AI are dramatically more engaged. The 27 percent who succeed are not lucky. They are deliberately small, externally partnered, change-management-heavy, and adoption-focused.

The implication for a second engagement design is direct. Scope smaller than the first. Partner with an external vendor rather than building internally. Allocate 30 to 40 percent of programme cost to change management. Pick a use case where adoption-fit is observable, not just possible. Build governance and communication discipline into the structure from day one. None of these are guarantees. Together they shift the odds from the headline failure rate to something much more friendly.

The next post in the cluster is the counter-frame: when not to try a second time, covering the four conditions where retrying is the wrong move. A companion post on the four posture shifts translates the data here into the structural decisions that go into the second engagement’s statement of work.

If you would like to walk through how this data applies to a specific stalled engagement, book a conversation.

Sources

- OpenCommons: Standish Group CHAOS Report data on success / challenge / fail rates, project size effect, historical trend. https://opencommons.org/CHAOS_Report_on_IT_Project_Outcomes
- McKinsey 2025: how to restart your stalled digital transformation. 70 percent stall, 60 percent controllable, intervention patterns, financial-model rigour. https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/how-to-restart-your-stalled-digital-transformation
- AndChange / Prosci: 80 percent vs 14 percent project success with excellent vs poor change management. https://www.andchange.com/why-project-success-demands-integrated-change-management/
- Fortune reporting on MIT NANDA: vendor-partnership 67 percent success, internal builds at one-third the rate. https://fortune.com/2025/08/18/mit-report-95-percent-generative-ai-pilots-at-companies-failing-cfo/
- Salesforce / Slack Workforce Index 2025: daily AI users' satisfaction, productivity, connection differentials. https://www.salesforce.com/news/stories/daily-ai-workforce-use-growth/
- ACM Queue 2025: enterprise AI governance and rollout discipline. https://queue.acm.org/detail.cfm?id=3687999

Frequently asked questions

Do failed IT projects ever recover successfully?

Yes, with conditions. The Standish Group CHAOS Report shows that recovery typically requires reducing scope (delivering 60 to 70 percent of originally planned scope rather than 100 percent), extending timelines, and investing more rigorously in change management, communication, and testing. Small projects recover at around 90 percent success. Large projects recover at under 10 percent. The size variable matters more than the methodology variable.

What does McKinsey say about stalled digital transformations?

More than 70 percent of organisations report progress has slowed or stalled at some point. Critically, more than 60 percent of cited stall causes are factors the organisation can control in the near to medium term, contradicting the assumption that external pressures dominate. The most common stall causes are resourcing issues, lack of clarity on the digital strategy, and poor strategy quality from the outset.

How big is the change-management impact on success rates?

Prosci research drawing on more than 8,000 data points shows that excellent change management correlates with 80 percent of projects meeting or exceeding objectives. Poor change management correlates with 14 percent. The difference is governance, communication clarity, stakeholder engagement, and explicit management of the human experience of change. The work is what separates the 80 percent from the 14 percent.

Are AI deployments different from general IT projects?

Yes, in two specific ways. MIT NANDA's research shows AI deployments using vendor-purchased tools and external partnerships succeed about 67 percent of the time, while internal builds succeed at one-third the rate. The Slack Workforce Index shows daily AI users are 81 percent more satisfied and 64 percent more productive. The implication: buy external, partner closely, focus on adoption fit, and the data on AI is friendlier than the headline failure rates suggest.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
