Why year-one AI ROI is structurally suspicious

TL;DR

Erik Brynjolfsson's J-curve work shows technology productivity gains lag investment, dipping while workers learn, then climbing. Year-one AI ROI is structurally noisy because the firm may still be in the dip. The benefit lands in months twelve to twenty-four, conditional on the complementary investments (training, process redesign, organisational change) the curve requires.

Key takeaways

- The J-curve is a foundational pattern in technology productivity research, not a vendor excuse.
- Year-one ROI sits on top of the dip-to-recovery phase. Reporting first-year ROI for a tool that follows a J-curve reports the cost without the benefit.
- The climb only happens with complementary investments in training, process redesign, and organisational change.
- The Standish CHAOS Report 2024-2025 finds the median overrun on projected benefits is roughly 30 percent. A 2x projection typically delivers 1.4x in the originally-stated timeframe.
- The MIT NANDA failure pattern: about 40 to 50 percent of SME AI deployments stall at the bottom of the dip because the surrounding work was never resourced.

Picture a founder I’ll call Tom. £4m turnover services firm, nine months into an AI rollout that touches three core processes. Adoption has climbed to roughly 60 percent of the eligible team. Financial impact is still flat. The proposal had said “2x ROI by month twelve.” The actual measurement says 0.9x. The renewal sits on Tom’s desk and he is staring at it. The honest options on the table feel like kill, push through, or quietly admit the original case was wrong.

There is a fourth option, and it depends on whether the firm has been doing the right surrounding work. The J-curve is the reason year-one ROI looks the way it does. Most founders feel ambushed at month nine, and the curve explains why.

What is the J-curve?

The J-curve hypothesis comes from decades of technology productivity research, much of it associated with Erik Brynjolfsson at MIT and Stanford. New tools typically follow a J-shape. There is an initial dip while workers learn. There is a recovery as workers reach competence. Then there is a climb where process redesign and complementary investments turn competence into above-baseline productivity.

The dip is real and measurable. New users are slower, and error rates are higher in the early weeks; the cognitive cost of learning is paid before the gain arrives. The pattern is consistent across major technology adoptions: it has been observed for spreadsheets, ERPs, CRMs, and now AI tools.

The J-curve is descriptive, not prescriptive. It tells the firm what shape to expect, not what to do.
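The shape itself can be sketched in a few lines of code. The numbers below are made up purely to show the three phases; the research describes the shape, not these specific values.

```python
# A purely illustrative sketch of the J-curve: monthly productivity
# relative to baseline (1.0). All values are invented for illustration.

def j_curve(month: int) -> float:
    if month <= 3:                        # dip: learning cost paid up front
        return 1.0 - 0.05 * month
    if month <= 9:                        # recovery: climbing back to baseline
        return 0.85 + 0.025 * (month - 3)
    return 1.0 + 0.04 * (month - 9)       # climb: given complementary investments

for m in (1, 3, 6, 12, 18, 24):
    print(f"month {m:2d}: {j_curve(m):.2f}x baseline")
```

The point of the sketch is the crossover: productivity does not pass baseline until around month nine, which is exactly where a firm like Tom's is measuring.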

Why are year-one ROI claims sitting on the dip?

Most SME AI deployments are still in the dip-to-recovery phase at month twelve. Adoption is climbing. Workers are reaching competence. Process redesign has barely started. Reporting one-year ROI for a tool that follows a J-curve is reporting the cost without the benefit, and the result is a number that looks worse than what the deployment will eventually deliver.

This explains a common pattern. Vendors and consultants project 2x ROI in year one, often in good faith, based on aggregated case studies that mostly draw on year-two and year-three data. Firms then measure year-one carefully and find 0.8x to 1.2x, which feels disappointing relative to the proposal but is consistent with sitting on the dip.

The Standish Group CHAOS Report 2024-2025 finds the median overrun on projected benefits across thousands of technology projects is roughly 30 percent. Firms expecting 2x typically achieve 1.4x in the originally-stated timeframe, often climbing further if given another year and the right surrounding work. The pattern is structural and persistent, not a sign that the technology has failed.
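The arithmetic behind that discount is simple enough to sketch. The 30 percent figure is the Standish median; the projected multiples below are hypothetical examples, not benchmarks.

```python
# Toy illustration of the benefit-overrun arithmetic described above.
# MEDIAN_BENEFIT_OVERRUN is the Standish CHAOS median; the projections
# in the loop are hypothetical.

MEDIAN_BENEFIT_OVERRUN = 0.30  # projected benefits miss by ~30% at median

def realistic_roi(projected_roi: float,
                  overrun: float = MEDIAN_BENEFIT_OVERRUN) -> float:
    """Discount a projected ROI multiple by the median benefit overrun."""
    return projected_roi * (1 - overrun)

for projected in (1.5, 2.0, 3.0):
    print(f"projected {projected:.1f}x -> roughly {realistic_roi(projected):.2f}x in-window")
```

A 2x projection discounted by the median overrun lands at 1.4x, which is the figure quoted above.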

A credible AI proposal frames year-one as transition cost and year-two as the period when ROI is fairly assessed. A proposal that promises 2x in year one is selling the climb on the schedule of the dip.

What complementary investments turn the curve upward?

Brynjolfsson’s research is unambiguous on the conditions that produce the climb. Three complementary investments matter most: training, so workers move past basic competence into fluent use (typically 20 to 40 hours per user, spread over months rather than concentrated at onboarding); process redesign, so the workflow takes advantage of the AI’s strengths rather than working around them; and organisational change, so roles, accountabilities, and governance shift to match the new capability.

A team using AI inside an unchanged process gets compounded friction rather than compounded productivity. The same logic applies to training and to organisational change. The investments have to land together for the curve to climb; partial investment produces a partial climb, sometimes none at all.

Firms that make these investments see the climb. Firms that do not make them see the J-curve flatten at the recovery point. The tool reaches competence but never moves above baseline, and year-two and year-three ROI look much like year one, because the conditions for the climb were never built.

This is the most under-appreciated finding in the AI ROI literature. The technology determines whether the climb is possible. The surrounding investments determine whether the climb actually happens.

What does the data say?

The MIT NANDA failure analysis provides the matching pattern from the failure side. Across multiple studies of large technology implementations, the most common cause of project failure is organisational unreadiness: systems infrastructure is insufficient, processes are not aligned with the technology, or staff training is inadequate. The firm deploys the tool with a subset of users, measures disappointing productivity gains, and either abandons the tool or keeps it as an experiment that never scales.

This pattern is visible in roughly 40 to 50 percent of SME AI deployments. The tool is bought, piloted, and never embedded. The curve is real. The firm never walks it because the conditions were never created.

A realistic figure for cumulative ROI on a competently-implemented SME deployment, with the surrounding work done properly, is 1.5x to 2.5x over two years. Year-one shows 1.2x to 1.8x. Year-two shows the climb. That is not the headline benchmark vendors quote, but it is what the research consistently finds when the methodology is honest.

What is the honest framing for an SME at month nine?

The dip is real. The number Tom is looking at is not wrong. The question is not whether to renew based on the year-one figure. The question is whether the surrounding work has been done well enough that the climb is coming.

Three diagnostic questions help.

- Has the team moved past basic competence into fluent use? If yes, training has been done.
- Has the workflow been redesigned around the AI’s strengths, or are the team using the AI inside the old process? If the workflow has changed, process redesign is in place.
- Have roles, accountabilities, and governance shifted to reflect the new capability? If yes, organisational change is in place.

If the answer to all three is yes, the curve is set up to climb and the renewal is a confident continue with re-measurement at month eighteen. If the answer is no on any of them, the surrounding work is the issue. Killing the tool will not fix the absence of the surrounding work; the next AI deployment will hit the same dip and the same flat outcome.
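For anyone who prefers the checklist in code form, here is a toy sketch of the three-question diagnostic. The field names and decision wording are illustrative, not a formal method.

```python
# Toy sketch of the three-question renewal diagnostic. Field names and
# the decision strings are hypothetical, chosen for this illustration.

from dataclasses import dataclass

@dataclass
class Deployment:
    fluent_use: bool           # team past basic competence into fluent use?
    workflow_redesigned: bool  # process rebuilt around the AI's strengths?
    org_change_done: bool      # roles, accountabilities, governance shifted?

def renewal_call(d: Deployment) -> str:
    """Return a renewal recommendation based on the three diagnostics."""
    gaps = [name for name, ok in [
        ("training", d.fluent_use),
        ("process redesign", d.workflow_redesigned),
        ("organisational change", d.org_change_done),
    ] if not ok]
    if not gaps:
        return "continue; re-measure at month eighteen"
    return "fix surrounding work first: " + ", ".join(gaps)

print(renewal_call(Deployment(True, True, True)))
print(renewal_call(Deployment(True, False, False)))
```

The second call reports which surrounding investments are missing, which is the conversation to have before any renewal decision.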

The renewal decision is real, and the J-curve gives Tom a structured way to make it. The number tells him where on the curve he is. The diagnostic questions tell him whether the curve is set up to climb.

If you are looking at a flat year-one ROI number and trying to work out whether to renew, push through, or kill, book a conversation and we’ll work through the diagnostic together.

Sources

  • Erik Brynjolfsson at MIT and Stanford: technology productivity J-curve research, showing initial dip during learning, recovery as workers reach competence, climb when process redesign and complementary investments compound. Source.
  • Brynjolfsson research on complementary investments: training (20-40 hours per user spread over months), process redesign, and organisational change as the conditions that turn the recovery into the climb. Source.
  • Standish Group CHAOS Report 2024-2025: median overrun on projected benefits across thousands of technology projects approximately 30%, firms expecting 2x typically achieving 1.4x in originally-stated timeframe. Source.
  • MIT NANDA failure analysis: roughly 40-50% of SME AI deployments never embed past pilot, primarily due to organisational unreadiness rather than technology failure. Source.
  • Realistic cumulative ROI benchmark for competently-implemented SME AI deployment: 1.5x-2.5x over two years (year-one 1.2x-1.8x sitting on the dip, year-two showing the climb where complementary investments are made).
  • McKinsey & Company (2025). The State of AI Global Survey. 88 per cent of organisations now use AI in at least one function but only 39 per cent report enterprise-level EBIT impact, the measurement gap that maturity frameworks address. Source.
  • McKinsey & Company (2024). From Promise to Impact, How Companies Can Measure and Realise the Full Value of AI. Five-layer measurement framework spanning technical performance, adoption, operational KPIs, strategic outcomes, financial impact. Source.
  • MIT CISR (Woerner, Sebastian, Weill and Kaganer, 2025). Grow Enterprise AI Maturity for Bottom-Line Impact. Stage 3 enterprises achieve growth 11.3 percentage points and profit 8.7 percentage points above industry average; Stage 1 firms underperform on both. Source.

Frequently asked questions

What is the Brynjolfsson J-curve in AI productivity?

The J-curve is the pattern Erik Brynjolfsson and colleagues documented across decades of technology productivity research. New tools cause an initial productivity dip while workers learn, then a recovery, then a climb above baseline. The climb requires complementary investments in training, process redesign, and organisational change.

Why are year-one AI ROI numbers misleading?

Because most SME deployments are still in the dip-to-recovery phase at month twelve. The full climb arrives in months twelve to twenty-four, sometimes later. A year-one ROI figure for a tool that follows a J-curve is reporting the cost without the benefit.

What complementary investments does the J-curve require?

Training (so workers reach competence faster), process redesign (so workflows take advantage of the AI's strengths rather than working around them), and organisational change (so accountability and governance shift). Without these, the curve flattens at competence and never climbs.

What if my AI deployment is at month nine and ROI is still flat?

Two questions to ask. Has the firm made the surrounding investments in training, process redesign, and governance? If yes, push through and re-measure at month eighteen. If no, the surrounding work is the issue, not the tool. Killing the tool will not fix the absence of the surrounding work.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
