Picture a founder I’ll call Tom. £4m turnover services firm, nine months into an AI rollout that touches three core processes. Adoption has climbed to roughly 60 percent of the eligible team. Financial impact is still flat. The proposal had said “2x ROI by month twelve.” The actual measurement says 0.9x. The renewal sits on Tom’s desk and he is staring at it. The honest options on the table feel like kill, push through, or quietly admit the original case was wrong.
There is a fourth option, and it depends on whether the firm has been doing the right surrounding work. The J-curve is the reason year-one ROI looks the way it does. Most founders feel ambushed at month nine, and the curve explains why.
What is the J-curve?
The J-curve hypothesis comes from decades of technology productivity research, much of it associated with Erik Brynjolfsson at MIT and Stanford. New tools typically follow a J-shape. There is an initial dip while workers learn. There is a recovery as workers reach competence. Then there is a climb where process redesign and complementary investments turn competence into above-baseline productivity.
The dip is real and measurable. New users are slower. Errors are higher in the early weeks. The cognitive cost of learning is paid before the gain is paid out. The pattern is consistent across major technology adoptions: it has been observed for spreadsheets, ERPs, CRMs, and now AI tools.
The J-curve is descriptive, not prescriptive. It tells the firm what shape to expect, not what to do.
Why are year-one ROI claims sitting on the dip?
Most SME AI deployments are still in the dip-to-recovery phase at month twelve. Adoption is climbing. Workers are reaching competence. Process redesign has barely started. Reporting one-year ROI for a tool that follows a J-curve is reporting the cost without the benefit, and the result is a number that looks worse than what the deployment will eventually deliver.
This explains a common pattern. Vendors and consultants project 2x ROI in year one, often in good faith, based on aggregated case studies that mostly draw on year-two and year-three data. Firms then measure year one carefully and find 0.8x to 1.2x, which feels disappointing relative to the proposal but is consistent with sitting on the dip.
The Standish Group CHAOS Report 2024-2025 finds that, across thousands of technology projects, delivered benefits fall short of projections by roughly 30 percent at the median. A 30 percent shortfall on a 2x projection lands at roughly 1.4x, which is what firms expecting 2x typically achieve in the originally stated timeframe, often climbing further if given another year and the right surrounding work. The pattern is structural and persistent, not a sign that the technology has failed.
A credible AI proposal frames year one as transition cost and year two as the period when ROI is fairly assessed. A proposal that promises 2x in year one is selling the climb on the schedule of the dip.
What complementary investments turn the curve upward?
Brynjolfsson’s research is unambiguous on the conditions that produce the climb. Three complementary investments matter most: training, so workers move past basic competence into fluent use, typically 20 to 40 hours per user spread over months rather than concentrated at onboarding; process redesign, so the workflow takes advantage of the AI’s strengths rather than working around them; and organisational change, so roles, accountabilities, and governance shift to match the new capability.
A team using AI inside an unchanged process gets compounded friction rather than compounded productivity. The same logic applies to training and to organisational change. The investments have to land together for the curve to climb; partial investment produces a partial climb, sometimes none at all.
Firms that make these investments see the climb. Firms that do not make them see the J-curve flatten at the recovery point. The tool reaches competence but never moves above baseline. Year-two and year-three ROI looks similar to year one, because the surrounding conditions for the climb were never built.
This is the most under-appreciated finding in the AI ROI literature. The technology determines whether the climb is possible. The surrounding investments determine whether the climb actually happens.
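The shape is easier to see with numbers. The sketch below is a toy model, not data: the monthly figures, the flat monthly cost, and the month at which the complementary investments start paying off are all assumptions invented for illustration, not drawn from the research above or from any client engagement. It simply shows how the same tool produces a climb when the complements arrive and a flat line when they do not.

```python
# Toy model of the J-curve. Every number here is illustrative, not measured.
# "Cumulative ROI" is total benefit delivered so far divided by total spend so
# far, with spend assumed flat at one unit per month.

def cumulative_roi(monthly_benefit_multiples):
    """Cumulative benefit-to-cost ratio after each month.

    Each entry in monthly_benefit_multiples is that month's benefit expressed
    as a multiple of the (constant) monthly cost.
    """
    roi = []
    benefit = cost = 0.0
    for m in monthly_benefit_multiples:
        cost += 1.0
        benefit += m
        roi.append(benefit / cost)
    return roi

# Dip, recovery, then climb: training, process redesign, and organisational
# change are assumed to start paying off from month ten.
with_complements = [0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.1, 1.2,
                    1.5, 1.8, 2.1, 2.4, 2.6, 2.8, 3.0, 3.0, 3.0]

# Dip and recovery only: the tool reaches competence and flattens at baseline.
without_complements = [0.2, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0, 1.0, 1.0,
                       1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

for label, series in [("with complements", with_complements),
                      ("without complements", without_complements)]:
    roi = cumulative_roi(series)
    print(f"{label:>20}: month 9 = {roi[8]:.2f}x, month 18 = {roi[17]:.2f}x")
```

Run as written, the version with the complements sits well below break-even at month nine and climbs past it by month eighteen, while the version without flattens just under baseline. The specific values mean nothing; the divergence between the two trajectories is the point, and it is that divergence the diagnostic questions below are trying to predict.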
What does the data say?
The MIT NANDA failure analysis provides the matching pattern from the failure side. Across multiple studies of large technology implementations, the most common cause of project failure is organisational unreadiness. Systems infrastructure is insufficient, processes are not aligned with the technology, or staff training is inadequate. The firm deploys the tool with a subset of users, measures poor productivity improvements, and either abandons it or keeps it as an experimental tool that never scales.
This pattern is visible in roughly 40 to 50 percent of SME AI deployments. The tool is bought, piloted, and never embedded. The curve is real. The firm never walks it because the conditions were never created.
A realistic figure for cumulative ROI on a competently implemented SME deployment, with the surrounding work done properly, is 1.5x to 2.5x over two years. Year one shows 1.2x to 1.8x. Year two shows the climb. That is not the headline benchmark vendors quote, but it is what the research consistently finds when the methodology is honest.
What is the honest framing for an SME at month nine?
The dip is real. The number Tom is looking at is not wrong. The question is not whether to renew based on the year-one figure. The question is whether the surrounding work has been done well enough that the climb is coming.
Three diagnostic questions help. Has the team moved past basic competence into fluent use? If yes, training has been done. Has the workflow been redesigned around the AI’s strengths, or are the team using the AI inside the old process? If the workflow has changed, process redesign is in place. Have roles, accountabilities, and governance shifted to reflect the new capability? If yes, organisational change is in place.
If the answer to all three is yes, the curve is set up to climb and the renewal is a confident continue with re-measurement at month eighteen. If the answer is no on any of them, the surrounding work is the issue. Killing the tool will not fix the absence of the surrounding work; the next AI deployment will hit the same dip and the same flat outcome.
The renewal decision is real, and the J-curve gives Tom a structured way to make it. The number tells him where on the curve he is. The diagnostic questions tell him whether the curve is set up to climb.
If you are looking at a flat year-one ROI number and trying to work out whether to renew, push through, or kill, book a conversation and we’ll work through the diagnostic together.



