Picture a partner I’ll call David. He is approaching the twelve-month renewal of an AI tool, and he knows he will just sign, because there is no structured way to make the call. The proposal had said 2x ROI; the actual number is 1.4x. The team uses the tool. They complain about it less than they did six months ago. They would probably keep using it if it stayed available. None of that adds up to a clear signal about whether the firm should renew, expand, contract, or kill.
This is the reality at twelve months for most SMEs that have bought AI. The decision drifts because there is no event that forces it. A structured review changes the shape of the decision. The discipline is not new; it is borrowed from project-management practice and adapted to the AI context.
What is the post-implementation review discipline?
The post-implementation review (PIR) is a standard discipline taught by PMI, APM, and the various Agile and Scrum schools. The PIR asks four core questions of any technology project. What was the plan? What actually happened? Why was there a variance? What should we do now?
Applied to an AI deployment, the four questions become concrete. What ROI was projected at proposal stage, with what adoption assumption and financial model? What ROI was actually delivered, with what adoption rate and what financial impact? What factors explain the gap between the projected and actual figures? And, given that gap, what is the right go-forward decision?
The discipline matters because it forces the firm to confront the variance rather than absorb it. A typical SME at month twelve has a vague sense that the AI is “working pretty well” without having held the actual outcome up against the original plan. The PIR forces that comparison and the conversation that follows.
What does the EOS sunk-cost check actually ask?
The Entrepreneurial Operating System, widely adopted by owner-led SMEs, includes an annual operating review pattern that contains a useful discipline. EOS asks of any continuing investment: are we continuing this because it is delivering, or because we already paid for it? The reformulation that cuts cleanest in technology investment decisions is direct. “If we had not bought this, and we were making the decision today with today’s knowledge, would we buy it?”
That single question prevents most of the renewal inertia that traps SMEs in mediocre AI investments. If the answer is no, the investment should be killed or significantly contracted, regardless of the sunk cost. If the answer is yes, the investment should continue or expand. The £30K already spent is not relevant to the next twelve months’ decision; only the £30K that will be spent over the next twelve months matters.
The reason this question is so important is that humans default to continuation. Sunk-cost reasoning (“we have already invested, we should see it through”) is the default path. The structured question forces a counter-default that protects the firm from compounding a poor decision.
What are the five review questions?
Adapted to an AI deployment, the five review questions are concrete. First, adoption. Did intended users actually use the tool, and at the rate expected? If adoption fell short, why? Second, quality. Did the AI produce output that met the quality bar set at proposal stage? If quality was lower, on which dimensions, and has the firm adapted to the tradeoff?
Third, hours. Did the measured hours-saved match the projection? If they did not, where did the gap come from: measurement error, quality issues that required rework, or adoption not reaching the level needed for the projected hours-saved? Fourth, financial. Did ROI reach the target set at proposal time? If not, what explains the gap, and is the tool producing financial value at all, even if lower than targeted?
Fifth, learning. What has the firm learned about itself, the tool, and the use case? This question is often the most useful, because it surfaces insights the firm can apply to the next AI investment regardless of what the renewal decision is.
The questions are designed to be answered with evidence. Numbers, dates, named decisions. Not impressions. The review event is the firm’s commitment to operate on evidence rather than recollection.
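To show the kind of arithmetic the hours and financial questions (three and four) call for, here is a minimal sketch. The user count, hours saved, and charge-out rate below are invented purely for illustration; only the 2x target, the 1.4x outcome, and the £30K annual cost echo the example in this article. Substitute the firm’s own figures.

```python
# Minimal sketch of the projected-versus-actual ROI comparison.
# Every input is a hypothetical placeholder, not a benchmark; only the
# 2x target, 1.4x outcome, and £30K annual cost echo the article's example.

def annual_roi(users, adoption_rate, hours_saved_per_user, hourly_rate, tool_cost):
    """ROI over twelve months: value of hours saved divided by tool cost."""
    value_delivered = users * adoption_rate * hours_saved_per_user * hourly_rate
    return value_delivered / tool_cost

# Proposal-stage assumptions (hypothetical): 20 users, full adoption,
# 50 hours saved per user per year, £60/hour, £30K annual tool cost.
projected = annual_roi(20, 1.0, 50, 60, 30_000)   # 2.0x

# Month-twelve measurements (hypothetical): adoption reached only 70%.
actual = annual_roi(20, 0.7, 50, 60, 30_000)      # 1.4x

print(f"Projected {projected:.1f}x, actual {actual:.1f}x, "
      f"gap {projected - actual:.1f}x to explain at review")
```

In this hypothetical the whole 0.6x gap traces to the adoption shortfall, which is exactly the kind of named factor the review should write down rather than leave as an impression.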
What are the four decisions?
The output of the review is a single decision in one of four categories. Continue. Existing scope, refreshed twelve-month criteria for the next review. The deployment is delivering acceptable value and there is no clear case to expand or narrow.
Expand. Add use cases or users beyond the current scope, with explicit ROI targets for the new work. The deployment has proven its value and the firm is now extending it deliberately rather than letting expansion happen by drift.
Contract. Narrow scope to the use cases where ROI is clear, wind down the rest. Some applications of the tool are working; others are not. The honest response is to keep what works and stop what does not, even if both were part of the original proposal.
Kill. End the deployment at month thirteen and evaluate alternatives. The tool has not delivered, the surrounding work has not produced the climb it was meant to, or the use case has turned out to be unsuitable. Killing the tool is not failure; it is honest accounting.
Each decision has a written rationale, named criteria for the next review, and a forward plan. The next renewal cannot drift back into gut feel because the previous review wrote down what success would look like in the next twelve months.
Who should own the review?
A small team. The operations director or partner who championed the original investment, because they know what was promised and what the firm expected. The finance manager, because they can speak to financial impact and the broader budget context. And an independent person, ideally an external advisor or a partner who was not involved in the original decision, who can challenge the internal team’s rationalisations.
The independent challenger is the role most often skipped, and it is the most important one. The person closest to the original decision has the strongest motivated reasoning to see the deployment as successful. The person furthest from the decision sees the gaps the insiders cannot. Without an independent voice, the review tends to drift toward “continue with minor adjustments” regardless of what the data says.
The written record makes the review accountable. Decision, rationale, evidence consulted, dissenting views, forward criteria. Stored where the next review can find it. The firm that does this consistently for two cycles develops genuine measurement discipline. The firm that runs it once and then forgets the discipline is back at gut feel by the next renewal.
If you are approaching a twelve-month renewal and you would like to set up the review event properly so the decision actually gets made on evidence, book a conversation.