Picture a founder I’ll call Mike. Ten minutes before a quarterly board meeting, he is at his desk rehearsing how to answer “the AI question” and realising he does not have a structured answer. The numbers are decent. The measurement is not airtight. The chair is sceptical of the AI investment generally. The CFO is impatient with anything that sounds vague. Mike has been here before with other technology spends, and the moment does not get easier with practice when the underlying frame is missing.
Boards do not ask trick questions about AI spend. The questions are predictable. They are drawn from the same patterns Bain has documented in board-papers research and that Vistage and EO peer groups discuss every month. Once the predictability is named, preparation becomes routine rather than panic.
What are the CFO’s three predictable questions?
The first question is almost always: what did it actually cost us? The CFO wants to know the cost figure was accurate and that no hidden costs landed after the proposal was signed. If the firm implemented an AI tool and discovered three months in that it required a full-time data engineer to maintain, the CFO will read that as a failure of cost discipline.
The hidden cost most often missed is the staff time tax. Senior hours absorbed by interviews, validation, IT integration, and change management workshops are real money. They were not on the consultant’s invoice. They are part of the cost the CFO needs to see. The honest cost figure is licence plus implementation plus the staff time tax plus ongoing maintenance, not just the licence.
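The cost arithmetic above can be sketched in a few lines. This is a minimal illustration, not a figure from any real deployment: the licence, implementation, hourly rate, and hours numbers below are all assumed for the example.

```python
# Illustrative honest-cost sketch. All figures are assumed examples,
# not measured numbers from any specific firm.
def honest_first_year_cost(licence, implementation, staff_hours,
                           blended_hourly_rate, annual_maintenance):
    """Licence + implementation + staff time tax + ongoing maintenance."""
    staff_time_tax = staff_hours * blended_hourly_rate  # senior hours absorbed by rollout
    return licence + implementation + staff_time_tax + annual_maintenance

# Example: £12K licence, £8K implementation, 120 senior hours at £75/hr, £3K maintenance
cost = honest_first_year_cost(12_000, 8_000, 120, 75, 3_000)
print(f"Honest first-year cost: £{cost:,.0f}")  # £32,000, not the £12K licence line
```

The point of the sketch is the gap between the last line and the licence figure: the staff time tax alone adds £9K that never appeared on an invoice.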
The second question is: what measurable business outcome has it delivered, and how was that measured? The CFO wants to see something concrete. “It might be helping” is not sufficient. The structured measurement (hours-saved through time-study, output quality through rubric, financial impact through margin analysis) is what the CFO needs. Methodology disclosure matters as much as the number itself.
The third question is the hardest and the most useful. Was this the best use of the capital? The £30K could have hired a part-time junior staff member for £20K to £25K annually for partial capacity. It could have funded a different software tool at £20K to £50K. It could have stayed as cash earning roughly £3K to £5K interest as opportunity cost. The board should understand the tradeoff explicitly rather than assume AI was the obvious right move.
What are the chair’s two predictable questions?
The chair tends to ask different questions from the CFO’s, because the chair’s role is durable strategy. The first chair question is almost always: does our team believe this investment is real and durable? The chair is asking whether the team is convinced. A divided team, or one that has lost confidence, is a signal the chair will read regardless of the financial metrics.
This is why the team’s actual experience matters at the board meeting, not just the headline numbers. If sixty percent of users have stopped using the tool by month four, the chair will read that as a problem even if the financial metrics still look acceptable. The behavioural signal speaks to durability in a way the financial signal cannot.
The second chair question is: what is our competitive position if we do or do not expand this? The chair cares whether AI is competitive necessity (in which case not investing is risky) or competitive nice-to-have (in which case ROI requirements should be higher). Are competitors using AI aggressively in our market? Are clients starting to expect it? The market dynamics shape what the right level of investment looks like, and the chair wants to see that the firm has thought about it.
Both chair questions are about whether the firm is being deliberate. Mike can answer them well if he has done the thinking. He cannot improvise good answers in the meeting if he has not.
What is the structured four-piece defence?
The defence is four short sections that map cleanly onto the questions the board is going to ask. What we measured, with the methodology disclosed. What we found, including the gaps where the actual numbers fell short of plan. What surprised us, where the deployment behaved differently than expected. What comes next, with the metrics and targets for the next twelve months. Each section runs three to five sentences, each concrete and grounded in evidence.
What we measured. “We measured adoption, output quality assessed via blinded rubric, hours-saved through two-week time-study, and financial impact through margin analysis. The measurement period was twelve months. Methodology has plus or minus 15 percent error bars.”
What we found. “Adoption reached 70 percent of eligible users. Output quality was equivalent to human output on relevance and slightly weaker on edge-case identification. Hours-saved averaged 6 hours per user per week. Net financial impact was £40,000 against a £30,000 investment, representing 1.3x ROI in year one.”
What surprised us. “We expected higher adoption but found training intensity needed to be higher than originally planned. We expected higher quality on edge cases but found systematic gaps; we have implemented a quality-assurance step to address this. The freed-up capacity has been absorbed primarily into work expansion rather than cost reduction, which we did not pre-plan.”
What comes next. “We plan to expand the tool to two additional use cases and to reach 85 percent adoption in the target user group by month eighteen. We will measure the same metrics quarterly. Our target is £75,000 cumulative financial impact by end of year two.”
The structure is honest, grounded, and forward-looking. A CFO or sceptical chair can engage with it. They can ask questions about methodology, they can challenge assumptions, they can probe the surprises. They have something concrete to interrogate. A defence based on assertion (“users love it”, “it must be making a difference”) gives them nothing to engage with and tends to harden scepticism.
What is the opportunity-cost frame for the third CFO question?
The opportunity-cost answer deserves specific preparation. The £30K could have hired a part-time junior staff member at £20K to £25K annually for partial capacity. It could have funded a different software tool at £20K to £50K. It could have funded targeted training for existing staff at £5K to £15K. Or it could have stayed as cash, earning roughly £3K to £5K in interest, the opportunity-cost baseline.
The honest comparison sometimes lands in favour of AI. The £30K probably delivered more value than earning interest on the cash, and arguably more than a different software tool. The honest comparison sometimes lands against AI. In some cases, the money should have been invested in staff training, process redesign, or basic systems infrastructure rather than a tool that competes for time the firm did not have to give it.
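The simplest version of that comparison is the cash baseline: did the spend beat holding the money? A minimal sketch, using the article’s illustrative figures (£30K invested, £40K net impact, interest forgone at the midpoint of the £3K to £5K range):

```python
# Baseline check: did the AI spend beat simply holding the cash?
# All figures are the article's illustrative numbers, not measured results.
capital = 30_000            # the AI investment
net_impact = 40_000         # net financial impact from the measurement
interest_forgone = 4_000    # midpoint of the £3K-£5K interest range

surplus_over_cash = net_impact - capital - interest_forgone
print(f"Surplus over the cash baseline: £{surplus_over_cash:,.0f}")  # £6,000
```

If the surplus were negative, the same three lines would make the case against the spend; either way, the board sees the tradeoff rather than an assumption.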
The board should hear the honest comparison either way. The firm that has thought through alternatives and concluded AI was the right call has a stronger defence than the firm that defaults to “AI was obviously the right move.” The honesty about alternatives is what makes the case credible.
Why does transparency about limitations help?
Rather than claiming precision the methodology cannot support (“our hours-saved measurement is accurate to within plus or minus 2 percent”), it is more credible to say: “our time-study methodology has an estimated margin of error of plus or minus 15 percent, and the magnitude of hours-saved is substantial enough that even with this margin of error, the investment has positive ROI.” The structure inverts the usual instinct: disclose the error first, then show the conclusion survives it.
Most boards expect measurement to be imperfect, particularly on a relatively new technology. Acknowledging the imperfection and showing the conclusion holds anyway is what builds trust. Overclaiming precision flags the speaker as either naive or dishonest, both of which damage credibility.
The discipline applies to all four sections of the defence. What we measured, with methodology disclosure. What we found, with error bars where relevant. What surprised us, including where we got it wrong. What comes next, with realistic ranges rather than single optimistic numbers.
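The “conclusion survives the error bars” claim is easy to demonstrate with a quick sensitivity check. Using the article’s illustrative figures (£40K impact against £30K invested, plus or minus 15 percent):

```python
# Sensitivity sketch: does the ROI conclusion survive a ±15% measurement error?
# Figures are the article's illustrative case, not real measurements.
def roi_range(measured_impact, investment, margin=0.15):
    """Return (worst-case, best-case) ROI multiples given a symmetric error margin."""
    low = measured_impact * (1 - margin) / investment
    high = measured_impact * (1 + margin) / investment
    return low, high

lo, hi = roi_range(40_000, 30_000)
print(f"ROI holds between {lo:.2f}x and {hi:.2f}x")  # 1.13x to 1.53x: positive either way
```

Presenting the range rather than the point estimate is exactly the realistic-ranges discipline the fourth section calls for.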
If you are facing a board meeting where the AI question is going to come up and you would like to think through the defence in advance, book a conversation.