Four questions to answer before you buy any AI tool or service

[Image: An owner at her kitchen table writing on a printed page of four questions, her closed laptop pushed aside and a mug of tea next to her, late afternoon light through the kitchen window]
TL;DR

Before any AI vendor conversation, an owner-operator should answer four questions in writing. What is the actual job, named in operational terms. Who will use the result, and what does their day look like with it. What does success look like in twelve weeks, named as something you could screenshot. And what is the realistic budget envelope, including integration, staff time and ongoing support. Answer these four, and the vendor conversation changes shape.

Key takeaways

- The most expensive AI buying mistakes happen before the first demo, because the owner started shopping before she finished framing what she actually needed.
- Question one: name the job in operational terms, not "we want to use AI". A specific baseline, a target outcome, and a timeline are the test of whether you have framed the job or only the wish.
- Question two: name the user. The same tool, pushed onto people who were not asked and measured on metrics the tool now distorts, fails reliably regardless of how good the technology is.
- Question three: name what success looks like in twelve weeks as a measurable change you could screenshot. RAND's 2025 work found 73 percent of failed AI projects had no agreed definition of success at the point of signing.
- Question four: name the full budget envelope. Glean's TCO research shows 30 to 40 percent first-year underestimation is typical, with hidden costs across data preparation, integration, training and ongoing support.

The owner of an 18-person firm has her third AI vendor demo of the week booked for Thursday afternoon. The first two were both impressive. They were also, she now realises, demonstrations of slightly different products solving slightly different problems, neither of which she had written down in advance. By the end of demo two she could feel the shape of the question shifting. Each vendor was, politely and competently, reframing what she needed so their product was the answer. That is not bad faith. It is what demos are designed to do.

The expensive mistake in AI buying for owner-operated businesses is almost never the wrong product. It is starting to shop before finishing the framing. Four questions, answered in writing before any vendor conversation, change the dynamic of every demo that follows. This post is those four questions and how to use them.

What are the four questions to answer before buying AI?

The four questions are the framing layer above any vendor conversation. What is the actual job, named in operational terms rather than as a wish. Who will use the result, and what does their day look like with it versus without it. What does success look like in twelve weeks, named as something you could screenshot. And what is the full budget envelope, including the bills that do not appear in headline pricing.

Answer those four first, and the rest of the buying cycle becomes a series of tests against fixed criteria rather than a series of demos that each reframe the question. The framing is cheap to do, and the cost of skipping it tends to land at the worst possible moment in the contract.

Why does it matter for your business?

It matters because the alternative is slow, expensive drift. RAND’s 2025 analysis of AI project failures found the majority had no agreed definition of success at the point of signing, and that leadership misalignment was the dominant cause of value not landing. For an owner-operated firm with no procurement team, the answer to “did we ask the right questions up front” is the answer to “did this project work”.

The four questions are also what make vendor conversations productive. A vendor who has read your written framing and pushed back on it constructively is showing you exactly how they will work after the contract is signed. A vendor who agrees to everything and steers the conversation back to their feature list is also showing you that. The framing is what lets you read the signal.

Where will you actually meet each question in practice?

You will meet them in four distinct moments, and the moments matter as much as the questions. Question one shows up when you sit down to write what frustrates you about a current process, without using the word “AI” anywhere in the description. If you cannot describe the job with a baseline and a target outcome, you have framed a wish, not a job.

A working answer to question one names the baseline (how long does it take now, how many people, what is the error rate) and the target outcome (what would meaningfully better look like, by when). If the only way you can describe it is “we want to use AI to do reconciliation faster”, you have not finished framing.

Question two shows up in the conversation with the people who would actually use the tool. Adoption failures in owner-operated firms are commonly not technical. They happen because the tool was specified by the owner, demoed to the owner, and signed off by the owner, then handed to the team on Monday morning with no involvement in the decision. Prosci’s ADKAR model breaks the individual change sequence into five stages: Awareness, Desire, Knowledge, Ability, Reinforcement. Staff who skip Desire because nobody asked them reliably stall at Knowledge, regardless of how good the product is.

Question three shows up when you try to write down what twelve weeks of using the new tool would actually change. A screenshot test is a useful discipline here. If the answer is “monthly reconciliation goes from three days to one with fewer than three errors”, you can screenshot that in a reporting tool at the twelve-week mark and the answer is clear. If the answer is “the team feels more confident with AI”, you cannot, and the project will quietly run on without ever being called.

Question four shows up across the full buying cycle, not just at the headline pricing line. Glean’s 2025 work on total cost of ownership for AI tools found that 30 to 40 percent first-year underestimation is typical, driven by costs vendors are unlikely to lead with: integration into existing systems, data preparation time, staff training, ongoing usage charges that scale with volume, and the change-management work that determines whether the tool gets used at all. For an owner-operated firm a 35 percent overrun on a £30,000 commitment is the difference between a sound investment and a regretted one.

When should you stop and answer these versus push on with a vendor conversation?

Stop and answer them whenever you cannot already write all four in two pages from memory. The cost of pausing for a week to do the framing is one cancelled demo and an hour of internal conversation. The cost of skipping it and signing the wrong contract is twelve to twenty-four months of work that needs unwinding, plus the firm’s appetite for trying again afterwards.

The exception is when you have already answered the four questions for a similar buying decision recently and the framing genuinely transfers. A second per-seat SaaS purchase in the same workflow can reasonably borrow the framing from the first one. A move from per-seat to usage-based pricing, or from a SaaS tool to a consultant-led build, cannot. The shape of the buying decision has changed enough that the old answers will mislead.

The other moment to pause is when a vendor pushes back hard on your written framing and you find yourself agreeing instinctively. Put the call on hold for forty-eight hours and check with the team whether the vendor’s reframe matches reality, or whether you are being talked out of a question that was actually right.

The four-questions framing sits underneath every other decision in an AI buying cycle, and two related pieces in this catalogue extend it directly. Briefing AI like a contractor covers what to do once a vendor is chosen and the work needs to be scoped concretely. The sibling decision-guide on choosing between a consultant, an agency and a SaaS tool covers the path choice that the four answers usually point you towards.

The cluster’s pillar piece on buying AI for owner-operated businesses places the four questions inside the wider discipline of small-firm vendor management. The discipline is the same shape an enterprise procurement team uses, scaled down to two or three people making the call across a few weeks rather than a few quarters.

If you are looking at three open AI vendor conversations on your desk, the next move is not to book a fourth. It is to spend an hour writing the four answers in plain English and sending them to whoever else has to live with the decision. The conversations that follow will be different in shape and shorter in length. Book a conversation if you want a second pair of eyes on the framing before any of those demos start.

Sources

- RAND Corporation (2025). The Root Causes of Failure for Artificial Intelligence Projects, the analysis behind the widely cited figure that more than 80 percent of AI projects fail to deliver intended value. https://www.rand.org/pubs/research_reports/RRA2680-1.html
- Christensen, Hall, Dillon and Duncan (2016). Know your customers' jobs to be done, Harvard Business Review, the foundational text for naming the job rather than the solution. https://hbr.org/2016/09/know-your-customers-jobs-to-be-done
- MIT Sloan Management Review (2024). The human side of AI adoption, lessons from the field, on why adoption stalls when AI does not meet people inside the systems they already use. https://sloanreview.mit.edu/article/the-human-side-of-ai-adoption-lessons-from-the-field/
- Prosci. The ADKAR change model, the practical framework for the individual change process staff have to complete for any new tool to land. https://www.prosci.com/methodology/adkar
- McKinsey (2025). The state of AI, the survey finding that companies seeing the most value set growth or innovation as explicit objectives alongside efficiency. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- Glean (2025). How to budget for the total cost of ownership of AI solutions, the underpinning research for the 30 to 40 percent first-year underestimation figure. https://www.glean.com/perspectives/how-to-budget-for-the-total-cost-of-ownership-of-ai-solutions
- Andreessen Horowitz (2020). The new business of AI and how it is different from traditional software, on why AI gross margins sit at 50 to 60 percent rather than the SaaS 60 to 80 percent and what that means for per-transaction cost over time. https://a16z.com/the-new-business-of-ai-and-how-its-different-from-traditional-software/
- Bain and Company (2024). Generative AI virtually ubiquitous in global business as the technology spreads at near-unprecedented rate, the survey behind "75 percent meeting or exceeding expectations" alongside the variation in actual outcomes. https://www.bain.com/about/media-center/press-releases/2024/generative-ai-virtually-ubiquitous-in-global-business-as-the-technology-spreads-at-a-near-unprecedented-rate--bain--company-proprietary-survey/
- UK Government (2021). Guidelines for AI procurement, the public-sector reference for budgeting data residency, security and compliance elements upfront. https://assets.publishing.service.gov.uk/media/60b356228fa8f5489723d170/Guidelines_for_AI_procurement.pdf
- Civil Resolution Tribunal of British Columbia (2024). Moffatt v Air Canada, the chatbot ruling that made vendor liability concrete for owners deploying customer-facing AI. https://www.litigate.com/whose-responsibility-is-it-anyway-chatbots-and-legal-issues-in-moffatt-v-air-canada/pdf

Frequently asked questions

Why answer these in writing before the first demo?

Because every vendor demo subtly reframes the problem to fit the product. That is not bad faith on the vendor's part; it is what demos are designed to do. If you have written down the job, the user, the success metric and the budget envelope before the call, the demo serves your framing rather than replacing it. You can ask "can your tool do this specific thing for these specific people inside this budget" and the conversation gets concrete in five minutes rather than five weeks.

What if I genuinely do not know the answer to question three yet?

Then the answer to "should I buy now" is probably no, or at least not yet. If you cannot describe what success looks like in twelve weeks in terms you could screenshot, you are still in the framing phase and the right next move is a short internal conversation with the team who would use the tool, not another vendor demo. Buying without an agreed success metric is how firms end up six months in with a tool nobody uses and no way to call the question.

Should the answers be perfect before I talk to anyone?

No. They should be honest. A two-page answer to the four questions that names what you actually know and what you genuinely do not is more useful than a polished version that pretends to certainty you do not have. Vendors worth working with respond well to "here is where we are clear, here is where we still have questions, can your tool help us close the gap". Vendors who push back on that framing are telling you something important about how they will work after the contract is signed.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
