Three things on the first call that should end the AI consultant conversation

TL;DR

The three biggest documented causes of AI engagement failure (data quality issues, organisational maturity gaps, use-case drift) all show up as detectable behaviours on the first sales call. A consultant who skips data readiness, treats change management as an afterthought, or resists scope documentation is signalling the failure mode that will play out twelve months later. Recognising the signal early saves both the engagement and the spend.

Key takeaways

- The RAND analysis of failed AI projects identifies three repeating failure patterns. Each pattern has a behavioural fingerprint detectable on the first sales call.
- Red flag one: no insistence on a data readiness assessment before model work. The "we'll work with whatever data is available" answer is the tell.
- Red flag two: change management treated as an afterthought, or as something that will happen naturally because the solution is valuable.
- Red flag three: resistance to written scope documentation. A soft answer when asked to commit deliverables, timeline, and assumptions to paper.
- These are not theoretical screens. They map directly to the three failure patterns RAND found in 65 documented enterprise AI initiatives.

A founder of a twenty-person company, ninety minutes into a discovery call with an AI consultant. The deck is sharp. The credentials are strong. The proposed approach sounds confident. Three small moments earlier in the call have not quite settled in her mind, and she is not sure whether to trust the discomfort. None of them was a red flag in the obvious sense. None of them was something the consultant said wrong. Each was something the consultant said too smoothly, and her instinct read it before her brain caught up.

That instinct is reading three documented failure patterns. The RAND Corporation’s 2025 meta-analysis of 65 enterprise AI initiatives found that 80.3 per cent of AI projects fail to deliver expected business value, and the three patterns behind those failures are surprisingly stable: data quality issues, organisational maturity gaps, and use-case drift. Each one of those patterns leaves a behavioural fingerprint on the first sales call. Once you know what you are listening for, the early-warning signal is louder than the polish.

Why the first call is where most failures begin

The first sales call is the only time the consultant is performing for you rather than working for you. After contracts are signed, the dynamic shifts. The questions you can ask before signing are stronger than the questions you can ask after, because before signing you can simply walk away. The consultants who pass the first call are not always the ones with the best deck. They are the ones who answer three specific kinds of questions in a specific way.

A consultant whose answers point at structural discipline is signalling the same discipline they will bring to delivery. A consultant whose answers point at flexibility, intuition, and “we’ll figure it out as we go” is signalling that scope, data quality, and adoption planning will be loose during the engagement too. Loose during sales becomes loose during delivery, every time.

Red flag one: no insistence on data readiness

The question to ask is direct. What data preparation and validation will you complete before starting model training? A consultant who has a clear, structured answer is signalling appropriate discipline. They will name what gets audited, how, against what dimensions, and how long it takes. A consultant who says they will “work with whatever data is available” or that data quality issues will be addressed iteratively during development is telling you they have not built a process for the most-cited cause of AI project failure.

The pattern matters because data quality is the single biggest reason AI projects fail. Models trained on dirty data learn the noise alongside the signal, and the resulting predictions are unreliable. The fix is not technical heroics during delivery. The fix is a data readiness assessment before model work begins, with documented quality issues, an estimate of remediation effort, and baseline metrics. A consultant who skips this stage is optimising for booking the engagement, not for delivering it.
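For readers who want to picture what "baseline metrics" can mean in practice, here is a minimal illustrative sketch. The field names, the sample rows, and the two metrics chosen (completeness and duplicate rate) are hypothetical examples for this post, not anything drawn from the cited research; a real data readiness assessment would cover more dimensions, such as validity, consistency, and timeliness.

```python
# Illustrative only: a toy baseline data-quality check of the kind a
# data readiness assessment might produce before model work begins.
# Field names and sample data are hypothetical.

def baseline_quality_metrics(records, required_fields):
    """Return completeness and duplicate-rate metrics for a list of dicts."""
    total = len(records)
    if total == 0:
        return {"completeness": 0.0, "duplicate_rate": 0.0}

    # Completeness: share of rows where every required field is populated.
    complete = sum(
        1 for r in records
        if all(r.get(f) not in (None, "") for f in required_fields)
    )

    # Duplicate rate: share of rows whose key fields repeat an earlier row.
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(r.get(f) for f in required_fields)
        if key in seen:
            duplicates += 1
        else:
            seen.add(key)

    return {
        "completeness": complete / total,
        "duplicate_rate": duplicates / total,
    }

rows = [
    {"customer_id": 1, "region": "UK"},
    {"customer_id": 2, "region": ""},    # incomplete row
    {"customer_id": 1, "region": "UK"},  # duplicate row
]
print(baseline_quality_metrics(rows, ["customer_id", "region"]))
```

Numbers like these, recorded before model work begins, are what turn "we'll work with whatever data is available" into a documented remediation estimate.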

If the answer to this question on the first call is vague, the engagement is at risk on the most predictable failure mode in the literature.

Red flag two: change management as an afterthought

The second question is also direct. How will you ensure that teams outside the initial pilot team adopt the AI solution after launch? A consultant who has a structured answer is naming stakeholders, planning communication touchpoints, identifying change champions in each team, and defining adoption metrics that will be measured at handoff. A consultant who deflects, or who suggests adoption will happen naturally because the solution is valuable, is neglecting one of the most heavily weighted success factors in the published research.

The mechanism behind this failure is well-known. A pilot team adopts the solution because they were involved in scoping it and have a stake in the outcome. Other departments do not see the value, lack urgency, and revert to existing workflows. Worse, the pilot team’s enthusiasm sometimes creates political dynamics where other departments resist on principle because they were not consulted. The technical implementation succeeds. The business outcome fails. The deck looks good and the dashboard does not move.

A consultant who treats change management as an afterthought has not addressed this. A consultant who builds it in from day one is signalling the kind of practice that produces real outcomes.

Red flag three: resistance to written scope

The third question takes the form of a request. Can you commit scope, timeline, and assumptions to writing, with clear deliverables, before we proceed? A consultant who readily produces this is demonstrating discipline. A consultant who resists written scope, who suggests scope will emerge as the engagement proceeds, or who treats the request as bureaucratic, is telling you that scope will drift and budget and timeline will overrun.

The failure mode here is named in the research as use-case drift. An engagement begins with a clear problem statement, like “improve sales forecast accuracy”, and ends with the consultant repositioning the engagement as an “AI platform evaluation”. Each scope expansion sounds defensible. Collectively they turn a tight engagement into a sprawling programme. The protection against this is upfront written scope and a documented change-request process. A consultant unwilling to commit to either has shown you their delivery pattern.

What a good consultant looks like under these questions

A consultant who passes these three on instinct is not a unicorn. The pattern is reproducible. They will not feel defensive about the questions, because they have answered them many times for serious buyers. They will probably welcome the questions, because the questions filter out buyers who do not understand what serious AI work requires. They will treat the request for written scope as standard practice, not as a confrontation.

If the first call does not produce that kind of conversation, the engagement has shown you the future. Three structural disciplines in the first ninety minutes is a low bar. A consultant who cannot clear it on instinct is not the consultant for the work.

If you are about to walk into a first call and want to know which questions to listen for in real time, book a conversation.

Sources

  • RAND Corporation (2025). Meta-analysis of AI project failure: 80.3 per cent failure rate and three named failure patterns.
  • Future Business Academy. UK SME AI failure rates: 73 per cent of UK SMEs fail to see meaningful AI ROI in year one, at an estimated £2.3 billion annual loss.
  • The Thinking Company. AI Transformation Partner Evaluation Framework: change management capability weighted at 15 per cent in critical-success-factor analysis.
  • Source Global Research (2025). The UK Consulting Market in 2025: analysis of UK consulting fee benchmarks, day rates, and category sizing.
  • Boston Consulting Group (2025). Are You Generating Value from AI? The Widening Gap: 60 per cent of firms report almost no material value from AI investment, the asymmetric-risk backdrop for consulting choice.
  • MIT NANDA (August 2025). 95 per cent of GenAI pilots fail to deliver ROI, with specification rather than technology cited as the primary failure cause.
  • ICAEW. Investment Appraisal: technical guidance for Chartered Accountants; UK reference for opportunity-cost framing in technology-investment decisions.
  • Consultancy.uk. UK consulting industry fees and rates: public reference for UK consulting day-rate ranges by tier.

Frequently asked questions

What's the simplest test of whether an AI consultant is going to deliver?

Ask three questions on the first call. First, what data preparation will you complete before model training? Second, how will you ensure teams outside the initial pilot adopt the solution? Third, can you commit scope, timeline, and assumptions to paper for me before we proceed? A consultant who answers all three concretely is signalling discipline. A consultant who deflects on any of the three is signalling the failure mode you will see in twelve months.

Why does data readiness matter so much?

Because most AI projects fail on data quality, not on technical capability. The RAND 2025 analysis names data quality issues as one of three repeating failure patterns. A consultant who plans to start model work on whatever data is available, with the intention of fixing problems iteratively, is accepting a known-high-risk path. The fix is upfront: a data audit, documented quality issues, an estimate of remediation effort, and baseline data quality metrics before model development begins.

What does good change management actually look like in an SME engagement?

A structured plan that identifies which teams will use the solution, names a change champion in each team, plans communication touchpoints across the engagement, and defines adoption metrics that get measured at handoff. It runs alongside technical delivery from day one. Without it, the technical solution succeeds and the business outcome fails.

Is it unreasonable to ask a consultant to commit scope to paper before signing?

No. It is the most basic protection against scope drift. A consultant who is comfortable putting deliverables, timeline, and assumptions in writing has done the thinking. A consultant who resists, or who suggests scope will emerge as the engagement proceeds, is telling you that scope will drift and that timeline and budget will overrun. Asking for written scope is not difficult or hostile. It is the baseline.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
