The founder of a twenty-person company, ninety minutes into a discovery call with an AI consultant. The deck is sharp. The credentials are strong. The proposed approach sounds confident. Three small moments earlier in the call have not quite settled in her mind, and she is not sure whether to trust the discomfort. None of them was a red flag in the obvious sense. None of them was a thing the consultant said wrong. Each was a thing the consultant said too smoothly, and her instinct read it before her brain caught up.
That instinct is reading three documented failure patterns. The RAND Corporation’s 2025 meta-analysis of 65 enterprise AI initiatives found that 80.3 per cent of AI projects fail to deliver expected business value, and the three patterns behind those failures are surprisingly stable: data quality issues, organisational maturity gaps, and use-case drift. Each one of those patterns leaves a behavioural fingerprint on the first sales call. Once you know what you are listening for, the early-warning signal is louder than the polish.
Why the first call is where most failures begin
The first sales call is the only time the consultant is performing for you rather than working for you. After contracts are signed, the dynamic shifts. The questions you can ask before signing are stronger than the questions you can ask after, because before signing you can simply walk away. The consultants who pass the first call are not always the ones with the best deck. They are the ones who answer three specific kinds of questions in a specific way.
A consultant whose answers point at structural discipline is signalling the same discipline they will bring to delivery. A consultant whose answers point at flexibility, intuition, and “we’ll figure it out as we go” is signalling that scope, data quality, and adoption planning will be loose during the engagement too. Loose during sales becomes loose during delivery, every time.
Red flag one: no insistence on data readiness
The question to ask is direct. What data preparation and validation will you complete before starting model training? A consultant who has a clear, structured answer is signalling appropriate discipline. They will name what gets audited, how, against what dimensions, and how long it takes. A consultant who says they will “work with whatever data is available” or that data quality issues will be addressed iteratively during development is telling you they have not built a process for the most-cited cause of AI project failure.
The pattern matters because data quality is the single biggest reason AI projects fail. Models trained on dirty data learn the noise alongside the signal, and the resulting predictions are unreliable. The fix is not technical heroics during delivery. The fix is a data readiness assessment before model work begins, with documented quality issues, an estimate of remediation effort, and baseline metrics. A consultant who skips this stage is optimising for booking the engagement, not for delivering it.
If the answer to this question on the first call is vague, the engagement is at risk on the most predictable failure mode in the literature.
Red flag two: change management as an afterthought
The second question is also direct. How will you ensure that teams outside the initial pilot team adopt the AI solution after launch? A consultant who has a structured answer is naming stakeholders, planning communication touchpoints, identifying change champions in each team, and defining adoption metrics that will be measured at handoff. A consultant who deflects, or who suggests adoption will happen naturally because the solution is valuable, is missing one of the top three success factors identified in the published research.
The mechanism behind this failure is well-known. A pilot team adopts the solution because they were involved in scoping it and have a stake in the outcome. Other departments do not see the value, lack urgency, and revert to existing workflows. Worse, the pilot team’s enthusiasm sometimes creates political dynamics where other departments resist on principle because they were not consulted. The technical implementation succeeds. The business outcome fails. The deck looks good and the dashboard does not move.
A consultant who treats change management as an afterthought has not addressed this. A consultant who builds it in from day one is signalling the kind of practice that produces real outcomes.
Red flag three: resistance to written scope
The third question takes the form of a request. Can you commit scope, timeline, and assumptions to writing, with clear deliverables, before we proceed? A consultant who readily produces this is demonstrating discipline. A consultant who resists written scope, who suggests it will emerge as the engagement proceeds, or who treats the request as bureaucratic, is telling you that scope will drift and that budget and timeline will overrun with it.
The failure mode here is named in the research as use-case drift. An engagement begins with a clear problem statement, like “improve sales forecast accuracy”, and ends with the consultant repositioning the engagement as an “AI platform evaluation”. Each scope expansion sounds defensible. Collectively they turn a tight engagement into a sprawling programme. The protection against this is upfront written scope and a documented change-request process. A consultant unwilling to commit to either has shown you their delivery pattern.
What a good consultant looks like under these questions
A consultant who passes all three of these questions is not a unicorn. The pattern is reproducible. They will not feel defensive about the questions, because they have answered them many times for serious buyers. They will probably welcome them, because the questions filter out buyers who do not understand what serious AI work requires. They will treat the request for written scope as standard practice, not as a confrontation.
If the first call does not produce that kind of conversation, the engagement has shown you the future. Three structural disciplines in the first ninety minutes is a low bar. A consultant who cannot clear it on instinct is not the consultant for the work.
If you are about to walk into a first call and want to know which questions to listen for in real time, book a conversation.