She left the demo last week with a slightly uneasy feeling she could not name. The slides were polished, the speaker was articulate, and her operations director liked the workflow diagrams. She accepted a follow-up anyway, because nothing the vendor said was obviously wrong. A draft proposal is now in her inbox. The unease has not gone away.
The unease is usually data. An owner who has run enough conversations with enough professionals has a calibrated sense for whether the person across the table actually knows what they are doing. With AI, the calibration is harder, because the vocabulary is new. The good news is that the signals are observable and the list is shorter than people expect. Five tells, applied in the first thirty minutes, filter the inexperienced vendors before anyone signs anything.
This is not vendor blacklisting, and it is not “all AI vendors are cowboys” framing. The AI vendor market in 2026 contains real expertise and visible inexperience side by side, often at firms with similar websites. What follows is a recognition skill for the buyer.
What is the cheapest filter for an AI vendor’s competence?
The cheapest filter is the specificity of their answer to a specific operational question. Ask “how does your system handle customer data containing variations the model has not seen during training” and listen. A credible answer names retrieval-augmented generation, a fine-tuning approach with data governance, or acknowledges that data drift is an ongoing concern requiring monitoring. A weak answer says “our advanced AI adapts automatically” and changes the subject.
This is the single most reliable tell. Princeton researchers Arvind Narayanan and Sayash Kapoor describe AI snake oil as systems that do not, and likely cannot, work as advertised, and the language pattern they document is what a non-technical owner can hear in a sales meeting. Vendors with real production experience have lost weekends to data quality, integration breakage, and model behaviour they did not predict. That experience produces specificity. Vendors without it default to abstraction. The gap is audible inside ten minutes.
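For the owner who wants to hear what “retrieval-augmented generation” actually means when a vendor says it, here is a deliberately toy sketch of the pattern, not any vendor’s implementation. Real systems use vector embeddings and a search index; this sketch substitutes simple word overlap, and the records and question are invented. The only thing to take from it is the shape: look up the relevant data first, then hand it to the model alongside the question.

```python
# Toy sketch of retrieval-augmented generation (RAG).
# Production systems use embeddings and a search index; word overlap
# stands in here so the example runs with no dependencies.

def score(query: str, document: str) -> int:
    """Count shared words between query and document (toy relevance score)."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k documents most relevant to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:top_k]

# The customer data the model never saw during training lives out here,
# and is fetched fresh on every question.
records = [
    "Order 1042 shipped on 3 March and was signed for by the customer.",
    "Order 1042 uses the express carrier and refunds take five working days.",
    "The returns policy allows exchanges within thirty days of delivery.",
]

question = "When did order 1042 ship?"
context = retrieve(question, records)

# In production this prompt goes to the foundation model; printing it
# shows what the model would actually receive.
prompt = "Answer using only these records:\n" + "\n".join(context) + "\nQuestion: " + question
print(prompt)
```

A vendor who can walk you through their version of this loop, where the records live, how retrieval is kept current, what happens when nothing relevant is found, is giving you exactly the specificity the first tell is listening for.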
Why does marketing language in place of technical detail matter?
It matters because credible vendors tell you which model they use, why, and what they did to it. A weak vendor will not. The major foundation model families in 2026 are the GPT models from OpenAI, Claude from Anthropic, Llama from Meta, the Mistral models from Mistral AI, and Gemini from Google. Each has different cost and reasoning characteristics. A serious vendor has a reason for the one they picked and can explain it in plain English.
A vague answer here is diagnostic. “We use our proprietary AI” or “we use the best available models” usually means the vendor is wrapping a public API and adding little engineering on top, while obscuring that fact. There is nothing wrong with wrapping a public API; many useful products do exactly that. There is something wrong with hiding it. The wrapper itself is the work, and a credible vendor describes the wrapper, the model choice, the trade-offs. A vendor who answers a technical question with marketing words has either not done the engineering or is not comfortable describing it.
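To make “the wrapper itself is the work” concrete, here is a hedged sketch of what sits around a public model API in a serious product. Every name in it is hypothetical, `call_model` in particular is a placeholder for whichever API the vendor actually uses, and the limits and retry counts are invented; the point is that input checks, retries, logging, and graceful fallbacks are real engineering a credible vendor can describe.

```python
import logging
import time

logger = logging.getLogger("assistant")

def call_model(prompt: str) -> str:
    """Placeholder for the public model API the product wraps (hypothetical)."""
    raise NotImplementedError("swap in the actual model API call")

def answer_customer(question: str, max_retries: int = 3) -> str:
    # Input checks: a public API accepts almost anything; a product must not.
    if not question.strip():
        return "Please enter a question."
    if len(question) > 2000:
        question = question[:2000]  # guard against runaway prompt costs

    # Retries with backoff: public APIs rate-limit and fail; the wrapper absorbs that.
    for attempt in range(max_retries):
        try:
            reply = call_model(question)
            logger.info("model call succeeded on attempt %d", attempt + 1)
            return reply
        except Exception:
            logger.warning("model call failed on attempt %d", attempt + 1)
            time.sleep(2 ** attempt)  # wait longer before each retry

    # Fallback: a production system degrades gracefully instead of erroring out.
    return "We could not answer that automatically; a person will follow up."
```

None of this is exotic, and none of it appears in a demo. It is exactly the engineering a vendor hiding behind “proprietary AI” has either skipped or cannot explain.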
Why do case studies sometimes not survive a follow-up question?
They do not survive because the case study was constructed for the deck rather than drawn from a real customer the vendor is happy for you to call. The follow-up is the cheapest verification step. “Can I speak to that client, fifteen minutes on the phone.” A credible vendor says yes and provides the contact within days. A weak vendor cites confidentiality, offers a testimonial, or names a contact who has moved on.
Confidentiality is sometimes genuine, and many strong vendors hold enterprise clients under NDA. The diagnostic is not one refusal; it is the pattern across three. Ask for three named references at firms similar to yours in size and sector. A credible vendor produces them. A weak vendor produces one and becomes elusive. When you speak to a reference, ask two questions: what did the vendor get wrong, and what would you have done differently. References rarely volunteer this; when asked directly, they usually answer.
What pricing tells reveal vendor inexperience?
The clearest tell is a quote that contains only the visible cost. Software licensing, API consumption, and compute show up. Data preparation, integration, testing, change management, and monitoring do not. Glean’s research on AI total cost of ownership finds licensing is a fraction of true first-year spend. A vendor whose quote omits the larger lines either does not understand the reality or is choosing not to surface it.
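The gap is easiest to see as arithmetic. The figures below are invented for illustration, not taken from Glean’s research or from any real quote; the point is the shape of the sum, a visible licensing line dwarfed by the lines that never appear.

```python
# Hypothetical first-year figures, invented for illustration only.
visible_quote = {
    "software licensing": 24_000,
    "API consumption": 9_000,
}
omitted_lines = {
    "data preparation": 30_000,
    "integration": 35_000,
    "testing": 12_000,
    "change management": 10_000,
    "monitoring": 8_000,
}

quoted = sum(visible_quote.values())
true_total = quoted + sum(omitted_lines.values())

print(f"Quoted first-year cost: £{quoted:,}")
print(f"True first-year cost:   £{true_total:,}")
print(f"Licensing as a share of the true cost: {visible_quote['software licensing'] / true_total:.0%}")
```

Run with these invented numbers, the quote covers barely a quarter of the first-year spend and licensing is under a fifth of it. Your numbers will differ; the structure of the question will not.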
Two questions catch this. What is your assumption about the state of our data on day one, and what is included in the quote if that assumption is wrong. A credible vendor names the assumption explicitly, “we have assumed your customer data is in one system, accessible by API, and reasonably clean,” and quotes a range for the case where it fails. A weak vendor responds in generalities or claims the system handles data integration automatically. The second answer is rarely true in production.
Watch also for usage-based pricing without caps. Unbounded usage pricing is fine in principle, but a serious vendor offers a cap, a usage projection, or a monthly review mechanism so the bill does not surprise you. A vendor who shrugs at the question of cost predictability has either not had the conversation with existing customers or has had it and lost.
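A back-of-envelope projection makes the cap question concrete. The per-query cost, the volumes, and the cap below are all hypothetical; what matters is how fast an uncapped line moves when usage doubles, and how cheap it is for a vendor to offer a cap if they have seen real usage before.

```python
# Hypothetical usage-based pricing, invented for illustration only.
cost_per_query = 0.04   # assumed per-interaction charge
monthly_cap = 1_500.00  # the kind of cap a serious vendor offers (hypothetical)

for queries_per_month in (10_000, 25_000, 50_000, 100_000):
    uncapped = queries_per_month * cost_per_query
    capped = min(uncapped, monthly_cap)
    print(f"{queries_per_month:>7,} queries: uncapped £{uncapped:>8,.2f}  capped £{capped:>8,.2f}")
```

The arithmetic is trivial. The tell is whether the vendor has done it with you before you sign.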
When should a “few weeks” timeline make you walk?
When the scope clearly exceeds a few weeks of integration work. Unframe’s practitioner research on AI agent deployment finds that enterprise deployments commonly take seven to twelve months, dominated not by model complexity but by the integration work of connecting the agent to the systems where data actually lives. An owner-managed SME with smaller scope might deploy faster, but the floor is set by integration, not by the vendor’s sales eagerness.
A credible vendor distinguishes a proof-of-concept (rapid, narrow, lower cost) from a production deployment (slower, broader, real money). They might say “we can have a working proof-of-concept in four weeks if your data is accessible by API, and a production deployment across your full scope in four to six months including integration, monitoring, and user acceptance testing.” That is a serious answer. A vendor who promises three weeks regardless of scope has either not deployed into a real production environment or is not telling you what they will quietly cut to hit the date. The Air Canada chatbot case, where a deployed assistant gave a customer incorrect bereavement-fare information and the airline was held liable, shows the kind of corner that gets cut when timelines compress and monitoring is dropped.
The five tells are a first-conversation filter, not a substitute for the twelve-question due diligence framework that comes later, and not a substitute for reference checks, contract review, or a properly priced total cost of ownership. They exist so an owner does not spend three hours of diligence on a vendor who would have failed at the first thirty minutes. The unease the buyer could not name at the start of this post is usually one of the five tells in disguise. Now it has a name, and a name makes it cheap to act on. If she wants to talk it through before responding to the proposal, she can book a conversation.