The dashboard says nobody has logged in for six weeks. The seat licences renew at the end of the month. The owner of a 24-person services firm is staring at a quietly accumulating monthly charge for an AI tool that, six months ago, the team was excited about. She is starting to wonder whether she bought the wrong tool, and the conclusion that comes next is almost always the same: find a better tool.
Most of the time, that conclusion is wrong. The data says the failure mode is usually the training, and a different tool would inherit the same gap. This piece is for the owner staring at the quiet dashboard and blaming the technology, while the more useful diagnosis sits somewhere else.
The data is consistent across both sides of the rollout. Two in three employees who use AI tools describe the ones their employer supplies as partial, ineffective, or insufficient. Eighty-eight percent of owners want more training and resources for AI implementation. The supply side and the demand side are pointing at the same gap. The label on the gap is training, used in a specific sense.
What does the data actually say about why AI tools sit unused?
The most useful single finding comes from Fyxer’s 2026 research on AI in the workplace. Two in three employees who use AI describe the tools their employer supplies as “partial, ineffective, or insufficient.” Fyxer’s framing is sharp: the gap they see is a training gap rather than a product deficiency. Employees who try a tool briefly, get mediocre results, and return to the methods they already know never learn the techniques that make the tool actually work.
That framing changes the diagnosis. The same tool, used by the same person, can be partial-ineffective-insufficient one quarter and indispensable the next, depending on what happened in the gap between the two. What happens in that gap is training, in the practical, structured-practice sense.
The Goldman Sachs surveys triangulate the same point from the buyer side. Eighty-eight percent of small business owners say they want more training and resources to implement AI successfully, and 73 percent say additional access to training and implementation resources would help. The signal from owners is consistent with the signal from employees. The thing being asked for is structured help with adoption.
What does “training” actually mean here?
Most owners hear “training” and think of a one-hour Zoom session run by the vendor. That is closer to an introduction than to training. The training that closes the gap in AI rollouts looks more like the training a new staff member gets in their first three months than the training a software tool gets in its first three hours.
Practically, that means structured practice on real work: a handful of concrete, weekly tasks the team is actually doing, with someone in the firm who has worked out how to do those tasks well with the tool and is now showing the rest. That role is closer to a senior colleague mentoring a junior on a craft than to a vendor demo.
It also means feedback. The team needs to be able to say “this didn’t work, here’s what I tried, here’s what came back, what would you do?” and have someone competent answer. Without that loop, the team learns nothing from the rough edges, and the tool quietly becomes another piece of software that “doesn’t really work for us”.
Why does the better-tool instinct make things worse?
Three reasons. First, the new tool inherits the same training gap, because the gap was never about the tool. Second, switching tools resets whatever competence the team had built up, and competence is the thing that was actually missing. Third, the owner’s confidence in the team’s ability to adopt new software takes a hit when the second rollout also stalls.
The “buy a different tool” instinct is also expensive in a less visible way. Every tool change forces a re-examination of integrations, security, governance, and onboarding, none of which the owner wanted to be doing again. The investment that would have closed the original gap, an extra hundred hours of structured practice with the existing tool, is much smaller than the cost of swapping.
This is also why most AI vendor demos look so good. Demos compress structured practice into thirty minutes with someone who already knows the tool. The output looks impressive. What the demo cannot show is the eight weeks of practice required for the team to produce that output without the demo person in the room.
What does an audit of the gap look like?
Before swapping the tool, do a short audit. Three questions, in this order. Who got trained. On what specifically. What was their definition of “using it well” at the end. The whole audit takes an afternoon. The answers determine whether the gap is in the tool or in the rollout.
The first question often returns “two senior people got an hour with the vendor.” An hour with the vendor is closer to an introduction than to training. The second question often returns vague answers, because the training was not pinned to specific tasks. The third question often returns nothing at all, because no one had defined what good adoption looked like, so no one could measure whether it had landed.
If those three answers come back vague, the gap sits at the training-programme layer. The next move is to design a small, specific programme around two or three actual tasks and run it for six weeks before deciding anything else. If the answers come back specific, with named owners and concrete success criteria, then the tool has had a fair test, and a different tool might genuinely be the next move.
What does good training in a 20 to 50 person services firm look like?
Light enough to fit alongside the work. The version I have seen work most often is one hour a week for six weeks, with one internal champion working through real tasks with two or three colleagues at a time. The tasks are the team’s own work. The output is the team’s actual deliverable. Success means the team can do the task alone with the tool by week six.
That programme costs almost nothing in software terms and a noticeable amount in the champion’s time. The investment is mostly attention, which is why owners often skip it. The signal that it has worked is not enthusiasm in week two; it is the dashboard showing routine, regular usage in week eight, after the formal sessions have stopped.
If the dashboard is still empty after such a programme, the diagnosis changes. Either the wrong tasks were chosen, the wrong champion was chosen, or the tool genuinely does not fit the work. The diagnosis is now informed, and the next decision is much sharper.
If your dashboard is empty and you’ve been wondering whether to swap the tool, sit with the three audit questions for an afternoon before doing anything else. Most of the time the answer is to design the training that wasn’t designed the first time, run it for six weeks, and then decide. Sometimes the tool does need to change. The audit makes that call honest.
If you’d like to talk through what a six-week training programme looks like in your firm specifically, book a conversation.