Why your AI tools are sitting unused

TL;DR

Two in three employees describe their employer's AI tools as partial, ineffective, or insufficient (Fyxer 2026), and 88 percent of owners say they want more training. The most common failure mode in stalled AI rollouts is the training, not the tool, and a replacement tool inherits the same gap.

Key takeaways

- Fyxer's 2026 research finds that two in three employees describe employer-supplied AI tools as "partial, ineffective, or insufficient", and frames the gap as a training gap rather than a product deficiency.
- 88 percent of small business owners say they want more training and resources to implement AI successfully (Goldman Sachs 2025). 73 percent say additional access to training and implementation resources would help. The supply and demand sides are pointing at the same gap.
- "Training" in this sense is structured practice on real work, with a named internal champion, over weeks. A one-hour Zoom session with the vendor is closer to an introduction.
- Before swapping the tool, audit three things: who got trained, on what specifically, and what their definition of "using it well" was. If those answers come back vague, the gap sits at the programme layer.
- The version of training that works in a 20 to 50 person services firm is one hour a week for six weeks with one internal champion. Success means the team can do real tasks alone with the tool by week six.

The dashboard says nobody has logged in for six weeks. The seat licences renew at the end of the month. The owner of a 24-person services firm is staring at a quietly accumulating monthly charge for an AI tool that, six months ago, the team was excited about. She is starting to wonder whether she bought the wrong tool, and the conclusion that comes next is almost always the same: find a better tool.

Most of the time, that conclusion is wrong. The data says the failure mode is usually the training, and a different tool would inherit the same gap. This piece is for the owner staring at the quiet dashboard who has been blaming the technology while the more useful diagnosis sits somewhere else.

The data is consistent across both sides of the rollout. Two in three employees who use AI tools describe the ones their employer supplies as partial, ineffective, or insufficient. 88 percent of owners want more training and resources for AI implementation. The supply side and the demand side are pointing at the same gap. The label on the gap is training, used in a specific sense.

What does the data actually say about why AI tools sit unused?

The most useful single finding comes from Fyxer’s 2026 research on AI in the workplace. Two in three employees who use AI describe the tools their employer supplies as “partial, ineffective, or insufficient.” Fyxer’s framing is sharp: the gap they see is a training gap rather than a product deficiency. Employees who try a tool briefly, get mediocre results, and return to the methods they already know never learn the techniques the tool needs to actually work.

That framing changes the diagnosis. The same tool, used by the same person, can be partial-ineffective-insufficient one quarter and indispensable the next, depending on what happened in the gap between the two. What happens in that gap is training, in the practical, structured-practice sense.

The Goldman Sachs surveys triangulate the same point from the buyer side. 88 percent of small business owners say they want more training and resources to implement AI successfully. 73 percent say additional access to training and implementation resources would help. The signal from owners is consistent with the signal from employees. The thing being asked for is structured help with adoption.

What does “training” actually mean here?

Most owners hear “training” and think of a one-hour Zoom session run by the vendor. That is closer to an introduction than to training. The training that closes the gap in AI rollouts looks more like the training a new staff member gets in their first three months than the training a software tool gets in its first three hours.

Practically, that means structured practice on real work. A handful of concrete, weekly tasks the team is actually doing, with someone in the firm who has worked out how to do those tasks well with the tool and is now showing the rest. That role is closer to a senior colleague mentoring a junior on a craft than a vendor demo.

It also means feedback. The team needs to be able to say “this didn’t work, here’s what I tried, here’s what came back, what would you do?” and have someone competent answer. Without that loop, the team learns nothing from the rough edges, and the tool quietly becomes another piece of software that “doesn’t really work for us”.

Why does the better-tool instinct make things worse?

Three reasons. First, the new tool inherits the same training gap, because the gap was never about the tool. Second, switching tools resets whatever competence the team had built up, and competence is the thing that was actually missing. Third, the owner’s confidence in the team’s ability to adopt new software takes a hit when the second rollout also stalls.

The “buy a different tool” instinct is also expensive in a less visible way. Every tool change forces a re-examination of integrations, security, governance, and onboarding, none of which the founder wanted to be doing again. The investment that would have closed the original gap, an extra hundred hours of structured practice with the existing tool, is much smaller than the cost of swapping.

This is also why most AI vendor demos look so good. Demos compress structured practice into thirty minutes with someone who already knows the tool. The output looks impressive. What the demo cannot show is the eight weeks of practice required for the team to produce that output without the demo person in the room.

What does an audit of the gap look like?

Before swapping the tool, do a short audit. Three questions, in this order. Who got trained. On what specifically. What was their definition of “using it well” at the end. The whole audit takes an afternoon. The answers determine whether the gap is in the tool or in the rollout.

The first question often returns “two senior people got an hour with the vendor.” An hour with the vendor is closer to an introduction than to training. The second question often returns vague answers, because the training was not pinned to specific tasks. The third question often returns nothing at all, because no one had defined what good adoption looked like, so no one could measure whether it had landed.

If those three answers come back vague, the gap sits at the training-programme layer. The next move is to design a small, specific programme around two or three actual tasks and run it for six weeks before deciding anything else. If the answers come back specific, with named owners and concrete success criteria, then the tool has had a fair test, and a different tool might genuinely be the next move.

What does good training in a 20 to 50 person services firm look like?

Light enough to fit alongside the work. The version I have seen work most often is one hour a week for six weeks, with one internal champion working through real tasks with two or three colleagues at a time. The tasks are the team’s own work. The output is the team’s actual deliverable. Success means the team can do the task alone with the tool by week six.

That programme costs almost nothing in software terms and a noticeable amount in the champion’s time. The investment is mostly attention, which is why owners often skip it. The signal that it has worked is not enthusiasm in week two; it is the dashboard showing routine, regular usage in week eight, after the formal sessions have stopped.

If the dashboard is still empty after such a programme, the diagnosis changes. Either the wrong tasks were chosen, the wrong champion was chosen, or the tool genuinely does not fit the work. The diagnosis is now informed, and the next decision is much sharper.

If your dashboard is empty and you’ve been wondering whether to swap the tool, sit with the three audit questions for an afternoon before doing anything else. Most of the time the answer is to design the training that wasn’t designed the first time, run it for six weeks, and then decide. Sometimes the tool does need to change. The audit makes that call honest.

If you’d like to talk through what a six-week training programme looks like in your firm specifically, book a conversation.

Sources

  • Fyxer (2026), research on AI in the workplace: two in three employees describe employer-supplied AI tools as "partial, ineffective, or insufficient", framed as a training gap rather than a product deficiency.
  • Goldman Sachs 10,000 Small Businesses Voices (October 2025): 88 percent of small business owners want more training and resources to implement AI successfully.
  • Goldman Sachs 10,000 Small Businesses Voices (2026): 73 percent of small business owners say additional access to training and implementation resources would help.
  • JPost (2025): insufficient training and onboarding cited as one of five recurring causes of SME AI implementation failure.
  • Effica/Novusbroker (2025): practitioner observation on the typical post-rollout pattern, "tools unused, the consultant's recommendations are collecting dust, and the business is running exactly the same way it was before".
  • McKinsey & Company (2025), The State of AI Global Survey: 88 percent of organisations now use AI in at least one function, but only 39 percent report enterprise-level EBIT impact.
  • Boston Consulting Group (2025), Are You Generating Value from AI? The Widening Gap: 5 percent of future-built firms achieve five times the revenue gains and three times the cost reductions of peers.
  • Standish Group, CHAOS Report (2020): 31 percent of IT projects succeed on contemporary definitions, 50 percent are challenged, and 19 percent fail.

Frequently asked questions

Why are my AI tools sitting unused after rollout?

Most often it's a training gap, not a tool gap. Two in three employees describe employer-supplied AI tools as partial, ineffective, or insufficient (Fyxer 2026), and the framing identifies it as training. People who try a tool briefly, get mediocre results, and return to familiar methods never learn the techniques the tool needs to actually work.

Should I swap the tool if my team isn't using it?

Not without auditing the training first. A new tool inherits the same training gap, costs more in switching, and resets whatever competence the team had built. If the audit (who got trained, on what, with what success criteria) returns vague answers, design a six-week training programme around real tasks before deciding.

What does good AI training in a small services firm actually look like?

One hour a week for six weeks, with one internal champion who is already getting useful results, working through real tasks with two or three colleagues at a time. The tasks are the team's own work. Success is the team doing the work alone with the tool by week six.

How do I know if the training programme worked?

The signal is not enthusiasm in week two; it is the dashboard showing routine usage in week eight, after the formal sessions have stopped. If usage is still empty by week eight, you have a clean diagnosis: either the wrong tasks were chosen, the wrong champion was chosen, or the tool genuinely doesn't fit. The next decision is now much sharper.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
