AI rollouts don't stall in the tech, they stall in the middle

[Image: A founder sitting at a kitchen table on a Friday evening with a laptop, a notepad and a glass of wine, looking past the screen in a moment of realisation.]
TL;DR

About 70% of AI pilot failures come from people-and-process gaps, not tech, and 68% of middle managers say they are worried about AI's career impact. Resistance shows up as plausible objections that never resolve: data quality first, team isn't ready, pilot next quarter. Until managing AI-augmented work counts as real management experience in your firm, your managers will defend the team-size definition that pays them now. The fix is incentive design, mandated from the top.

Key takeaways

- 70% of AI pilot failures come from people and process gaps, not technology, and 71% of companies cite organisational culture as the top barrier to adoption.
- 68% of middle managers report worry about AI's career impact. Slow-walking is the rational response when they are measured on team size, headcount and budget owned.
- The resistance pattern is plausible objections that never resolve: data quality first (no date), team not ready (no criteria), pilot next quarter (last quarter still pending).
- The fix is structural. "Number of AI-augmented workflows shipped" or "yield produced per head" has to count formally in promotion and compensation, not as a slogan.
- The mandate has to come from the founder. The level above the people whose incentives are changing is the only level that can credibly change them.

A founder I know was sitting at his kitchen table on a Friday evening, nine months into an AI rollout that had not actually shipped anything. Sixty staff, mid-market services. The senior leadership team had been quietly enthusiastic from the start. The operators on the ground, the people who would actually use the tools, had been visibly enthusiastic. Every fortnight there was a fresh, plausible reason that nothing had moved. Data quality. Team readiness. The next quarter looked better.

He realised, somewhere between the second glass of wine and the end of the working week, that the people in the middle had been running the same play for nine months. Not maliciously. Rationally. They had been running it on purpose, because they were paid to.

Why has the rollout been quiet for nine months?

The answer is almost never the technology. Research on AI pilot failures puts roughly 70% of the reasons in people and process, and 71% of companies cite organisational culture as the top barrier to adoption. Better tooling and longer training will not move that. The silence is a behaviour produced by an incentive structure, sitting two layers below the founder, that is functioning exactly as designed.

Founders consistently misdiagnose this. They assume sharper internal comms or a more polished vendor demo will tip the rollout into motion. The signal they keep getting back is yes-in-principle followed by no-in-practice, and they read it as a communication problem. It is not. The structure is doing what the structure was set up to do.

What is the slow-roll actually for?

It is for survival. 68% of middle managers say they are worried about AI’s effect on their careers, and the typical manager in your firm is measured, formally or informally, on three things: team size, headcount under direct authority and budget owned. An AI rollout that automates routine work reduces all three. Slow-walking is the rational response.

The vocabulary is consistent across firms once you know what to listen for. “We need better data quality first.” “The team isn’t ready.” “Let’s pilot next quarter.” Each objection is plausible. The pattern is that the objections refresh rather than resolve. There is rarely a date attached. There are rarely readiness criteria written down. Last quarter’s pilot is still pending.

Once you can name the play, you can see it. And you can stop being annoyed at the people running it. They are behaving rationally given how you have defined their job. The annoyance belongs to the job definition, not to them.

How do you fix the incentive, not the messaging?

You fix it in the document the manager actually reads, which is the review form. “Number of AI-augmented workflows shipped this quarter” needs to be a line on it. “Hours of routine work removed and redeployed to higher-value tasks” needs to be a line. “Yield produced per head” needs to be a line. Each one a real input to promotion and pay decisions, not a slogan on a wall.

Until that happens, the manager is being asked to risk the things they are paid for in exchange for praise at an all-hands. They know that is not a fair trade. So do you, when you stop and think about it from their seat.

The corollary matters. You do not solve this by firing managers, and you do not solve it by removing the management layer. Good management matters more when AI is in the picture, because the manager is now responsible for a mixed team of people and automated workflows, with quality and judgement decisions running across both. The point is that the definition of good has to update. Managing AI agents has to count as real management experience inside your business, on a par with managing people. If it does not, your most capable managers will quietly steer their careers away from anything that involves it.

Why does this have to come from you, not the COO?

Because the level above the people whose incentives are changing is the only level that can credibly change them. A COO asking middle managers to redefine their own role is asking them to volunteer their own diminishment. They will not, and they should not be expected to. A founder making the change is restating the deal: the firm now rewards yield produced, not headcount commanded.

The CEO and AI commentator Daniel Shapiro put it bluntly in his analysis of the firms that have made this transition successfully: it comes from the very top, not the CTO but the CEO. The structural reason is simple. The incentive change has to come from the level that controls the incentives, otherwise it lands as a request rather than a redefinition.

This is the part founders flinch at, because it sounds like a mandate, and it is. Voluntary middle-out adoption consistently underperforms top-down change in the published research and in every engagement I have seen up close. The mandate is the change. If you will not say it, no one below you can make it real.

What is the conversation to have on Monday?

Not “are you on board with the rollout”. You will get yes. You have been getting yes for nine months. The honest question is closer to this: what will it take to make sure your role is better in twelve months than it is today, when half of what you currently manage is automated? That question changes the conversation from a defence of the current job to a design of the next one.

It also gives you a real answer. Some managers will tell you exactly what they need: a development path into the AI side of the work, a different way of being measured, a credible story they can take home about why the next twelve months are not a slow goodbye. A few will reveal they do not see a next role for themselves in your firm, which is information you needed anyway. The rest will start telling you, in more useful detail than they ever have before, what the rollout actually requires from you.

If you cannot answer that question with them, you do not have a rollout problem. You have a management-design problem, and AI is the tool currently making it visible. Any other change initiative would have surfaced the same wiring eventually. AI is just faster, which is why it is exposing the wiring now rather than in five years.

If you would like a second pair of eyes on where your AI rollout has actually stalled, and what the redesigned management layer in your firm looks like on the other side, book a conversation.

Sources

- SaaS Europe (2024). The middle management paradox in AI adoption. Source for the 70% organisational-failure figure, the 68% manager-worry stat and the 71% culture-as-top-barrier figure. https://saas.eu.com/post/the-middle-management-paradox
- BCG (2025). The AI adoption puzzle: why usage is up but impact is not. Quantifies the gap between AI usage and enterprise-level business impact, and the role of management modelling in adoption that translates. https://www.bcg.com/publications/2025/ai-adoption-puzzle-why-usage-up-impact-not
- McKinsey QuantumBlack (2025). The state of AI: how organisations are rewiring to capture value. Evidence that organisational redesign, not tooling, separates AI investment that produces ROI from investment that does not. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
- OpenAI (2025). ChatGPT usage and adoption patterns at work. Frontier-user data shows distributed access beats concentrated expertise for enterprise productivity. https://openai.com/business/guides-and-resources/chatgpt-usage-and-adoption-patterns-at-work/
- Harvard Business Review (2026). AI doesn't reduce work, it intensifies it. Mechanism for why managers experience AI as additional load rather than relief, which feeds the slow-roll. https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it
- Anthropic Economic Index (2025). September 2025 report on AI use in the economy. Adoption distribution across roles and seniority, useful for sizing where blockers concentrate. https://www.anthropic.com/research/anthropic-economic-index-september-2025-report
- Stanford HAI. The AI overreliance problem: are explanations the solution? Evidence on why human review collapses when AI handles routine cases, relevant to how managers position the risk argument. https://hai.stanford.edu/news/ai-overreliance-problem-are-explanations-solution
- Federal Reserve Bank of St Louis (2025). Generative AI, productivity, and the future of work. Macro view on where AI productivity gains accrue, and why mid-organisation friction blunts them. https://www.stlouisfed.org/open-vault/2025/oct/generative-ai-productivity-future-work
- EOS Worldwide. The Integrator: a breakdown of the role. Definition of the operating-system role most threatened, on paper, by AI-augmented workflows. https://www.eosworldwide.com/blog/the-integrator-a-breakdown-of-the-role
- MSP Radio (2025). AI to drive 50 percent of business decisions by 2027; SMBs struggle with skills and adoption. Trade-press synthesis on the SMB adoption skills gap. https://mspradio.com/podcast/ai-to-drive-50-of-business-decisions-by-2027-smbs-struggle-with-skills-and-adoption/

Frequently asked questions

Isn't slow adoption just sensible caution about a new technology?

Caution is a date and a criterion. "We need better data quality before we ship the agent, by 30 June, measured against these three checks" is caution. "We need better data quality first" with no date and no checks is a stall dressed as caution. The pattern to look for is objections that refresh rather than resolve. Each one looks reasonable. Together they keep the rollout in permanent runway.

Are middle managers being unreasonable, then?

No. They are being rational, given how you have defined their job. If a manager's standing inside the firm depends on team size, headcount and budget owned, an initiative that automates tasks reduces all three. They are defending the role you incentivised them to defend. The answer is to update what good management looks like in your business, not to moralise about the people running the play you set up.

How do I know if this is happening to me?

Three signs. First, your senior leadership team and your operators on the ground are both keen, and nothing is shipping. Second, the reasons for delay change every fortnight but the timeline never moves. Third, when you ask "what would unblock this", you get a list of things that depend on someone else acting first. If two or three of those are true, you have a middle-layer incentive problem, not a tooling problem.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
