Rebuilding trust after a botched AI rollout

TL;DR

Frances Frei and Anne Morriss describe trust as the product of reliability, empathy, authenticity, and logic. AI tools struggle on all four simultaneously, which is why staff resistance after a failed rollout is rational. Rebuilding trust takes pulse surveys for sentiment, skip-level meetings for unfiltered insight, and structured retrospectives for shared learning, done in that order, before recommitting to a second engagement.

Key takeaways

- Frei and Morriss define trust as four elements: reliability, empathy, authenticity, logic. AI tools wobble on all four at once, so staff resistance is rational.
- Pulse surveys deployed within weeks of a stalled engagement give five-to-ten-question reads on sentiment, repeated monthly to track change.
- Skip-level meetings (small groups of 5 to 10, two or three layers below the executive) surface what middle management cannot or will not relay.
- Retrospectives on the failed engagement are learning exercises, not blame exercises: multi-layer participation, a neutral facilitator, and a shared narrative as the output.
- Shadow AI is a signal: 41 percent of Millennial and Gen Z employees admit to "quietly sabotaging" company AI strategy when trust is low. The 60 percent of desk workers using AI as of late 2025 are a diagnostic, not a discipline issue.
- The adoption gap between executives (43 percent daily AI use), senior managers (35 percent), and middle managers (23 percent) is the single most diagnostic number in the room.

The CEO of a 25-person consulting firm announces in the Tuesday all-hands that the firm has decided to bring in a new AI partner. The room nods politely. Nobody asks a question. After the meeting, two senior people send him separate Slack messages about scheduling conflicts. The product lead pings him to flag a “concern about timing”. He has felt it before. Trust has gone, and he is the last person to notice.

The trust deficit after a failed AI rollout is real, and rebuilding it is a precondition for the second engagement, not an optional follow-on. The good news is that the rebuild is more concrete than founders typically expect. There is a working framework for what trust is, and three operational tools for restoring it. None of them require hiring anyone new. All of them work within the existing rhythm of how the firm runs.

Why does staff resistance feel rational after a failed rollout?

Frances Frei and Anne Morriss describe trust as the product of four elements: reliability, empathy, authenticity, and logic. When any one wobbles, trust collapses. AI tools tend to wobble on all four at once. Outputs are uncertain, because the model was trained on data that may not reflect current reality and hallucinations are real (reliability). Generic tools do not grasp internal context (empathy).

Vendor marketing claims about productivity gains often exceed what people see in practice (authenticity). The system’s reasoning is opaque, so staff cannot see why one recommendation came back instead of another (logic).

Staff resistance to AI after a failed first engagement is not irrational obstruction. It is a rational response to unearned trust on four dimensions at once. The Society for Industrial and Organisational Psychology’s research on workplace change reinforces the point. Perceived automation risk alone heightens stress and disengagement, even when no displacement occurs. If staff watched the first engagement closely, trying to read whether their role was being automated away, that observation does not reset when the firm announces round two.

The implication for a founder rebuilding trust: the work is to address the four elements concretely, with consistency over time. Statements of intent do not move the needle. Repeated, observable behaviour does.

How do pulse surveys surface sentiment fast?

A pulse survey is short, focused, and frequent: five to ten questions, deployed at intervals to track sentiment over time. Gallup’s framework for employee engagement recommends pulse surveys precisely when an organisation is in a period of change, uncertainty, or improvement. After a stalled AI engagement is exactly that period. The survey gives leadership a reading on what staff are actually experiencing, before the second engagement design is locked.

Useful question wording is direct, behavioural, and grounded. Ask whether staff understand why the organisation is investing in AI, whether they believe the organisation will support them if their role changes, whether they have the skills they need to use the available AI tools, and whether they trust the organisation is making AI decisions in the team’s best interests. These are answerable questions about lived experience, not validation prompts.

Designing the survey in collaboration with an employment lawyer is a small step that pays back. Some questions can inadvertently create expectation liability. A 30-minute review prevents that and makes the questions more useful at the same time. Run the same questions monthly so you see drift, not just one snapshot.

What do skip-level meetings reveal that surveys miss?

A skip-level meeting is a conversation between a senior leader and staff two or three layers below them in the hierarchy. The format works because middle managers, however well-intended, filter what reaches the top. Staff often have insights about how the failed engagement actually unfolded that never made it into a steering committee summary. Skip-levels surface those insights without putting individuals in awkward upward-feedback positions.

Fisher Phillips’s guidance on running skip-level meetings recommends small groups of five to ten staff, with the purpose stated openly: this is a listening session about the AI engagement and what would be different next time. An icebreaker establishes psychological safety. Questions are about experience, not evaluation. “Walk me through your experience when the tool was deployed.” “What would have needed to happen differently for it to feel successful?” “What concerns do you have about a second engagement?”

What typically surfaces is the gap between what middle management thinks happened and what staff actually experienced. That gap is often the diagnostic for why the rollout stalled in the first place. Without skip-levels, the gap stays invisible into the second engagement.

How should you run the retrospective on the failed engagement?

A retrospective is a structured group meeting in which a team reflects on a recent project, documents what went well and what did not, and identifies lessons. Atlassian’s post-implementation review template is a usable starting point. The retrospective on a failed AI engagement should involve people from multiple layers.

Specifically: the project sponsor, the IT lead, the operational staff who were expected to use the tool, the vendor representative if a relationship remains, and ideally a neutral facilitator with retrospective experience.

The framing matters. This is a learning exercise, not a blame exercise. The questions are: what were we trying to accomplish, what did we think would happen, what actually happened, what surprised us, what would we do differently, what did we learn about how change works in our organisation. The output is a shared narrative, captured in writing, that the second engagement design can reference.

A well-run retrospective creates two things at once. The first is documented insight. The second, equally important, is a shared experience of being honest about something that did not work. That experience itself shifts trust. Founders who run a real retrospective consistently report that the second engagement starts on different footing.

What does shadow AI tell you about trust?

Shadow AI is best read as diagnostic information. The Writer 2025 report found that 41 percent of Millennial and Gen Z employees admit to “quietly sabotaging” their company’s AI strategy when trust in the process breaks down. By late 2025, around 60 percent of desk workers were using AI in some capacity, often outside the official portfolio.

The use is not random. It tracks problems staff have identified that they believe AI can solve.

The right response is to invite the conversation. Ask staff to document what they are using shadow tools for, what benefits they are getting, and what obstacles they hit when trying to do that work inside the official systems. The patterns that emerge typically reveal three things at once: where the official strategy is missing capability, where communication about available tools has been weak, and where staff autonomy to experiment is being constrained.

Reading shadow AI this way changes its meaning. It stops being a compliance problem and becomes a feedback channel. The four-move response covered in the existing post on shadow AI as feedback is the operational playbook. In the trust-rebuilding moment, what matters is the framing: shadow AI is the team telling you where the official strategy fell short.

How do you close the leadership-to-floor adoption gap?

The Slack Workforce Index from Salesforce captures the most diagnostic single number: 43 percent of executives use AI daily, 35 percent of senior managers, 23 percent of middle managers. If leadership is not actively using the tools the organisation was asked to adopt, staff reasonably conclude the tools are not core to how the firm works. Adoption stalls accordingly.

The fix is direct. Senior leaders commit to daily or near-daily AI tool use for at least 30 days, on a specific business problem, and share the experience honestly. Including the frustrations. Especially the frustrations. When a leader speaks about a tool not working the way they expected, they make it psychologically safer for staff to acknowledge their own confusion rather than performing competence they have not yet built.

The same logic applies to involving frontline staff in the second engagement design. Staff close to operational processes are typically the highest-value source of insight about where AI can create real benefit. Including them in problem identification, pilot testing, and feedback loops produces adoption rates 25 to 35 percent higher than treating deployment as something that happens to staff. The trust rebuild and the engagement design are the same work.

The next post in the cluster covers vendor diligence after a burn. The diagnostic audit usually surfaces trust as one of the priorities, alongside the consolidation work in the previous post.

If you would like to walk through how the trust rebuild might run in your firm, book a conversation.

Sources

  • Aug Insights / Frei & Morriss four-element trust framework applied to AI. Source.
  • Gallup pulse surveys methodology, frequency, design. Source.
  • Fisher Phillips skip-level meetings format and questions. Source.
  • Atlassian post-implementation review playbook (retrospective format). Source.
  • Lead With AI: 41 percent Writer-report sabotage stat, AI champion programme guidance. Source.
  • Salesforce / Slack Workforce Index: 43 / 35 / 23 percent executive / senior / middle adoption gap. Source.
  • SIOP I-O Psychology on AI-driven organisational change and perceived job insecurity. Source.
  • MIT NANDA (August 2025). 95 percent of GenAI pilots fail to deliver ROI; the failure-rate baseline behind recovery work. Source.
  • McKinsey & Company (2024). From Promise to Impact, How Companies Can Measure and Realise the Full Value of AI. Five-layer measurement framework the recovery work rebuilds against. Source.
  • Standish Group, CHAOS Report (2020). 31 percent of IT projects succeed on contemporary definitions; 50 percent are challenged; 19 percent fail. Source.

Frequently asked questions

Why is staff resistance to AI rational after a failed engagement?

Trust depends on reliability, empathy, authenticity, and logic. AI tools struggle on all four simultaneously: outputs are uncertain (reliability), generic tools do not grasp internal context (empathy), vendor marketing claims feel inauthentic (authenticity), and the system's reasoning is opaque (logic). When a first engagement compounds this with broken process commitments, staff enter the second engagement with a trust deficit by default.

What questions should pulse surveys ask after a stalled rollout?

Direct, behaviourally grounded questions like 'I understand why our organisation is investing in AI', 'I believe the organisation will support me if my role changes due to AI', 'I have the skills I need to use the AI available to me', 'I trust the organisation is making AI decisions in our team's best interests'. Five to ten questions, monthly cadence. Designed in collaboration with an employment lawyer to avoid creating expectation liability.

How should skip-level meetings be structured for AI trust rebuilding?

Small groups of five to ten staff, two or three layers below the senior leader. Stated purpose: understand what happened during the first engagement, what staff learned, what would increase confidence in a second. Format includes an icebreaker for psychological safety. Questions focus on experience, not evaluation. Surfaces issues staff will not raise through formal channels or to direct managers.

What is shadow AI actually telling you?

That the official strategy has not delivered enough value or has created enough friction that staff are working around it. 41 percent of Millennial and Gen Z employees admit to quietly sabotaging company AI strategy when trust breaks down. The right response is to invite staff to document what they are using shadow tools for and what obstacles they hit in official systems. Diagnostic, not disciplinary.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30-minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
