The CEO of a 25-person consulting firm announces in the Tuesday all-hands that the firm has decided to bring in a new AI partner. The room nods politely. Nobody asks a question. After the meeting, two senior people send him separate Slack messages about scheduling conflicts. The product lead pings him to flag a “concern about timing”. He has felt this before. Trust is gone, and he is the last person to notice.
The trust deficit after a failed AI rollout is real, and rebuilding it is a precondition for the second engagement, not an optional follow-on. The good news is that the rebuild is more concrete than founders typically expect. There is a working framework for what trust is, and three operational tools for restoring it. None of them require hiring anyone new. All of them work within the existing rhythm of how the firm runs.
Why does staff resistance feel rational after a failed rollout?
Frances Frei and Anne Morriss describe trust as the product of three elements: authenticity, logic, and empathy. When any one wobbles, trust collapses. AI tools tend to wobble on all three at once. Logic wobbles twice over: outputs are uncertain because the model was trained on data that may not reflect current reality, hallucinations are real, and the system’s reasoning is opaque, so staff cannot see why one recommendation came back instead of another. Empathy wobbles because generic tools do not grasp internal context. Authenticity wobbles because vendor marketing claims about productivity gains often exceed what people see in practice.
Staff resistance to AI after a failed first engagement is not irrational obstruction. It is a rational response to unearned trust on all three dimensions at once. The Society for Industrial and Organizational Psychology’s research on workplace change reinforces the point: perceived automation risk alone heightens stress and disengagement, even when no displacement occurs. If staff watched the first engagement closely, trying to read whether their role was being automated away, that vigilance does not reset when the firm announces round two.
The implication for a founder rebuilding trust: the work is to address each of the three elements concretely, with consistency over time. Statements of intent do not move the needle. Repeated, observable behaviour does.
How do pulse surveys surface sentiment fast?
A pulse survey is short, focused, and frequent: five to ten questions, deployed at intervals to track sentiment over time. Gallup’s framework for employee engagement recommends pulse surveys precisely when an organisation is in a period of change, uncertainty, or improvement. The aftermath of a stalled AI engagement is exactly that period. The survey gives leadership a reading on what staff are actually experiencing before the second engagement design is locked.
Useful question wording is direct, behavioural, and grounded. Ask whether staff understand why the organisation is investing in AI, whether they believe the organisation will support them if their role changes, whether they have the skills they need to use the available AI tools, and whether they trust that the organisation is making AI decisions in the team’s best interests. These are answerable questions about lived experience, not validation prompts.
Designing the survey in collaboration with an employment lawyer is a small step that pays back. Some questions can inadvertently create expectation liability. A 30-minute review prevents that and makes the questions more useful at the same time. Run the same questions monthly so you see drift, not just one snapshot.
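A minimal sketch of what that monthly tracking can look like, assuming a 1-to-5 agreement scale and hypothetical question keys (the names below are illustrative, not from Gallup or any survey tool; adapt them to your own wording). The point is the per-question trend, not the tooling:

```python
from statistics import mean

# Hypothetical question keys mirroring the four questions above.
QUESTIONS = [
    "understand_why_ai",     # I understand why the firm is investing in AI
    "org_will_support_me",   # I believe the firm will support me if my role changes
    "have_needed_skills",    # I have the skills to use the available AI tools
    "trust_ai_decisions",    # I trust AI decisions are made in the team's best interests
]

def monthly_average(responses):
    """responses: list of dicts mapping question key -> score on a 1-5 scale."""
    return {q: round(mean(r[q] for r in responses), 2) for q in QUESTIONS}

def drift(previous, current, threshold=0.3):
    """Return questions whose average moved more than `threshold` month over month."""
    return {
        q: round(current[q] - previous[q], 2)
        for q in QUESTIONS
        if abs(current[q] - previous[q]) > threshold
    }

# Synthetic example: two monthly snapshots, not real data.
march = monthly_average([
    {"understand_why_ai": 3, "org_will_support_me": 2,
     "have_needed_skills": 3, "trust_ai_decisions": 2},
    {"understand_why_ai": 2, "org_will_support_me": 3,
     "have_needed_skills": 3, "trust_ai_decisions": 2},
])
april = monthly_average([
    {"understand_why_ai": 4, "org_will_support_me": 2,
     "have_needed_skills": 3, "trust_ai_decisions": 3},
    {"understand_why_ai": 3, "org_will_support_me": 3,
     "have_needed_skills": 3, "trust_ai_decisions": 3},
])

print(drift(march, april))  # {'understand_why_ai': 1.0, 'trust_ai_decisions': 1.0}
```

Movement on one question while the others stay flat is exactly the kind of signal a single snapshot hides.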
What do skip-level meetings reveal that surveys miss?
A skip-level meeting is a conversation between a senior leader and staff two or three layers below them in the hierarchy. The format works because middle managers, however well-intentioned, filter what reaches the top. Staff often have insights about how the failed engagement actually unfolded that never made it into a steering-committee summary. Skip-levels surface those insights without putting individuals in awkward upward-feedback positions.
Fisher Phillips’s guidance on running skip-level meetings recommends small groups of five to ten staff, with the purpose stated openly: this is a listening session about the AI engagement and what would be different next time. An icebreaker establishes psychological safety. Questions are about experience, not evaluation. “Walk me through your experience when the tool was deployed.” “What would have needed to happen differently for it to feel successful?” “What concerns do you have about a second engagement?”
What typically surfaces is the gap between what middle management thinks happened and what staff actually experienced. That gap is often the diagnostic for why the rollout stalled in the first place. Without skip-levels, it stays invisible and carries straight into the second engagement.
How should the retrospective on the failed engagement run?
A retrospective is a structured group meeting in which a team reflects on a recent project, documents what went well and what did not, and identifies lessons. Atlassian’s post-implementation review template is a usable starting point. The retrospective on a failed AI engagement should involve people from multiple layers: the project sponsor, the IT lead, the operational staff who were expected to use the tool, the vendor representative if a relationship remains, and ideally a neutral facilitator with retrospective experience.
The framing matters. This is a learning exercise, not a blame exercise. The questions are: what were we trying to accomplish, what did we think would happen, what actually happened, what surprised us, what would we do differently, what did we learn about how change works in our organisation. The output is a shared narrative, captured in writing, that the second engagement design can reference.
A well-run retrospective creates two things at once. The first is documented insight. The second, equally important, is a shared experience of being honest about something that did not work. That experience itself shifts trust. Founders who run a real retrospective consistently report that the second engagement starts on different footing.
What does shadow AI tell you about trust?
Shadow AI, staff using AI tools outside the officially sanctioned set, is best read as diagnostic information. The Writer 2025 report found that 41 percent of Millennial and Gen Z employees admit to “quietly sabotaging” their company’s AI strategy when trust in the process breaks down. By late 2025, around 60 percent of desk workers were using AI in some capacity, often outside the official portfolio.
The use is not random. It tracks problems staff have identified that they believe AI can solve.
The right response is to invite the conversation. Ask staff to document what they are using shadow tools for, what benefits they are getting, and what obstacles they hit when trying to do that work inside the official systems. The patterns that emerge typically reveal three things at once: where the official strategy is missing capability, where communication about available tools has been weak, and where staff autonomy to experiment is being constrained.
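One lightweight way to structure that documentation, sketched below with hypothetical field names (this is not a standard schema, just one possible shape): one record per shadow-AI use, which makes the three patterns countable instead of anecdotal.

```python
from dataclasses import dataclass
from collections import Counter

# Hypothetical schema for one shadow-AI inventory entry.
@dataclass
class ShadowAIEntry:
    tool: str       # e.g. a personal chatbot account
    task: str       # what the staff member uses it for
    benefit: str    # the value they report getting
    obstacle: str   # why the official stack did not serve the task
    category: str   # "missing_capability" | "weak_communication" | "constrained_autonomy"

def pattern_counts(entries: list[ShadowAIEntry]) -> Counter:
    """Count entries per category to show where the official strategy fell short."""
    return Counter(e.category for e in entries)

# Synthetic examples, not real data.
entries = [
    ShadowAIEntry("personal chatbot account", "first drafts of client emails",
                  "saves roughly 30 minutes a day",
                  "official tool blocks pasting client names",
                  "missing_capability"),
    ShadowAIEntry("browser-based summariser", "summarising discovery calls",
                  "consistent meeting notes",
                  "did not know the firm licence already covered this",
                  "weak_communication"),
]

print(pattern_counts(entries))
# Counter({'missing_capability': 1, 'weak_communication': 1})
```

A spreadsheet does the same job; the value is in forcing each entry to name the obstacle, because the obstacle column is where the feedback lives.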
Reading shadow AI this way changes its meaning. It stops being a compliance problem and becomes a feedback channel. The four-move response covered in the existing post on shadow AI as feedback is the operational playbook. In the trust-rebuilding moment, what matters is the framing: shadow AI is the team telling you where the official strategy fell short.
How do you close the leadership-to-floor adoption gap?
The Slack Workforce Index from Salesforce captures the most diagnostic numbers: 43 percent of executives use AI daily, against 35 percent of senior managers and 23 percent of middle managers. If leadership is not actively using the tools staff were asked to adopt, staff reasonably conclude the tools are not core to how the firm works. Adoption stalls accordingly.
The fix is direct. Senior leaders commit to daily or near-daily AI tool use for at least 30 days, on a specific business problem, and share the experience honestly. Including the frustrations. Especially the frustrations. When a leader speaks about a tool not working the way they expected, they make it psychologically safer for staff to acknowledge their own confusion rather than performing competence they have not yet built.
The same logic applies to involving frontline staff in the second engagement design. Staff close to operational processes are typically the highest-value source of insight about where AI can create real benefit. Including them in problem identification, pilot testing, and feedback loops produces adoption rates 25 to 35 percent higher than treating deployment as something that happens to staff. The trust rebuild and the engagement design are the same work.
The next post in the cluster covers vendor diligence after a burn. The diagnostic audit usually surfaces trust as one of the priorities, alongside the consolidation work in the previous post.
If you would like to walk through how the trust rebuild might run in your firm, book a conversation.