The four posture shifts in a second AI engagement

TL;DR

A second AI engagement that succeeds is structured differently from the first. Budgets are milestone-gated rather than calendar-distributed, with 15 to 20 percent contingency and 20 to 30 percent of implementation cost reserved for post-go-live operations. Governance includes a real steering committee with weekly meetings during implementation. Staff investment includes a champions network at 5 to 10 percent of users with explicit time allocation. Scope starts smaller, with hard kill criteria and a defined sunset date.

Key takeaways

- Budget posture: milestone-gated, with funding released on named deliverables. 15 to 20 percent contingency. 20 to 30 percent of implementation cost reserved as post-go-live operational support for the first 12 months.
- Governance posture: steering committee meeting weekly during implementation, monthly during stabilisation. Executive sponsor, internal project sponsor, IT lead, independent challenge voice. RACI matrix for decision authority before the project starts.
- Staff posture: AI Champion Programme at 5 to 10 percent of users, one champion lead per 10 to 20 champions. 20 to 30 percent of each champion's time allocated structurally for the first three months. Training spend roughly 10x the first engagement.
- Scope posture: smaller pilot. Single business problem. 50 to 100 users or one team. Defined timeline of 4 to 8 weeks. Hard kill criteria, not aspirational ones. Sunset date defined before the pilot starts.
- Prosci's data: excellent change management correlates with 80 percent project objective achievement, against 14 percent for poor change management.
- The shifts are not novel methodology. They are the structure most first engagements skipped.

A founder is in a working session with her newly chosen AI partner, halfway through scoping the second engagement. He has handed her a familiar-looking statement of work: 12 weeks, fixed price, three workstreams, deliverables on a Gantt chart. She has been looking at this kind of document for fifteen years. After the previous engagement, she no longer knows whether it is a good engagement design or just a typical one. She wants to be specific about what should be different.

The honest answer is the goal stays the same. Success is still success. What changes is the structure of how the engagement is funded, governed, staffed, and scoped. Four posture shifts, each with concrete numbers from the change-management and AI-deployment literature. None of them are novel. All of them are what most first engagements skipped.

How should budget posture shift?

The first engagement was probably resourced on a fixed budget across a fixed timeline, with the expectation that scope would be delivered inside both. That model is poorly suited to technology implementation in uncertain environments. The second engagement should use a milestone-based budget instead, with funding released on completion of named deliverables.

Abacum’s framework on milestone-based budgeting positions this as a way of aligning resources with project deliverables to improve accountability and visibility, with predetermined success criteria at each gate.

For an AI engagement, the first milestone is completion of the diagnostic assessment with clear findings about current state and gaps. Funding for the next phase releases on sign-off. The second milestone is completion of a prioritised implementation roadmap and a business case for the specific AI use case. Implementation funding releases only after the roadmap and business case are approved. Each milestone is a natural decision point where the firm can pause, reassess, or redirect.

The top line should be lower than for the first engagement, with an explicit 15 to 20 percent contingency. Allocate 20 to 30 percent of implementation cost to ongoing post-go-live operational support for the first 12 months. Treat that as core, not optional. A first engagement that allocated zero post-go-live budget is the engagement where the system goes live, breaks in production, and nobody is funded to fix it.
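For readers who like to see the arithmetic, here is a minimal sketch of a milestone-gated budget in Python. The milestone names, their funding shares, and the 100,000 top-line figure are hypothetical placeholders, not recommendations for any specific engagement; the contingency and post-go-live percentages are the midpoints of the ranges above.

```python
# Minimal sketch of a milestone-gated budget. All figures are
# hypothetical placeholders; the percentages follow the ranges in
# the text (15 to 20 percent contingency, 20 to 30 percent of
# implementation cost reserved for post-go-live support).

IMPLEMENTATION_COST = 100_000  # hypothetical top-line figure

milestones = [
    # (deliverable that releases the tranche, share of implementation cost)
    ("Diagnostic assessment signed off", 0.20),
    ("Roadmap and business case approved", 0.30),
    ("Pilot go/no-go decision recorded", 0.50),
]

contingency = 0.175 * IMPLEMENTATION_COST   # midpoint of 15-20%
post_go_live = 0.25 * IMPLEMENTATION_COST   # midpoint of 20-30%, first 12 months

released = 0.0
print("Tranche released on each deliverable:")
for deliverable, share in milestones:
    tranche = share * IMPLEMENTATION_COST
    released += tranche
    print(f"  {deliverable}: {tranche:,.0f} (cumulative {released:,.0f})")

print(f"Contingency reserve: {contingency:,.0f}")
print(f"Post-go-live support: {post_go_live:,.0f}")
print(f"Total commitment: {released + contingency + post_go_live:,.0f}")
```

The point of the structure, not the numbers: no tranche moves without a named deliverable, and the post-go-live reserve is a line item from day one rather than an afterthought.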

What does real governance look like the second time?

The first engagement may have had a project manager and not much else. The second engagement should establish an explicit steering committee meeting weekly during implementation and monthly during stabilisation. The Change Compass research on operational metric tracking during change makes the case directly: without weekly visibility, course corrections happen too late to matter.

Steering committee membership should include the executive sponsor (C-level, betting reputation on the engagement landing), the internal project sponsor (the operational leader who owns the system post-go-live), the IT lead, and an independent challenge voice from inside or outside the firm.

The committee reviews more than project metrics. Schedule, budget, and scope are necessary, but they only tell you whether delivery is on track. The genuinely diagnostic numbers are operational: processing times, quality measures, staff satisfaction. If implementation is causing operational disruption, those metrics surface it before go-live. The committee’s job is to act on that signal, not to wait for the post-implementation review.

Decision authority should be clear before the project starts. A simple RACI matrix (Responsible, Accountable, Consulted, Informed) for the major decisions saves weeks of confusion later. Who decides whether to proceed from pilot to full rollout? Who decides if scope is cut? Who decides if timeline is extended? The answers belong in writing on day one.
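A RACI matrix needs no special tooling. Here is a minimal sketch of one as plain data, with a sanity check that every decision has exactly one accountable owner; the roles and decision assignments shown are illustrative assumptions, not a prescribed structure.

```python
# Minimal sketch of a RACI matrix as a lookup table. Roles and
# assignments are illustrative assumptions; the point is that each
# major decision has exactly one Accountable owner, in writing,
# before the project starts.

raci = {
    # decision: {role: "R" | "A" | "C" | "I"}
    "Proceed from pilot to full rollout": {
        "Executive sponsor": "A",
        "Internal project sponsor": "R",
        "IT lead": "C",
        "Challenge voice": "C",
    },
    "Cut scope": {
        "Executive sponsor": "I",
        "Internal project sponsor": "A",
        "IT lead": "R",
        "Challenge voice": "C",
    },
    "Extend timeline": {
        "Executive sponsor": "A",
        "Internal project sponsor": "R",
        "IT lead": "C",
        "Challenge voice": "I",
    },
}

# Sanity check: exactly one Accountable role per decision.
for decision, roles in raci.items():
    accountable = [r for r, code in roles.items() if code == "A"]
    assert len(accountable) == 1, f"{decision}: needs exactly one 'A'"
    print(f"{decision}: accountable = {accountable[0]}")
```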

Why does staff posture matter more than tooling?

The single most under-resourced area in failed first engagements is the people side of the change. The Lead With AI guidance on champion programmes, drawing on cross-sector deployment data, makes the operational case. Aim for 5 to 10 percent of the initial AI user base to be part of a peer-led champion network. Have one champion lead per 10 to 20 champions.

Select by trusted network: the people others go to with questions, the ones already helping colleagues informally without being asked. The loudest advocates are usually not the most effective ones.

Champions need explicit time allocation, structurally, not as an add-on to their normal role. Allocate 20 to 30 percent of each champion’s time to champion activities for the first three months, scaling back to 10 percent as the programme matures. Without the time allocation the programme exists on paper and produces nothing.
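As a back-of-envelope check on the sizing, here is a minimal sketch in Python. The 400-user base is a hypothetical input; the ratios and time allocations come straight from the figures above.

```python
# Minimal sketch of champion programme sizing. The user count is a
# hypothetical input; the ratios follow the text: 5-10% of users as
# champions, one lead per 10-20 champions, 20-30% time allocation
# for the first three months, tapering to 10%.
import math

users = 400  # hypothetical initial AI user base

champions_low = math.ceil(users * 0.05)
champions_high = math.ceil(users * 0.10)
leads_low = math.ceil(champions_low / 20)
leads_high = math.ceil(champions_high / 10)

print(f"Champions: {champions_low} to {champions_high}")
print(f"Champion leads: {leads_low} to {leads_high}")

# Time commitment per champion, in days per month (~21 working days).
for phase, share in [("months 1-3", 0.25), ("after month 3", 0.10)]:
    print(f"{phase}: ~{share * 21:.1f} days/month per champion")
```

At 400 users that is roughly 20 to 40 champions, a handful of leads, and about a week per month of each champion's time in the early phase: a real cost, which is exactly why it needs to be budgeted structurally.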

Training spend in the second engagement should be roughly 10x what the first allocated. The Prosci change-management framework lays out the training stages: awareness training that explains why the change is happening, knowledge training that teaches staff how to use the tools, ability training delivered through hands-on practice during pilots, and reinforcement training that supports sustaining change over time. The 10x figure sounds large until set against the cost of a rollout that does not land. The Prosci data on integrated change management is direct: excellent change management correlates with 80 percent project objective achievement, against 14 percent for poor change management. The training spend is the dominant lever.

How should scope posture change?

Smaller pilot, harder kill criteria, defined sunset. The first engagement may have piloted broadly, creating a situation where the pilot became pseudo-production and the transition to full rollout was unclear. The second engagement should define a specific pilot for a single, well-defined business problem, involving a limited number of users (typically 50 to 100, or one team), with a defined timeline of 4 to 8 weeks ending in a go/no-go decision.

Kill criteria should be hard, not aspirational. “The pilot should achieve adoption of 50 percent or more” is a sentence, not a measurement. “At least 70 percent of pilot participants use the tool at least three times per week for 30 days” is testable. “The tool reduces time to complete the identified process by at least 20 percent, measured by comparing same-process steps for pilot participants versus non-participants” is testable. Define the criteria before the pilot begins and commit to enforcing them regardless of how much money has been spent or how close to success the pilot feels.
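To make "testable" concrete, here is a minimal sketch that evaluates both example criteria against usage data. The data is fabricated for illustration and the thresholds mirror the examples above; the point is that the go/no-go decision falls out of the numbers, not a feeling.

```python
# Minimal sketch of hard kill-criteria evaluation. The usage data is
# fabricated for illustration; the thresholds mirror the examples in
# the text (70% of participants using the tool 3+ times per week,
# 20% reduction in process time versus non-participants).

# Average weekly uses per pilot participant over the 30-day window.
sessions_per_week = {"a": 4.0, "b": 1.0, "c": 5.0, "d": 3.5, "e": 2.0}

# Minutes to complete the identified process.
pilot_minutes = [42, 38, 45, 40]
control_minutes = [55, 60, 52, 58]

active = sum(1 for v in sessions_per_week.values() if v >= 3)
adoption = active / len(sessions_per_week)

avg_pilot = sum(pilot_minutes) / len(pilot_minutes)
avg_control = sum(control_minutes) / len(control_minutes)
time_reduction = 1 - avg_pilot / avg_control

passed = adoption >= 0.70 and time_reduction >= 0.20
print(f"Adoption: {adoption:.0%} (threshold 70%)")
print(f"Time reduction vs control: {time_reduction:.0%} (threshold 20%)")
print("Go/no-go:", "go" if passed else "no-go")
```

In this fabricated run the time-reduction criterion passes but adoption lands at 60 percent, so the answer is no-go regardless of how promising the pilot feels. That is the discipline hard criteria buy.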

A sunset date matters as much as kill criteria. If the pilot meets its criteria, when does the firm move to full rollout? If it misses them, when does the firm stop and reassess? Without a sunset date, pilots get extended indefinitely because they are “almost there”. The discipline of a defined date prevents that.

How do these shifts work together?

The four postures reinforce each other. Milestone-gated budget gives the steering committee real decision points. Real governance gives the champions network political cover. The champions network creates the adoption that turns the pilot into a meaningful test. The smaller pilot generates the early data that informs the next milestone funding decision.

In failed first engagements, none of the four were in place. The budget was calendar-distributed. Governance was a project manager and a status report. Staff investment was a vendor demo and a slide deck. Scope was the comprehensive transformation. Each gap compounded the others.

In second engagements that succeed, all four are in place from day one. The structure is boring, recognisable, and grounded in the change-management literature. It is exactly the structure that most first engagements skipped because it did not feel necessary at the time.

The next post in the cluster covers what the data actually says about second-engagement success, including the project-size effect from the CHAOS Report and McKinsey’s research on stalled-transformation recovery. The diagnostic audit typically informs which of the four posture shifts deserves the most weight.

If you would like to apply this structure to a second engagement you are scoping, book a conversation.

Sources

  • Abacum (2025). Milestone-based budgeting framework: deliverable-funded gates, contingency reserves.
  • The Change Compass (2025). Operational metric tracking during change implementation; steering committee composition.
  • Lead With AI (2025). AI Champion Programme design: sizing, time allocation, selection criteria.
  • Prosci (2025). Change management framework: awareness, knowledge, ability, reinforcement.
  • AndChange (2025). Prosci ROI on integrated change management: 80 percent versus 14 percent project objective achievement.
  • OCM Solution. Stage-gate process methodology for technology projects.
  • USDM (2025). Project management rules of engagement; 30 to 40 percent change-management budget allocation.
  • MIT NANDA (August 2025). 95 percent of GenAI pilots fail to deliver ROI; the failure-rate baseline behind recovery work.
  • McKinsey & Company (2024). From Promise to Impact: How Companies Can Measure and Realise the Full Value of AI. Five-layer measurement framework the recovery work rebuilds against.
  • Standish Group (2020). CHAOS Report. 31 percent of IT projects succeed on contemporary definitions; 50 percent are challenged; 19 percent fail.
  • MIT CISR (Woerner, Sebastian, Weill and Kaganer, 2025). Grow Enterprise AI Maturity for Bottom-Line Impact. Stage 3 enterprises grow 11.3pp above industry average; the maturity baseline recovery aims for.
  • Boston Consulting Group (2025). Are You Generating Value from AI? The Widening Gap. Five percent of future-built firms achieve five times the revenue gains and three times the cost reductions of peers.

Frequently asked questions

What is milestone-based budgeting for an AI engagement?

Funding released upon completion of named deliverables, not distributed across a fixed calendar schedule. First milestone is diagnostic completion. Second is a prioritised roadmap with a business case. Implementation funding releases only after roadmap sign-off. Each milestone is a natural decision point where the firm can pause, reassess, or redirect.

How big should the AI Champion Programme be?

Aim for 5 to 10 percent of the initial AI user base, with one champion lead per 10 to 20 champions. Selection by trusted network rather than by who is loudest. Allocate 20 to 30 percent of each champion's time structurally for the first three months, scaling back to 10 percent as the programme matures.

What does a real steering committee look like for an AI engagement?

Weekly meetings during implementation, monthly during stabilisation. Members include the executive sponsor (C-level, betting reputation), internal project sponsor (operational owner post-go-live), IT lead, and an independent challenge voice (external or internal). Reviews not just project metrics (schedule, budget, scope) but operational metrics: processing times, quality measures, staff satisfaction.

What should pilot kill criteria look like?

Hard, not aspirational. “70 percent of pilot users use the tool 3+ times per week for 30 days” is testable. “Achieve adoption of 50 percent” is not. Set the criteria before the pilot starts and commit to enforcing them regardless of how much money has been spent or how close the pilot feels to success.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
