A founder is in a working session with her newly chosen AI partner, halfway through scoping the second engagement. He has handed her a familiar-looking statement of work: 12 weeks, fixed price, three workstreams, deliverables on a Gantt chart. She has been looking at this kind of document for fifteen years, yet after the previous engagement she no longer knows whether it is a good engagement design or merely a typical one. She wants to be specific about what should be different.
The honest answer is that the goal stays the same. Success is still success. What changes is the structure of how the engagement is funded, governed, staffed, and scoped. Four posture shifts, each with concrete numbers from the change-management and AI-deployment literature. None of them are novel. All of them are what most first engagements skipped.
How should budget posture shift?
The first engagement was probably resourced on a fixed budget across a fixed timeline, with the expectation that scope would be delivered inside both. That model is poorly suited to technology implementation in uncertain environments. The second engagement should use a milestone-based budget instead, with funding released on completion of named deliverables.
Abacum’s framework on milestone-based budgeting positions this as a way of aligning resources with project deliverables to improve accountability and visibility, with predetermined success criteria at each gate.
For an AI engagement, the first milestone is completion of the diagnostic assessment with clear findings about current state and gaps. Funding for the next phase releases on sign-off. The second milestone is completion of a prioritised implementation roadmap and a business case for the specific AI use case. Implementation funding releases only after the roadmap and business case are approved. Each milestone is a natural decision point where the firm can pause, reassess, or redirect.
The top line should be lower than the first engagement's, with an explicit 15 to 20 percent contingency. Allocate 20 to 30 percent of implementation cost as ongoing post-go-live operational support for the first 12 months. Treat that as core, not optional. The first engagement that allocated zero post-go-live budget is the engagement where the system goes live, breaks in production, and nobody is funded to fix it.
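The budget arithmetic above can be sketched in a few lines. This is an illustrative calculation only; the midpoint rates and the 200,000 example figure are hypothetical, not from any specific engagement.

```python
def plan_budget(implementation_cost: float) -> dict:
    """Milestone-gated budget where contingency and post-go-live
    support are core line items, not afterthoughts."""
    contingency = implementation_cost * 0.175   # midpoint of the 15-20% range
    support = implementation_cost * 0.25        # midpoint of 20-30%, first 12 months post-go-live
    return {
        "implementation": implementation_cost,
        "contingency": round(contingency, 2),
        "post_go_live_support": round(support, 2),
        "total": round(implementation_cost + contingency + support, 2),
    }

budget = plan_budget(200_000)
```

A 200,000 implementation line therefore carries roughly 85,000 of contingency and support on top, which is the size of the gap a calendar-distributed first-engagement budget typically leaves unfunded.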
What does real governance look like the second time?
The first engagement may have had a project manager and not much else. The second engagement should establish an explicit steering committee meeting weekly during implementation and monthly during stabilisation. The Change Compass research on operational metric tracking during change makes the case directly: without weekly visibility, course corrections happen too late to matter.
Steering committee membership should include the executive sponsor (C-level, betting reputation on the engagement landing), the internal project sponsor (the operational leader who owns the system post-go-live), the IT lead, and an independent challenge voice from inside or outside the firm.
The committee reviews more than project metrics. Schedule, budget, and scope are necessary, but they only tell you whether the project is on time, not whether it is working. The genuinely diagnostic numbers are operational: processing times, quality measures, staff satisfaction. If implementation is causing operational disruption, those metrics surface it before go-live. The committee’s job is to act on that signal, not to wait for the post-implementation review.
Decision authority should be clear before the project starts. A simple RACI matrix (Responsible, Accountable, Consulted, Informed) for the major decisions saves weeks of confusion later. Who decides whether to proceed from pilot to full rollout? Who decides if scope is cut? Who decides if timeline is extended? The answers belong in writing on day one.
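A minimal sketch of such a matrix for the three decisions named above. The role names and assignments are hypothetical placeholders; the point is that each decision has exactly one Accountable owner, recorded before day one.

```python
# Hypothetical RACI matrix: Responsible, Accountable, Consulted, Informed.
RACI = {
    "pilot_to_full_rollout": {"R": "project_sponsor", "A": "executive_sponsor",
                              "C": ["it_lead", "challenge_voice"], "I": ["steering_committee"]},
    "cut_scope":             {"R": "project_manager", "A": "project_sponsor",
                              "C": ["it_lead"], "I": ["executive_sponsor"]},
    "extend_timeline":       {"R": "project_manager", "A": "executive_sponsor",
                              "C": ["project_sponsor"], "I": ["steering_committee"]},
}

def accountable(decision: str) -> str:
    """Return the single Accountable owner for a named decision."""
    return RACI[decision]["A"]
```

Even this small a table forces the conversation the text describes: if two people both claim the "A" on scope cuts, that conflict surfaces in week zero instead of week ten.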
Why does staff posture matter more than tooling?
The single most under-resourced area in failed first engagements is the people side of the change. The Lead With AI guidance on champion programmes, drawing on cross-sector deployment data, makes the operational case. Aim for 5 to 10 percent of the initial AI user base to be part of a peer-led champion network. Have one champion lead per 10 to 20 champions.
Select champions by trusted network: the people others go to with questions, the ones already helping colleagues informally without being asked. The loudest advocates are usually not the most effective ones.
Champions need explicit, structural time allocation, not an add-on to their normal role. Allocate 20 to 30 percent of each champion’s time to champion activities for the first three months, scaling back to 10 percent as the programme matures. Without the time allocation the programme exists on paper and produces nothing.
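The sizing ratios above translate into a simple calculation. This sketch applies the 5 to 10 percent champion ratio and the one-lead-per-10-to-20-champions span; the 400-user example is hypothetical.

```python
import math

def champion_network(user_base: int) -> dict:
    """Size a champion network: 5-10% of initial users as champions,
    one champion lead per 10-20 champions."""
    champions_low = math.ceil(user_base * 0.05)
    champions_high = math.ceil(user_base * 0.10)
    leads_low = math.ceil(champions_low / 20)    # widest span: 1 lead per 20
    leads_high = math.ceil(champions_high / 10)  # tightest span: 1 lead per 10
    return {"champions": (champions_low, champions_high),
            "leads": (leads_low, leads_high)}

sizing = champion_network(400)  # e.g. a 400-user initial rollout
```

For a 400-user rollout this yields 20 to 40 champions and 1 to 4 leads, which is small enough that the time-allocation cost in the next paragraph is a rounding error against the training budget.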
Training spend in the second engagement should be roughly 10x what the first allocated. The Prosci change-management framework lays out the training stages: awareness training that explains why the change is happening, knowledge training that teaches staff how to use the tools, ability training delivered through hands-on practice during pilots, and reinforcement training that supports sustaining change over time. The 10x figure sounds large until set against the cost of a rollout that does not land. The Prosci data on integrated change management is direct: excellent change management correlates with 80 percent project objective achievement, against 14 percent for poor change management. The training spend is the dominant lever.
How should scope posture change?
Smaller pilot, harder kill criteria, defined sunset. The first engagement may have piloted broadly, creating a situation where the pilot became pseudo-production and the transition to full rollout was unclear. The second engagement should define a specific pilot for a single, well-defined business problem, involving a limited number of users (typically 50 to 100, or one team), with a defined timeline of 4 to 8 weeks ending in a go/no-go decision.
Kill criteria should be hard, not aspirational. “The pilot should achieve adoption of 50 percent or more” is a sentence, not a measurement. “At least 70 percent of pilot participants use the tool at least three times per week for 30 days” is testable. “The tool reduces time to complete the identified process by at least 20 percent, measured by comparing same-process steps for pilot participants versus non-participants” is testable. Define the criteria before the pilot begins and commit to enforcing them regardless of how much money has been spent or how close to success the pilot feels.
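The two testable criteria above can be written down as an executable check. The data shape (per-participant weekly usage counts over the 30-day window) and the example figures are assumptions for illustration, not a prescribed measurement design.

```python
def pilot_go_no_go(participants: list, baseline_minutes: float,
                   pilot_minutes: float) -> bool:
    """Apply both hard kill criteria; either failing means no-go.
    `participants` is a list of dicts with "weekly_uses": uses per week
    across the 30-day pilot window (hypothetical data shape)."""
    # Criterion 1: >= 70% of participants used the tool >= 3x/week, every week.
    adopters = sum(1 for p in participants
                   if all(week >= 3 for week in p["weekly_uses"]))
    adoption_ok = adopters / len(participants) >= 0.70
    # Criterion 2: >= 20% reduction in process time vs non-participants.
    time_ok = (baseline_minutes - pilot_minutes) / baseline_minutes >= 0.20
    return adoption_ok and time_ok

# Hypothetical example: 8 of 10 participants meet the usage bar,
# and process time drops from 60 to 45 minutes (a 25% reduction).
pilot = ([{"weekly_uses": [3, 4, 3, 5]} for _ in range(8)]
         + [{"weekly_uses": [1, 0, 2, 1]} for _ in range(2)])
decision = pilot_go_no_go(pilot, baseline_minutes=60, pilot_minutes=45)
```

The value of writing the check this way is that it cannot be softened after the fact: the thresholds are committed before the pilot begins, exactly as the text prescribes.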
A sunset date matters as much as kill criteria. If the pilot meets criteria, when does the firm move to full rollout? If the pilot misses criteria, when does the firm stop and reassess? Without a sunset date, pilots get extended indefinitely because they are “almost there”. The discipline of a defined date prevents that.
How do these shifts work together?
The four postures reinforce each other. Milestone-gated budget gives the steering committee real decision points. Real governance gives the champions network political cover. The champions network creates the adoption that turns the pilot into a meaningful test. The smaller pilot generates the early data that informs the next milestone funding decision.
In failed first engagements, none of the four were in place. The budget was calendar-distributed. Governance was a project manager and a status report. Staff investment was a vendor demo and a slide deck. Scope was the comprehensive transformation. Each gap compounded the others.
In second engagements that succeed, all four are in place from day one. The structure is boring, recognisable, and grounded in the change-management literature. It is exactly the structure that most first engagements skipped because it did not feel necessary at the time.
The next post in the cluster covers what the data actually says about second-engagement success, including the project-size effect from the CHAOS Report and McKinsey’s research on stalled-transformation recovery. The diagnostic audit typically informs which of the four posture shifts deserves the most weight.
If you would like to apply this structure to a second engagement you are scoping, book a conversation.



