AI is restructuring the consulting model: the 'obelisk' reshape and what it means for boutiques

TL;DR

Harvard Business Review's 2025 framing of the new consulting structure as an 'obelisk' (fewer layers, smaller teams, three core roles: AI facilitator, engagement architect, client leader) names a structural change, not just a tool deployment. McKinsey's internal copilot Lilli is used by over 70 percent of its 45,000 employees, averaging 17 queries per user per week. The harder question for a 28-person boutique is whether to restructure the firm's resourcing model around AI before clients ask why three analysts are needed when one engagement architect plus a copilot can deliver the same output.

Key takeaways

- McKinsey's 2025 Global Survey shows 88 percent of organisations regularly using AI in at least one business function. Only one third have begun to scale. The two thirds in the middle are running pilots without converting them into firm-wide capability.
- McKinsey's internal copilot Lilli is the public benchmark for what scaling looks like at the top: 70 percent of 45,000 employees, 17 queries per week, RAG over 100 years of internal knowledge.
- Harvard Business Review's "obelisk" framing names the new structure: fewer layers, smaller teams, three core roles (AI facilitator, engagement architect, client leader). The shift is structural, not just tooling.
- Three use cases work at 10 to 50 person boutique scale today: AI-assisted proposal generation (1 to 2 week payback), AI-augmented research (1 to 4 week payback), internal resource optimisation (3 to 6 month payback).
- The light-touch regulatory environment (no sectoral regulator equivalent to the SRA or FCA) creates freedom to experiment plus reputational risk. The four real compliance gates are client confidentiality, professional liability, IP ownership of AI-generated work, and labour law on staffing changes.

The managing partner of a 28-person strategy boutique is looking at next year’s resourcing plan. The firm has six analysts, four senior consultants, and three partners. The plan assumes hiring two more analysts in the spring. He has been told, casually, that two of his current analysts are using ChatGPT, Claude, and a custom GPT for 80 percent of their initial market research output.

The casual conversation is the moment the plan stopped working. The plan was written for a firm structure that makes sense if junior analysts are doing the research themselves. His firm’s junior analysts are mostly editing AI output. The next two hires, on that plan, are buying capacity the firm doesn’t need.

How much of the consulting market has actually moved?

McKinsey’s 2025 Global Survey on AI shows 88 percent of organisations regularly using AI in at least one business function, up from 78 percent a year ago. The survey covers all industries; the consulting figure tracks the cross-industry average closely. Most consulting firms have AI in at least one workflow now. Only one in three has begun to scale.

The pattern matters because consulting is one of the most knowledge-intensive sectors in the survey. AI’s first impact is on knowledge work: research, synthesis, drafting, analysis. Consulting firms are the first sector where AI affects the core deliverable, rather than only the back office. Other sectors get AI in administrative workflows; consulting gets AI in the work itself.

McKinsey’s internal copilot Lilli is the public benchmark. Lilli is used by over 70 percent of McKinsey’s 45,000 employees. Each user averages 17 queries per week. Lilli synthesises 100 years of internal knowledge through retrieval-augmented generation, materially cutting research and planning time across the firm. The largest consulting firm has restructured its internal research workflow around AI. Smaller firms competing for the same talent and clients face the question of whether to follow.

The McKinsey survey’s scaling figure is the real signal. 88 percent of firms have AI in some workflow, but only one third are scaling it. The two thirds in the middle are running pilots without converting them into firm-wide capability. That gap is where the real competitive risk for boutiques sits.

What is the HBR “obelisk” framing actually saying?

Harvard Business Review’s 2025 piece on AI changing the structure of consulting firms uses the term “obelisk” for the new model: fewer layers, smaller teams, three core roles (AI facilitator, engagement architect, client leader). The framing matters because it names a structural change, not just a tool deployment. The firm’s staffing model is what AI is reshaping.

The traditional consulting firm pyramid has many junior analysts, fewer senior consultants, and a small number of partners. The model assumes junior analysts do most of the research and modelling. Senior consultants synthesise that work into client-ready output. Partners cultivate client relationships and manage the firm.

The obelisk model collapses the bottom of the pyramid. AI does most of the initial research and modelling work. The firm needs an AI facilitator who is fluent in AI tools and can configure them effectively. The engagement architect leads projects and interprets AI output for client-fit. The client leader cultivates relationships and brings in work. Junior analyst work, in the traditional sense, is mostly absorbed.

The implication for boutiques is sharper than for large firms. A 28-person boutique on the traditional pyramid has 6 analysts, 4 seniors, 3 partners, plus support. On the obelisk model, the same revenue can be served with 1 to 2 AI facilitators, 4 engagement architects, 3 client leaders. That is a different firm, with different hiring, and different career paths for the people already in it.

Three use cases that work at boutique scale today

Three use cases produce measurable returns at 10 to 50 person consulting firms today: AI-assisted RFP and proposal generation, which compresses bid prep from days to hours; AI-augmented research and analytics for client deliverables, with mandatory citation verification; and internal resource and utilisation optimisation, where a 5 to 10 percent improvement at this scale is worth £50,000 to £150,000 a year.

AI-assisted proposal generation is the highest-leverage internal use case. A boutique responding to 10 RFPs per month, each saving 10 to 15 hours of work at £80 to £120 per hour for consultant time, recovers £8,000 to £18,000 monthly. Implementation cost is £3,000 to £10,000. Payback is 1 to 2 weeks. The reason this works at boutique scale is that proposal work is high-volume, structured (repeated bid templates), and benefits hugely from a curated internal knowledge base of prior case studies, methodologies, and team bios.

AI-augmented research is the second use case, and the one with the structural-change implication most boutiques underestimate. A 10 to 50 person firm with 5 to 10 full-time analysts saving 5 to 10 hours per analyst per week recovers 25 to 100 hours weekly. At £50 to £80 per hour, that is £65,000 to £400,000 annually. The work the analysts were doing on those hours is now being done by AI; the firm has to decide whether to redirect analyst time to higher-value work, or to reduce headcount on the next hiring round.
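The research arithmetic works the same way. The analyst counts, hours, and rates below are the ranges from the paragraph above; the high end works out to £416,000, which the text rounds to £400,000:

```python
# Annualised value of AI-augmented research time, using the ranges above.

ANALYSTS = (5, 10)                          # low, high
HOURS_SAVED_PER_ANALYST_PER_WEEK = (5, 10)  # low, high
RATE_GBP_PER_HOUR = (50, 80)                # low, high
WEEKS_PER_YEAR = 52

weekly_hours_low = ANALYSTS[0] * HOURS_SAVED_PER_ANALYST_PER_WEEK[0]
weekly_hours_high = ANALYSTS[1] * HOURS_SAVED_PER_ANALYST_PER_WEEK[1]

annual_low = weekly_hours_low * RATE_GBP_PER_HOUR[0] * WEEKS_PER_YEAR
annual_high = weekly_hours_high * RATE_GBP_PER_HOUR[1] * WEEKS_PER_YEAR

print(f"Weekly hours recovered: {weekly_hours_low} to {weekly_hours_high}")
print(f"Annual value: £{annual_low:,} to £{annual_high:,}")
```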

Internal resource and utilisation optimisation is the third use case. A 5 percent improvement in utilisation at this scale recovers 5 to 10 billable days per person per year, worth £50,000 to £150,000 annually across the firm. Implementation cost of £5,000 to £15,000 pays back in 3 to 6 months. The shape of the win differs from proposals or research: it is steadier and quieter, but it shows up in margin within the first quarter.
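One way the firm-wide figure can be reconstructed from the per-person one: the text gives only the recovered-days and annual-benefit ranges, so the billable headcount of 20 and the day rates of £500 to £750 below are assumptions for illustration:

```python
# Sketch of the utilisation maths. BILLABLE_STAFF and DAY_RATE_GBP are
# assumptions, not figures from the text; only the recovered-days and
# benefit ranges come from the article.

BILLABLE_STAFF = 20                 # assumed billable headcount at a 28-person firm
RECOVERED_DAYS_PER_PERSON = (5, 10) # from the text: low, high
DAY_RATE_GBP = (500, 750)           # assumed day rates: low, high

benefit_low = BILLABLE_STAFF * RECOVERED_DAYS_PER_PERSON[0] * DAY_RATE_GBP[0]
benefit_high = BILLABLE_STAFF * RECOVERED_DAYS_PER_PERSON[1] * DAY_RATE_GBP[1]

print(f"Annual benefit: £{benefit_low:,} to £{benefit_high:,}")
```

Under those assumptions the firm-wide benefit spans exactly the £50,000 to £150,000 range quoted above.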

What does the structural threat to junior analysts actually look like?

The junior analyst role in the traditional consulting model exists to do work that AI is now better at. Initial market research, competitor analysis, financial modelling, deck-building from templates. The most-cited HBR finding is that AI is automating tasks “traditionally handled by junior consultants”. The impact on the up-or-out career track is what most boutique partners avoid talking about openly.

The honest version of the conversation goes like this. The two analysts already using AI for 80 percent of their initial output are doing it because it is faster and the work is good enough to ship after light editing. They are also, quietly, becoming engagement architects ahead of the firm’s structural readiness. They are doing the AI configuration work and the synthesis work, rather than the analyst work the firm hired them for.

The next two hires the boutique is planning for spring are buying capacity that the firm’s current analysts can already deliver with AI augmentation. Hiring them puts the firm on a trajectory the data does not support: more junior analysts on a model where junior analyst output is increasingly automated. A different move, investing in tooling, training, and career paths for the existing team, puts the firm on a trajectory toward the obelisk.

The real risk is that competitors who restructure around AI win the same bids at lower cost or higher margin, and the firm loses on one of those axes within the next 24 months.

What does the light-touch regulatory environment actually require?

Management consulting is one of the least regulated of the major professional service sectors. UK voluntary membership through the MCA. US voluntary association through AMCF or IMC. No mandatory regulator with sectoral AI guidance equivalent to the SRA, ICAEW, or FCA. That creates freedom to experiment, with reputational risk if AI-assisted work underperforms.

The four real compliance gates that do apply are client confidentiality, professional liability, IP ownership, and labour law. Client confidentiality means no client data into public AI tools without contractual safeguards. Professional liability means consultants must validate AI output before delivery, and indemnity insurance may not cover claims where the consultant relied blindly on AI. IP ownership means contracts must specify whether AI-generated insights are firm IP, client deliverable, or shared.

Labour law is the gate boutiques most often miss. A firm that uses AI agents to replace junior consultant roles is making a structural change to its workforce, not just a tooling change. Redundancy procedures in the UK, employment law in the US, and the firm’s own employment contracts all apply if AI deployment results in redundancies.

Disclosure sits alongside those four gates. A firm that markets AI-assisted delivery as if it were entirely human-led (or vice versa) creates reputational exposure if clients perceive a mismatch. The trend in the leading boutiques is to be explicit about AI augmentation in scopes of work and engagement letters, partly because clients increasingly ask, partly because the disclosure shifts the firm’s marketing from “we have humans” to “we have humans plus the right AI augmentation”.

What does the maths look like for a 28-person boutique?

AI-assisted proposal generation pays back in 1 to 2 weeks at typical bid volumes. AI-augmented research pays back in 1 to 4 weeks at typical analyst utilisation. Internal resource optimisation pays back in 3 to 6 months. The pure financial maths is overwhelmingly in favour. The harder maths is the firm-structure decision that follows from those numbers.

A 28-person boutique deploying AI on proposals, research, and resource optimisation could plausibly cut two analyst hires from next year’s plan and reinvest the saved cost in tooling, training, and a senior AI facilitator hire. The financial outcome is similar revenue at higher margin. The talent outcome is that the existing analysts get the chance to grow into engagement architects, with a clearer career path than the traditional pyramid offered.

The harder version of the maths involves the analysts already at the firm. Two of them are already operating partially in the new role. Four are not. The next 12 months are about giving the four a path into the new role, or accepting that some won’t make the transition. That is the conversation the partner with the resourcing plan on his desk has not yet had with his co-partners.

The caveats. The numbers above assume client demand stays elastic. If clients want the same deliverables faster at lower cost, the firm captures the benefit. If clients want higher-quality deliverables at the same cost, the firm captures the benefit differently (deeper analysis, more iterations, sharper synthesis). Both are workable. The firm that doesn’t make the transition tends to lose on price first and quality second.

What is the actual next move?

The next move is to deploy AI on the firm’s own proposal pipeline and internal resource optimisation first, and use the structural learning from those internal pilots to inform the bigger decision about hiring, training, and firm structure. Internal pilots produce honest data; client pilots produce political conversations.

For most 28-person boutiques, the proposal pilot looks like this. Pick a curated internal knowledge base (prior case studies, methodology documents, team bios) and configure a retrieval-augmented generation tool against it. Run the next 10 RFPs through the tool and compare bid prep time and win rate against the previous 10 RFPs. Measure the time saved and the quality of the output. Decide on broader rollout from real internal data.
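A minimal sketch of that measurement step, assuming each RFP is logged as a (prep hours, won) pair. The records below are hypothetical placeholders, not real bid data:

```python
# Compare bid prep time and win rate for the 10 RFPs before the tool
# against the 10 run through it. Each record is a (prep_hours, won)
# tuple; the numbers here are hypothetical placeholders.
from statistics import mean

before = [(22, False), (18, True), (25, False), (20, False), (19, True),
          (24, False), (21, False), (17, True), (23, False), (20, False)]
after  = [(9, True), (11, False), (8, False), (10, True), (12, False),
          (9, False), (11, True), (10, False), (8, True), (9, False)]

def summarise(rfps):
    """Return (mean prep hours, win rate) for a cohort of RFP records."""
    hours = mean(h for h, _ in rfps)
    win_rate = sum(won for _, won in rfps) / len(rfps)
    return hours, win_rate

for label, cohort in (("before", before), ("after", after)):
    hours, win_rate = summarise(cohort)
    print(f"{label}: {hours:.1f} prep hours/RFP, {win_rate:.0%} win rate")
```

On these placeholder numbers, prep time drops from 20.9 to 9.7 hours per bid while the win rate holds or improves; the point of the pilot is to see whether the firm's real numbers move the same way.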

The managing partner with the resourcing plan on his desk is at the right moment. The two analysts already using AI for 80 percent of their initial output are showing him the future of the firm in real time. The plan needs to change from “hire two more analysts in spring” to “invest in the obelisk model and retrain the existing six analysts as engagement architects over the next 12 months”. The hiring decision is the visible part. The firm-structure decision is the harder part underneath it.

If you would like to walk through this for your firm specifically, book a conversation.

Sources

  • McKinsey 2025 Global Survey on AI: 88 percent regular AI use across all industries; nearly half of $5bn-plus revenue firms in scaling phase vs 29 percent of under-$100m firms; only one third of all firms scaling. Source.
  • Harvard Business Review 2025 on AI changing the structure of consulting firms: AI automating tasks "traditionally handled by junior consultants"; the "obelisk" model with three core roles. Source.
  • SmartDev on AI in professional services: McKinsey's Lilli synthesises 100 years of internal knowledge through RAG; used by over 70 percent of 45,000 employees; averaging 17 queries per week. Source.
  • Stanford HAI (2024). The 2024 AI Index Report. Industry-by-industry adoption and performance benchmarks for AI use cases. Source.
  • Boston Consulting Group (2025). Are You Generating Value from AI? The Widening Gap. Cross-sector evidence that 5 per cent of future-built firms capture disproportionate value from AI. Source.
  • MIT CISR (Woerner, Sebastian, Weill and Kaganer, 2025). Grow Enterprise AI Maturity for Bottom-Line Impact. Stage-by-stage maturity benchmark applied across sectors. Source.
  • Brynjolfsson, E., Li, D. and Raymond, L. (2023). Generative AI at Work, NBER Working Paper 31161. Empirical productivity study showing 14 per cent average gain with 34 per cent for low-skilled workers. Source.

Frequently asked questions

What is the 'obelisk' model and why should I care about it?

Harvard Business Review's 2025 framing of the new consulting firm structure: fewer layers, smaller teams, three core roles (AI facilitator, engagement architect, client leader). The traditional consulting pyramid has many junior analysts, fewer senior consultants, a small number of partners. The obelisk collapses the bottom because AI now does most of the initial research and modelling work.

Should I deploy AI on client work or internal work first?

Internal work first. Deploy AI on the firm's own proposal pipeline and internal resource optimisation, and use the structural learning from those internal pilots to inform the bigger decision about hiring and firm structure. Internal pilots produce honest data; client pilots produce political conversations and uneven adoption.

What's the realistic ROI on AI-assisted proposal generation at boutique scale?

1 to 2 weeks. A boutique responding to 10 RFPs per month, each saving 10 to 15 hours at £80 to £120 per hour for consultant time, recovers £8,000 to £18,000 monthly. Implementation cost is £3,000 to £10,000. Proposal work is high-volume, structured, and benefits hugely from a curated internal knowledge base of prior case studies and methodology documents.

What does AI restructuring mean for my junior analyst team?

The junior analyst role in the traditional consulting model exists to do work that AI now does well. Initial research, competitor analysis, basic modelling. The two-track decision is whether to give existing analysts a path into the engagement architect role, or to accept that the next hiring round shifts toward AI facilitator and engagement architect roles directly. Both are workable. Doing neither is not.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
