A 12-person legal practice piloted Spellbook on standard service agreements and cut draft turnaround from three hours to forty-five minutes per agreement. The same practice quietly rolled back ChatGPT for client-facing drafts after a partner spotted a fabricated indemnity clause two days before signature. Both happened within ninety days of each other. Both were "AI for contracts." The difference between the win and the near-miss was where the AI sat in the workflow.
This is the contract-AI pattern most owners do not see until they have lived through both halves. The tools share a category. The risks do not. Treating extraction and drafting as one decision is the most expensive mistake an SME practice can make in this corner of AI deployment.
Why does AI work for extraction and fail for drafting?
Extraction is a structured task. The AI is matching patterns it has seen many times: payment terms, renewal dates, liability caps, termination conditions. Accuracy lands at 85 to 95 percent on structured fields, and errors are catchable in review because the source document is right there to verify against. Drafting is unstructured generation. The AI produces fluent text that reads as correct, sometimes invents definitions, sometimes misstates legal principles, and the cost of verifying it is high.
White & Case documented a 50 percent reduction in contract review time using AI on complex due diligence. Luminance clients report 30 to 50 percent cycle-time reductions on extraction-heavy work. JPMorgan Chase has published a case study on 360,000 legal hours saved annually through AI contract analysis. These are extraction-led numbers, not drafting-led.
The drafting story looks different. Documented 2025 incidents include lawyers submitting AI-drafted briefs with fabricated case citations, resulting in sanctions, bar referrals, and financial penalties. Courts hold the practitioner responsible regardless of whether AI drafted the filing. The vendor is not the defendant. The firm is.
What does the integration trap look like?
Most contract AI deployments collapse at the integration layer, not the model layer. If a practitioner has to export a contract from Word, upload it to a separate platform, wait for processing, then copy results back into Word, the friction will exceed the time saved within four to six weeks. Tools that survive deployment are workflow-native: Spellbook lives inside Word, Ironclad and Robin AI sit in Salesforce, Luminance integrates with the firm's contract repository.
The cost of this is invisible until the audit. A team running a non-integrated tool will use it for the first ten contracts, abandon it for the next thirty, and the owner will not see the abandonment in a dashboard. They will see it when a partner asks why no contract has been put through Tool X for the last quarter.
Workflow integration matters more than feature breadth at SME scale. A tool with 80 percent of the features and 90 percent of the workflow fit will outperform a more comprehensive tool with 60 percent fit, every time, in a 5 to 20 person practice.
Which tools fit a 5 to 20 person practice?
Spellbook is the sweet spot for a drafting copilot at £30 to £100 per user per month, integrated with Word and aimed at standard contract types. Luminance and LegalOn cover review and analysis at £100 to £500 per month. Robin AI combines AI review with managed legal services from £500 per month. Enterprise platforms such as Harvey and Ironclad start at £500 to £2,000 per user per month and rarely make sense under twenty fee-earners.
For accountancy firms and consulting practices that need occasional contract review, ChatGPT Enterprise with disciplined review protocols or point-use of Spellbook is adequate. The tier above (Luminance, Robin AI) becomes economical at 100-plus contracts per year, where the time savings cover the platform fee within a quarter.
The decision is rarely about which tool is best in isolation. It is about which tool fits the workflow already running, and which one the practitioners will actually open three times a day instead of twice a month.
What protocol holds up under regulator scrutiny?
Every contract reviewed or drafted with AI assistance gets a senior-practitioner review before it leaves the firm. The review is documented with a simple checklist: which tool was used, the date, the reviewer's name, what corrections were made to the AI output, jurisdiction-specific confirmation, final sign-off. Five lines on a template. Stored against the matter file. Available if the SRA, the FCA, or a client raises a question.
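The five-line record described above can be kept in any format; a minimal sketch of one way to structure it is below. The field names and values are illustrative, not a regulator-mandated schema:

```python
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical structure for the five-line review record described above.
# Field names are illustrative; adapt them to the firm's matter-file system.
@dataclass
class AIReviewRecord:
    tool_used: str              # which AI tool produced or reviewed the draft
    review_date: date           # when the senior review took place
    reviewer: str               # senior practitioner who signed off
    corrections_made: str       # what was changed in the AI output
    jurisdiction_confirmed: bool  # jurisdiction-specific confirmation done
    signed_off: bool = False    # final sign-off before the contract leaves

record = AIReviewRecord(
    tool_used="Spellbook",
    review_date=date(2025, 3, 14),
    reviewer="J. Partner",
    corrections_made="Amended indemnity cap; corrected governing-law clause",
    jurisdiction_confirmed=True,
    signed_off=True,
)

# Stored against the matter file, e.g. as a JSON entry in the DMS
print(asdict(record))
```

The point is not the tooling; it is that each entry is dated, attributable, and retrievable if the SRA, the FCA, or a client asks.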
The protocol exists because the regulators have not issued specific AI guidance, but the professional standards that govern the work remain unchanged. The SRA Code of Conduct requires competence and acting in the client's best interests. The FCA's Senior Managers and Certification Regime requires senior management to understand the tools the firm uses. Both apply whether or not AI is in the loop.
The Law Society of Ireland's GenAI guidance lists document review and gap analysis as unsuitable for unapproved AI without verification. Creating checklists and summarising materials are listed as acceptable lower-risk uses. The framing applies to UK practice as a working standard, even though the regulator has not codified it.
What is the realistic ROI for a smaller practice?
For high-volume contract environments (legal practices processing 100-plus contracts a year, in-house teams managing large vendor portfolios), AI delivers 30 to 50 percent time savings on review and 40 to 60 percent on drafting. At loaded legal rates of £150 to £300 per hour, that translates to £10,000 to £30,000 in recovered time for a 10-person practice and £50,000-plus for a 30-person firm.
For a 12-person practice piloting Spellbook on service agreements, the case study figures land at three hours down to forty-five minutes of drafting per agreement; a 15-minute senior review brings the net turnaround to an hour. Six hours per week firm-wide of recovered time. Implementation runs around six weeks and £1,500 in software and training. Payback inside two months. Lower-volume practices (5 to 15 contracts a month) see £600 to £1,200 a week of recovered time, with a three to six week payback on £1,000 to £3,000 a year of tooling.
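The payback arithmetic can be checked on the back of an envelope. The sketch below uses the article's own figures (6 hours/week recovered, a conservative £150/hour loaded rate, £1,500 up-front); the helper functions are illustrative, not part of any tool:

```python
# Back-of-the-envelope payback calculation using the article's figures.
# All inputs are illustrative; the £150/hour rate is the low end of the
# £150-300 loaded-rate range cited above.

def weekly_saving(hours_saved_per_week: float, loaded_rate_gbp: float) -> float:
    """Value of recovered fee-earner time per week."""
    return hours_saved_per_week * loaded_rate_gbp

def payback_weeks(upfront_cost_gbp: float, weekly_saving_gbp: float) -> float:
    """Weeks of steady-state savings needed to cover the up-front cost."""
    return upfront_cost_gbp / weekly_saving_gbp

saving = weekly_saving(6, 150)        # 6 hrs/week at £150/hr -> £900/week
weeks = payback_weeks(1_500, saving)  # £1,500 software and training

print(f"£{saving:.0f}/week recovered, payback in {weeks:.1f} weeks of steady use")
```

Steady-state payback lands well under a month; add the six-week implementation period before the savings start flowing and the "inside two months" figure reconciles.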
The hidden ROI is risk reduction. Practices using AI to systematically review contracts catch more errors and missed provisions than time-pressured manual review allows. No published study quantifies the effect, but the logic is straightforward: systematic review catches issues that rushed review misses.
What is the misconception worth correcting first?
The most expensive belief an owner can hold is that AI contract review is solved. It is not. The available tools are reliably good at extracting structured information from existing contracts and reliably bad at producing nuanced legal analysis or jurisdiction-specific drafting that is ready for client delivery. Treating the tools as solved is what generates the sanction cases.
The second-most expensive belief is that consumer ChatGPT is fit for client-facing contract work. It is professionally risky. Free or consumer-grade AI has no audit trail, no legal-specific training, no data residency controls, and no liability if the output is wrong. For client-facing contract work, use legal-specific tools with audit trails and liability frameworks. For internal idea generation, consumer AI is acceptable with disciplined review.
If you are working out where contract AI fits in a 5 to 20 person practice and how to keep the regulator side clean, the protocol is the part that matters most and the part most vendors will not write down for you. Book a conversation.



