The managing director of a 90-person consulting firm has 12 hires to make this year and a recruiting bill that already runs to £75,600 in agency fees alone. Cost-per-hire including internal time sits around £6,125. Her HR coordinator spends six to eight hours a week on interview scheduling and another five hours fielding routine policy questions. An agency rep just demoed an AI tool that promises to halve time-to-hire. She has £6,000 of approved budget and a finance director asking whether the agency relationship still makes sense.
Her question is the right one. Not whether AI belongs in HR, but which slice of those people-ops costs to redirect first, what disclosure she now owes candidates under the ICO’s March 2026 paper, and where the line sits between a recruiter using AI and a recruiter rubber-stamping it. HR is the function with the densest UK regulatory hooks in the business and the sharpest split between deployable jobs and human-only ones. The useful framing is seven jobs to deploy in order, five to keep with the HR team, and a governance design that comes before the platform choice.
What jobs does AI do well in HR today?
Seven jobs have hit the maturity threshold at this size band. Resume screening cuts initial review time by 60 to 80 percent on a LinkedIn Recruiter, Recruitee, or Ashby seat, with SHRM data showing that 89 percent of recruiters report efficiency gains. JD generation produces consistently structured job descriptions in minutes. Interview scheduling on Paradox or Pin frees the 38 percent of recruiter time that calendar coordination consumes. The remaining four jobs attack different bottlenecks.
Salary benchmarking on Payscale Smart Price collapses hours of role classification into minutes. Onboarding automation shows 30 to 50 percent faster ramp, with Vonage’s Intelligent Workspace cutting new-agent ramp from three months to four weeks. HR policy chatbots on Wonderchat deflect 50 to 70 percent of routine inquiries with sub-five-minute setup. Sentiment analysis on Qualtrics, CultureAmp, or OfficeVibe surfaces engagement drift before turnover, and Gallup links measured engagement work to 21 to 51 percent lower turnover. The numbers stack quickly at SME scale because each job sits on a different bottleneck.
Where are UK SMEs actually using these tools?
The platform stack at this size band has settled into a recognisable shape. HiBob (London-founded, OpenAI partnership, £6 to £10 per employee a month) anchors the UK end and publishes an AI Policy Template usable as a governance frame. Personio (£40 to £60 per employee a month) bakes UK GDPR, Equality Act, and EU AI Act compliance into core architecture. Factorial starts at £5.40 per employee, BambooHR at £5 to £8 for one-person teams.
Around that core, the specialists map cleanly to jobs. Recruitee (£150 to £300 per posting) and Workable (£99 a month) are the cheap ATS entry points; LinkedIn Recruiter (£450 to £550 per recruiter a month) suits firms already on the platform. Paradox and Pin own scheduling at £1,500 to £5,000 a month. Wonderchat handles the policy chatbot at £200 to £400. StaffNow runs a UK-founded hybrid AI-plus-human screening model at £500 to £1,500 per placement and is explicit that AI alone falters at empathy, creativity, and cultural fit. Workday and Eightfold sit at the enterprise ceiling and are useful here mainly as litigation reference points.
Where does AI still fall short in HR?
Five jobs remain genuinely human work. Final hiring decisions sit under UK GDPR Article 22; the ICO’s March 2026 paper found many employers running effectively automated decisions because recruiters followed AI rankings without genuine override. Redundancy is non-automatable; the digital-only HR1 form mandatory from 1 December 2025 requires granular human reasoning per case. Grievance handling is harder than it was, partly because employees increasingly use ChatGPT to draft submissions citing fabricated case law.
Neurodiversity adjustments under the Equality Act 2010 require neurodivergent employees in the design loop, not just the deployment loop. Cultural-fit assessment amplifies hiring bias and should be replaced with a cultural-add framework built on measurable behavioural criteria. The sixth boundary, and the quiet failure mode the brochure will not mention, is bias mirroring. University of Washington research found that humans mirror AI rankings 66 percent of the time at moderate bias and 90 percent at severe bias. ChatGPT systematically rates resumes differently by perceived race and gender and downgrades disability-implying credentials. Mobley v Workday, allowed to proceed in April 2026, alleges that employment gaps and medical leave functioned as disparate-impact proxies. The fix is a documented bias audit before deployment, not vendor self-attestation.
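A documented bias audit does not need exotic tooling to start. One common first pass is a selection-rate comparison against the EEOC-style four-fifths rule: if any group's shortlisting rate falls below 80 percent of the best-performing group's rate, that flags potential disparate impact worth investigating. A minimal sketch, with illustrative group names and figures (nothing here comes from a real candidate population):

```python
# Adverse-impact (four-fifths rule) check on AI shortlisting outcomes.
# All group names and figures below are illustrative, not real data.

def selection_rate(selected: int, applicants: int) -> float:
    """Proportion of a group's applicants the AI shortlisted."""
    return selected / applicants

def adverse_impact_ratios(outcomes: dict) -> dict:
    """Compare each group's selection rate to the highest group's rate.
    A ratio below 0.8 flags potential disparate impact."""
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    benchmark = max(rates.values())
    return {g: round(r / benchmark, 2) for g, r in rates.items()}

# Hypothetical pilot numbers: (shortlisted, applicants) per group.
outcomes = {"group_a": (30, 100), "group_b": (18, 100)}
ratios = adverse_impact_ratios(outcomes)
flags = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)  # group_b, at 0.6, falls below the 0.8 threshold
```

A ratio below 0.8 is a flag, not a verdict; the point is that the check is cheap enough to run on every pilot cohort and to keep as a dated audit record.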
What does a 90-day starter rollout look like?
Three phases. Weeks 1 to 4 are diagnostic and pilot: baseline current time-to-hire (typically 40 to 50 days) and cost-per-hire (typically £6,125), pick the highest-friction recruiting use case, and run a single-tool pilot at £400 to £450 a month on Recruitee or LinkedIn Recruiter. Communicate AI use to candidates in writing per ICO expectations. The ICO does not accept “we use technology to screen applications” as sufficient disclosure.
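The week-1 baseline is spreadsheet work, but writing the formula down keeps it consistent across quarters. A minimal sketch of fully loaded cost-per-hire; the function name and all input figures are hypothetical, not the firm's actual numbers:

```python
# Week-1 baseline: fully loaded cost-per-hire.
# Inputs are hypothetical; substitute real agency invoices,
# internal recruiting hours, and the year's hire count.

def cost_per_hire(agency_fees: float, internal_hours: float,
                  blended_rate: float, hires: int) -> float:
    """Agency spend plus internal recruiting time at a blended
    hourly rate, divided across the year's hires."""
    return (agency_fees + internal_hours * blended_rate) / hires

# Example: £60,000 agency fees, 300 internal hours at £40/hour, 12 hires.
print(round(cost_per_hire(60000, 300, 40, 12), 2))  # 6000.0
```

Run the same formula again at week 12 with the pilot-period numbers and the before/after comparison writes itself.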
Weeks 5 to 8 add the second use case (scheduling or a Wonderchat policy chatbot at £200 a month) and stand up the AI governance frame. That covers data processing agreements with each vendor, documented bias audits before deployment, transparency to candidates, human override authority, and four to eight hours of HR training per user. HiBob’s published AI Policy Template is a usable starting point. Weeks 9 to 12 introduce sentiment analysis on CultureAmp or Qualtrics, or skills-gap analysis on Personio. Total 90-day spend lands at £3,050 to £5,450, including HR coordinator setup time at a £40-an-hour blended rate. Year 1 projected benefit: £21,000 to £29,000 from time-to-hire compression, agency-fee reduction, and policy-question deflection. Payback inside three months. The prior post on where to apply AI first covers how to land on HR rather than another function.
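The payback claim is worth sanity-checking against your own figures before it goes in front of the finance director. A quick sketch, assuming the annual benefit accrues evenly across the year (the inputs are the ranges quoted above):

```python
# Back-of-envelope payback on the 90-day rollout, assuming the
# annual benefit accrues evenly month by month. Inputs are the
# post's own estimated ranges; swap in real numbers.

def payback_months(spend_90d: float, annual_benefit: float) -> float:
    """Months until cumulative monthly benefit covers the 90-day spend."""
    monthly_benefit = annual_benefit / 12
    return spend_90d / monthly_benefit

# Worst case: top of the spend range, bottom of the benefit range.
worst = payback_months(5450, 21000)   # ~3.1 months
# Best case: bottom of the spend range, top of the benefit range.
best = payback_months(3050, 29000)    # ~1.3 months
print(round(worst, 1), round(best, 1))
```

Note that pairing the top of the spend range with the bottom of the benefit range lands just over the three-month mark, so the inside-three-months claim assumes at least a mid-range outcome.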
What should you ask an HR AI vendor before signing?
Six procurement questions separate a credible vendor from a pitch. First, show us your bias-testing methodology and run a bias audit on our candidate population before deployment; vendor self-attestation is not enough, and disparate-impact testing across protected characteristics needs documenting. Second, how does the platform meet the ICO’s meaningful-human-oversight standard, and what audit trail does it produce showing recruiters genuinely overrode AI shortlists? Third, where is the data processed, and is the DPA UK GDPR-compliant, including sub-processor obligations?
Fourth, what proxy variables does the scoring model use, and can they be audited and removed if disparate impact surfaces? Mobley v Workday is the boundary marker; employment gaps, medical leave, education tier, and postcode are the variables that have drawn litigation. Fifth, does the platform meet EU AI Act high-risk obligations from 2 August 2026? Even UK-only firms benefit from buying against this standard. Sixth, what is your rework rate when AI outputs miss? Workday research found nearly 40 percent of AI time savings are offset by rework when outputs are inaccurate, and a vendor unwilling to share their actual figure is telling you something. Pair these six with the prior post on hiring an operational integrator before signing anything, because AI in HR magnifies whatever hiring discipline the firm already runs.
If you would like a second pair of eyes on which two jobs to start with, and on whether the governance design holds up against the ICO’s March 2026 expectations, book a conversation.