A managing director I spoke with recently runs a 50-staff manufacturing services firm in the Midlands. Around 15 percent of her contracts do not renew each year. In the last quarter she has been pitched by three AI vendors, all of whom used “AI”, “machine learning”, and “generative AI” interchangeably in the same proposal. One wants £80,000 to build a custom ML model. Another sells a £400 a month CRM add-on with embedded ML lead scoring. A third pitches a large language model to “read every customer interaction and find the at-risk ones”.
Her real question, the one she has to answer to spend the budget honestly, is “given my churn problem and the data I actually have, which of these three pitches is solving the same problem, and is machine learning even the right shape for what I need?” That is a procurement question, not a technology question. Answering it requires knowing where machine learning fits and where it does not.
What is machine learning?
Machine learning is a subset of artificial intelligence focused on algorithms that learn patterns from your historical data, then apply those patterns to score, predict, or classify new data. The system finds the rules from examples; you provide the examples and the outcomes. That distinguishes it from rule-based automation, where a person writes the rules, and from generative AI, where the system creates new content rather than scoring existing items.
The plain-English version is shorter. If you wanted to predict which customers were likely to leave in the next six months, you could try writing if-then rules: late payments flag risk, low support engagement flags risk. The real pattern in your customer base is usually more subtle, a combination of declining order frequency, falling transaction value, and reduced engagement that does not become obvious until you analyse historical churners. Machine learning finds that combination automatically and ranks each live customer by probability of leaving.
Machine learning also divides into three types worth knowing by name. Supervised learning trains on data where you already have the correct answer for each historical record (churned or stayed, fraudulent or legitimate). Unsupervised learning finds natural groupings in unlabelled data, useful for customer segmentation. Reinforcement learning, where an agent learns by trial and reward, is rare in SME contexts. For a UK SME in 2026, supervised learning is where almost all the practical value lives.
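The supervised case is worth seeing in miniature. The sketch below is a deliberately toy Python illustration: the customer fields, the figures, and the bare-bones logistic regression are all invented to show the shape of the idea, not a production recipe. Real projects use established libraries and far more data.

```python
import math

# Toy historical records: (orders per month, order-value trend, support
# logins per month) -> churned (1) or stayed (0). All values are made up.
history = [
    ((8.0, 0.10, 5.0), 0),
    ((7.5, 0.05, 6.0), 0),
    ((6.0, 0.00, 4.0), 0),
    ((2.0, -0.20, 1.0), 1),
    ((1.5, -0.30, 0.0), 1),
    ((3.0, -0.10, 1.0), 1),
]

def predict(weights, bias, features):
    """Logistic regression: weighted sum of features squashed to 0..1."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(records, steps=2000, lr=0.1):
    """Find the weights from the labelled examples by gradient descent.
    This is the 'learning' in machine learning: nobody writes the rules."""
    n = len(records[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(steps):
        for features, label in records:
            error = predict(weights, bias, features) - label
            bias -= lr * error
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights, bias

weights, bias = train(history)
# Score a live customer: low order frequency, falling value, little engagement.
risk = predict(weights, bias, (2.5, -0.15, 1.0))
print(f"churn probability: {risk:.2f}")
```

The point of the sketch is the division of labour: you supply the historical examples and outcomes, and the algorithm derives the scoring rule, which is exactly the combination-of-subtle-signals pattern described above.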
Why does it matter for your business?
Machine learning matters because, by mid-2025, around 35 to 39 percent of UK SMEs were already running ML-powered tools, up from 25 percent the year before, and another 31 percent were actively considering it. That puts total market engagement near 70 percent. The competitive question has shifted to which problem in your business is genuinely a prediction problem worth automating, and which is being mis-sold as one.
Five SME use cases typically pay back inside 12 months. In customer churn prediction, Random Forest and XGBoost models reach ROC AUC scores of 0.98 on clean data, with payback typically in six to nine months because preventing one or two contract losses covers the model cost. Lead scoring lifts forecast accuracy by around 28 percent in published studies. Invoice classification runs at 90 to 99 percent accuracy and cuts processing time from 10 to 14 minutes per invoice down to one or two. McKinsey reports ML demand forecasting can eliminate up to 50 percent of forecasting errors. Fraud detection at 87 percent accuracy beats the 65 percent baseline of rule-only systems.
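An ROC AUC figure like the 0.98 quoted above is checkable on your own data, and the calculation is simpler than the name suggests: it is the probability that the model scores a randomly chosen churner above a randomly chosen non-churner, so 0.5 is a coin flip and 1.0 is perfect. A minimal sketch, with invented holdout labels and scores:

```python
def roc_auc(labels, scores):
    """ROC AUC by direct definition: fraction of (positive, negative)
    pairs where the positive outscores the negative; ties count half."""
    pos = [s for label, s in zip(labels, scores) if label == 1]
    neg = [s for label, s in zip(labels, scores) if label == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative holdout sample: 1 = churned, 0 = stayed, with model scores.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
scores = [0.91, 0.84, 0.40, 0.35, 0.62, 0.20, 0.15, 0.05]
print(f"ROC AUC: {roc_auc(labels, scores):.2f}")
```

Libraries compute this for you in practice; the value of knowing the definition is that it arms you to ask a vendor for the same number on a holdout sample of your own data rather than theirs.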
What it costs is the part vendors blur. A realistic first project for a UK SME lands between £25,000 and £100,000 all-in, depending on whether you go no-code (£500 to £5,000 a year), use ML embedded in tools you already pay for, or adopt an AutoML platform (DataRobot at a £215,200 median annual spend). Hidden costs (data remediation, retraining, integration, governance, training) consume 40 to 60 percent of stated budgets. That is why the headline ROI cases vendors quote rarely land in practice.
Where will you actually meet it?
You meet machine learning in three places. The first is where the bulk of UK SME adoption is happening: embedded inside business software you already pay for. HubSpot’s Breeze AI ships with ML lead scoring in Professional plans at around £20,400 a year for 25 users. Salesforce Einstein and Agentforce do the same at around £49,500. Xero, FreshBooks, and SAP S/4HANA include ML in invoice routing, anomaly flagging, and forecasting.
Second, in dedicated AutoML platforms aimed at firms that want custom models without a full data science team. DataRobot, Google Cloud AutoML, H2O, and Azure Machine Learning automate algorithm selection, feature engineering, and tuning. The cost band is £100,000 to £400,000 a year for SME-scale deployments. The data preparation work does not go away.
Third, in no-code platforms like Akkio and Obviously AI that let a non-technical operator build predictive models from spreadsheets, typically £500 to £5,000 a year. These are the right entry point for a first project, particularly if you are testing whether ML adds value to a specific workflow before committing to a larger spend. Where you start depends on whether you have technical capability in-house, the volume of your problem, and how customisable the off-the-shelf option needs to be.
When to ask about it, and when to ignore the pitch
Ask about machine learning when you have a prediction, classification, or scoring problem, when historical examples of the outcome exist in your data, and when the underlying logic is too subtle to write as rules. Customer churn, lead qualification, fraud detection, invoice classification, and demand forecasting all fit. The signal: you can describe the outcome, you have hundreds of past examples, and a human reviewer struggles to say why one case differs from another.
Ignore the ML pitch when the problem is actually a content-generation problem (use generative AI), when the rules are well understood and stable (rule-based automation is faster, cheaper, and fully explainable), or when you do not have historical data for the outcome you want to predict. A vendor pitching ML to “automatically draft your client emails” is selling you generative AI in ML clothing. A vendor pitching ML to “route invoices from your 12 known suppliers to the right cost centre” is selling you an expensive answer to a rules problem.
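The invoice-routing case makes the distinction concrete. A minimal sketch, with invented supplier names and cost centres: twelve known suppliers is a lookup table, and no model earns its keep against one.

```python
# Twelve known suppliers is a rules problem: a lookup table, not a model.
# Supplier names and cost centres below are made up for illustration.
ROUTES = {
    "Acme Metals": "COGS-Raw Materials",
    "Midland Freight": "Logistics",
    "BrightSpark Energy": "Utilities",
    # ...one entry per known supplier, 12 in total
}

def route_invoice(supplier: str) -> str:
    """Deterministic, fully explainable, and free to run."""
    try:
        return ROUTES[supplier]
    except KeyError:
        # The honest failure mode: flag unknowns for a human, don't guess.
        return "REVIEW-Unknown supplier"

print(route_invoice("Midland Freight"))
print(route_invoice("New Vendor Ltd"))
```

The moment this stops being the right answer is when the rules stop being stable or enumerable: thousands of suppliers, free-text invoice descriptions, line items that need classifying by content. That is the boundary where an ML pitch starts to deserve a hearing.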
The procurement questions to force in any meeting are simple. Is this rule-based, machine learning, or generative AI? What does the model train on, mine or yours? What is the accuracy on a holdout sample of my data, not the demo data? What is the realistic budget for data preparation? Vendors who answer those four questions cleanly are worth a follow-up.
Related concepts worth knowing
Machine learning sits inside a small constellation of terms owners hear constantly and rarely have time to disentangle. AI is the broader discipline; machine learning is one approach to it, with rule-based symbolic reasoning as another. Generative AI is a different ML subset, designed to create rather than predict. Deep learning is ML using multi-layer neural networks, the lineage of foundation models like GPT-4 and Claude.
Two more terms worth recognising. Transfer learning lets a pre-trained model adapt to your specific problem with hundreds or thousands of examples instead of the millions the original training run required, which is what makes ML feasible for an SME with limited historical data. Fine-tuning is supervised learning applied to a pre-trained model, the route by which a foundation model gets adapted to your industry vocabulary without retraining from scratch.
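The mechanics of fine-tuning are less exotic than the name suggests. A toy Python sketch, with invented weights and data: the "pretrained" weights stand in for a model trained elsewhere on a large generic dataset, and fine-tuning applies the same supervised gradient update you would use from scratch, just starting from those weights instead of from zero.

```python
import math

def sigmoid(z):
    """Squash a score to a 0..1 probability, clamped to avoid overflow."""
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, z))))

def fine_tune(weights, bias, examples, steps=200, lr=0.05):
    """Supervised learning applied to already-trained weights: identical
    updates to training from scratch, but from a far better start."""
    for _ in range(steps):
        for features, label in examples:
            pred = sigmoid(bias + sum(w * x for w, x in zip(weights, features)))
            error = pred - label
            bias -= lr * error
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights, bias

# Hypothetical "pretrained" weights, standing in for a model trained
# elsewhere on millions of examples you never see.
pretrained_w, pretrained_b = [0.8, -0.3], 0.1

# A small labelled set from your own business: (feature vector, outcome).
your_examples = [
    ([1.0, 2.0], 1), ([0.2, -1.0], 0),
    ([0.9, 1.5], 1), ([0.1, -0.5], 0),
]

w, b = fine_tune(list(pretrained_w), pretrained_b, your_examples)
print(f"adapted prediction: {sigmoid(b + w[0] * 1.0 + w[1] * 1.8):.2f}")
```

This is why a few hundred of your own examples can be enough: you are nudging a model that already works, not teaching one from nothing.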
The skills gap determines whether any of this delivers ROI. By 2026, 82 percent of executives report their AI training is sufficient; only 48 percent of employees agree. That 34-point disconnect is the single best predictor of ROI failure. The technology is available and affordable. The bottleneck is whether the people using it have role-specific training mapped to their daily workflow. If you are scoping an ML project, budget for workforce adaptation alongside the licence, or expect the model output to sit unused.
If you would value a candid conversation about which of your problems is genuinely a machine learning problem and which is being mis-sold, book a conversation.