A B2B services firm I was talking to last month had a £45,000 quote on the table for a custom lead-scoring model. Eighteen months of HubSpot history, a data science consultancy, three months of build, a few thousand a year on retraining. The sales director was about to sign. Then his HubSpot account manager mentioned that predictive lead scoring was already in the platform he was paying £1,800 a month for. Toggle, not project.
That is the moment AutoML stops being abstract for many owners. The question is not “do we want machine learning?”, it is “what am I actually paying £45,000 for, and do I already have a version of it?”.
What is AutoML?
AutoML, automated machine learning, is software that automates the technical engineering of building a prediction model. It handles data cleaning, feature creation, algorithm selection, hyperparameter tuning, ensembling, and validation, the six steps that historically consumed most of a data scientist’s time on a project. The base ingredient is your historical data. The output is a working model and a leaderboard showing how well it performs.
What it does not do is the work either side of that pipeline. AutoML does not define the problem, repair poor-quality source data in your business systems, decide what counts as a good outcome, or judge whether a prediction should drive automated action. The human role moves from “build the model” to “specify the problem and decide what to do with the answer”. Three to six months of specialist work compresses to roughly two weeks once the data is ready.
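To make the pipeline concrete, here is a deliberately tiny sketch of what an AutoML platform does at industrial scale: try several candidate algorithms, tune each one’s hyperparameters with cross-validation, and rank the results into a leaderboard. This uses scikit-learn and synthetic data purely as an illustration; real platforms search far larger model and parameter spaces, and all the specifics below (the two candidate models, the grids, the scoring metric) are illustrative assumptions, not a recipe.

```python
# Toy illustration of the AutoML loop: algorithm selection,
# hyperparameter tuning, validation, and a leaderboard.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Stand-in for historical business data: 1,000 labelled records
# with a binary outcome such as churned / did not churn.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Candidate algorithms and the hyperparameter grids to search
# (illustrative choices, not a platform's actual search space).
candidates = {
    "logistic_regression": (
        LogisticRegression(max_iter=1000),
        {"C": [0.1, 1.0, 10.0]},
    ),
    "random_forest": (
        RandomForestClassifier(random_state=42),
        {"n_estimators": [50, 100], "max_depth": [5, None]},
    ),
}

leaderboard = []
for name, (model, grid) in candidates.items():
    # 5-fold cross-validated search: the tuning and validation steps.
    search = GridSearchCV(model, grid, cv=5, scoring="roc_auc")
    search.fit(X, y)
    leaderboard.append((name, search.best_score_, search.best_params_))

# The output a platform would show you: models ranked by score.
leaderboard.sort(key=lambda row: row[1], reverse=True)
for name, score, params in leaderboard:
    print(f"{name}: AUC {score:.3f} with {params}")
```

The point of the sketch is the shape of the work, not the code: a specialist used to write and babysit this loop by hand across dozens of algorithms; AutoML runs it for you and hands back the leaderboard.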
Why it matters for your business
Three things change when AutoML is on the table. Build cost drops from a £50,000 to £100,000 specialist hire to platform fees of £500 to £5,000 a month. Timelines drop from months to weeks, so a problem worth solving this quarter actually is. And the entry point shifts: the question for many SMEs is no longer “do we hire a data scientist?”, it is “do we already have AutoML in a tool we pay for?”.
The 2026 vendor landscape splits three ways. Standalone platforms include DataRobot at £5,000 to £15,000 a month for enterprise-grade work, H2O Driverless AI with strong explainability, and SME-focused no-code tools like Akkio and Obviously AI at £500 to £2,000 a month. Cloud services from AWS, Google, and Azure run on consumption pricing, typically £300 to £2,000 a month at SME scale. And then there is AutoML embedded inside tools you already use: HubSpot predictive lead scoring, Salesforce Einstein Discovery, Microsoft Power BI AutoML. The embedded layer is where many SMEs first meet the technique without realising it has a name.
Where you will meet it
Five SME use cases pay back inside six months. Churn prediction on twelve months of customer history, typically 75 to 85 percent accuracy and three to ten times the lift of a naive rule. Lead scoring with a 20 to 40 percent improvement in sales efficiency. Demand forecasting at 10 to 20 percent lower error than ARIMA. Invoice classification at 90-plus percent accuracy. Customer lifetime value for marketing allocation.
You will also meet AutoML in procurement conversations you do not realise you are having. A consultancy quoting £45,000 for a custom churn model is competing with the predictive feature already inside your CRM. A data warehouse vendor pitching a bespoke build is competing with the AutoML service in the cloud platform that warehouse already runs on. The right first question is “what already exists in the stack?”, not “what should we build?”. UK SMEs typically discover three or four existing AutoML capabilities embedded in tools they already pay for once they go looking. The savings on otherwise-quoted bespoke builds frequently sit between £30,000 and £80,000 a year for a 30 to 70 staff service business, before you have signed for anything new.
When to ask, when to ignore
Ask about AutoML when you have a well-specified prediction problem, classification or regression, on structured tabular data, with at least 1,000 labelled historical examples and a measurable business outcome. Churn, lead scoring, invoice routing, demand forecasting, customer lifetime value. Start with whatever is embedded in your existing platform. A bespoke specialist engagement only earns its keep when the problem is genuinely outside what AutoML can reach.
Ignore AutoML, or pick a different tool entirely, in four scenarios. Deep learning problems, image, speech, video, complex language understanding, need hand-designed neural architectures and specialist engineering. Generative AI work is a different discipline: prompt design, hallucination management, evaluation. Novel problems with no historical precedent give AutoML nothing to learn from; the technique finds patterns in past data, and there are none. And causal questions, “did this campaign cause that revenue lift?”, need experimental design, not predictive modelling. AutoML will produce a confident answer to any of these. It will be wrong.
Related concepts
Machine learning is the parent technology: AutoML is one mechanism for producing a machine learning model, alongside hand-engineered models built by specialists. The choice between them is mostly about data volume, accuracy ceiling, and whether the problem fits an established pattern.
Supervised and unsupervised learning are the two technique families AutoML actually applies. Supervised methods learn from labelled historical examples, the typical AutoML use case. Unsupervised methods find structure in unlabelled data and are usually a separate workflow.
Fine-tuning is an alternative path for some problems involving large language models, retraining a foundation model on your examples rather than building a fresh predictor. AutoML and fine-tuning solve different problems. AutoML predicts; fine-tuning customises a generative model.
The no-code versus custom AI build decision is often where AutoML lands in practice. A £45,000 custom build competing with a £100 a month embedded feature is the question AutoML raises in many procurement conversations. The right answer is rarely the most expensive one.
Explainable AI is the governance layer that turns an AutoML deployment from a technical experiment into a decision your firm can defend. Under ICO automated decision-making guidance, the EU AI Act’s high-risk classification, and FCA model governance expectations, an AutoML model that affects customers needs documentation, independent validation, monitoring, and an explainability path. That overhead is real, perhaps twenty to thirty percent on top of the build, and it is now non-negotiable for any model that drives consequential decisions.
The honest test of an AutoML opportunity is two questions. Is this a well-specified prediction problem on structured historical data? And do I already have a version of this in a tool I pay for? For many SME prediction problems the answer to both is yes, and the right path is to start with what exists, not what a consultancy is quoting for.