What is AutoML? Why it matters for your business

TL;DR

AutoML is software that automates the technical engineering of machine learning (feature creation, algorithm selection, hyperparameter tuning, validation) so a non-specialist can build a working prediction model in weeks rather than months. For UK SMEs in 2026 it usually arrives embedded in tools you already pay for, such as HubSpot, Salesforce, or Power BI, before it ever needs a standalone platform.

Key takeaways

- AutoML automates the model-build pipeline, not the upstream data work or the decision about whether to act on the prediction.
- It fits a narrow but valuable class of problem: classification or regression on structured tabular data with historical examples.
- It is genuinely wrong for deep learning, generative AI, novel problems with no history, and causal questions.
- Many SMEs already have AutoML inside HubSpot, Salesforce Einstein, or Microsoft Power BI without realising the technique has a name.
- Under ICO and FCA expectations, any AutoML model that affects customers needs documentation, independent validation, and an explainability path.

A B2B services firm I was talking to last month had a £45,000 quote on the table for a custom lead-scoring model. Eighteen months of HubSpot history, a data science consultancy, three months of build, a few thousand a year on retraining. The sales director was about to sign. Then his HubSpot account manager mentioned that predictive lead scoring was already in the platform he was paying £1,800 a month for. Toggle, not project.

That is the moment AutoML stops being abstract for many owners. The question is not “do we want machine learning?”, it is “what am I actually paying £45,000 for, and do I already have a version of it?”.

What is AutoML?

AutoML (automated machine learning) is software that automates the technical engineering of building a prediction model. It handles data cleaning, feature creation, algorithm selection, hyperparameter tuning, ensembling, and validation: the steps that historically consumed most of a data scientist's time on a project. The base ingredient is your historical data. The output is a working model and a leaderboard of how well it performs.

What it does not do is the work either side of that pipeline. AutoML does not define the problem, clean up the underlying business data, decide what counts as a good outcome, or judge whether a prediction should drive automated action. The human role moves from “build the model” to “specify the problem and decide what to do with the answer”. Three to six months of specialist work compresses to roughly two weeks once the data is ready.
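To make the pipeline concrete, here is a minimal sketch of the work AutoML compresses, using scikit-learn on synthetic data. This is illustrative only: a real platform searches many algorithms and thousands of hyperparameter combinations, and the dataset here is a stand-in for your historical business records.

```python
# What AutoML automates, in miniature: preprocessing, model fitting,
# hyperparameter search, and held-out validation. Synthetic data stands
# in for 1,000 labelled historical examples.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                     # data preparation
    ("model", RandomForestClassifier(random_state=0)),
])

# Hyperparameter tuning: a platform runs this search automatically,
# and usually across many algorithms, not just one.
search = GridSearchCV(pipeline, {"model__n_estimators": [50, 100]}, cv=3)
search.fit(X_train, y_train)

accuracy = search.score(X_test, y_test)   # validation on held-out data
print(f"held-out accuracy: {accuracy:.2f}")
```

Notice what the sketch does not contain: nothing here decides whether churn is the right thing to predict, whether the labels are trustworthy, or what the business should do with the score. That is the part that stays human.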

Why it matters for your business

Three things change when AutoML is on the table. Build cost drops from a £50,000 to £100,000 specialist hire to platform fees of £500 to £5,000 a month. Timelines drop from months to weeks, so a problem worth solving this quarter actually is. And the entry point shifts: the question for many SMEs is no longer “do we hire a data scientist?”, it is “do we already have AutoML in a tool we pay for?”.

The 2026 vendor landscape splits three ways. Standalone platforms include DataRobot at £5,000 to £15,000 a month for enterprise-grade work, H2O Driverless AI with strong explainability, and SME-focused no-code tools like Akkio and Obviously AI at £500 to £2,000 a month. Cloud services from AWS, Google, and Azure run on consumption pricing, typically £300 to £2,000 a month at SME scale. And then there is AutoML embedded inside tools you already use: HubSpot predictive lead scoring, Salesforce Einstein Discovery, Microsoft Power BI AutoML. The embedded layer is where many SMEs first meet the technique without realising it has a name.

Where you will meet it

Five SME use cases pay back inside six months:

- Churn prediction on twelve months of customer history, typically 75 to 85 percent accuracy and three to ten times the lift of a naive rule.
- Lead scoring, with a 20 to 40 percent improvement in sales efficiency.
- Demand forecasting, at 10 to 20 percent lower error than ARIMA.
- Invoice classification, at 90 percent plus accuracy.
- Customer lifetime value, for marketing allocation.

You will also meet AutoML in procurement conversations you do not realise you are having. A consultancy quoting £45,000 for a custom churn model is competing with the predictive feature already inside your CRM. A data warehouse vendor pitching a bespoke build is competing with the AutoML service in the cloud platform that warehouse already runs on. The right first question is "what already exists in the stack?", not "what should we build?". UK SMEs typically discover three or four existing AutoML capabilities embedded in tools they already pay for once they go looking. The savings on otherwise-quoted bespoke builds frequently sit between £30,000 and £80,000 a year for a 30 to 70 staff service business, before you have signed for anything new.

When to ask, when to ignore

Ask about AutoML when you have a well-specified prediction problem, classification or regression, on structured tabular data, with at least 1,000 labelled historical examples and a measurable business outcome. Churn, lead scoring, invoice routing, demand forecasting, customer lifetime value. Start with whatever is embedded in your existing platform. A bespoke specialist engagement only earns its keep when the problem is genuinely outside what AutoML can reach.
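The 1,000-example floor is worth checking before any vendor conversation. A rough readiness check is trivial to run on an export of your own data; the function below is a hypothetical helper, not part of any platform, and the threshold is the practical floor discussed above rather than a hard rule.

```python
# A rough AutoML data-readiness check: do we have enough rows with a
# usable label? "churned" and the 1,000 threshold are assumptions for
# a churn-style classification problem.
def automl_ready(rows, label_field, min_rows=1000):
    """Return True if at least min_rows rows carry a non-empty label."""
    labelled = [r for r in rows if r.get(label_field) not in (None, "")]
    return len(labelled) >= min_rows

# Toy example: 1,150 labelled rows plus 50 with missing outcomes.
rows = [{"churned": "yes" if i % 2 else "no"} for i in range(1150)]
rows += [{"churned": ""} for _ in range(50)]

print(automl_ready(rows, "churned"))  # True: 1,150 labelled >= 1,000
```

A check this simple will not catch label errors or leakage, but it answers the first gating question in seconds rather than in a discovery workshop.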

Ignore AutoML, or pick a different tool entirely, in four scenarios. Deep learning problems (image, speech, video, complex language understanding) need hand-designed neural architectures and specialist engineering. Generative AI work is a different discipline: prompt design, hallucination management, evaluation. Novel problems with no historical precedent give AutoML nothing to learn from; the technique finds patterns in past data, and there are none. And causal questions ("did this campaign cause that revenue lift?") need experimental design, not predictive modelling. AutoML will produce a confident answer to any of these. It will be wrong.

Machine learning is the parent technology; AutoML automates the building of machine learning models. It is one mechanism for producing such a model, alongside hand-engineered models built by specialists. The choice between them is mostly about volume, accuracy ceiling, and whether the problem fits an established pattern.

Supervised and unsupervised learning are the two technique families AutoML actually applies. Supervised methods learn from labelled historical examples, the typical AutoML use case. Unsupervised methods find structure in unlabelled data and are usually a separate workflow.
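The distinction fits in a few lines of code. This sketch uses scikit-learn and synthetic data purely to show the shape of each family: the supervised model needs the historical labels, the unsupervised one never sees them.

```python
# Supervised vs unsupervised in miniature, on synthetic data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

# Supervised: learn from labelled examples (the typical AutoML case).
clf = LogisticRegression().fit(X, y)
print(f"supervised accuracy: {clf.score(X, y):.2f}")

# Unsupervised: group the same rows with no labels at all.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(f"groups found without labels: {len(set(clusters))}")
```

The practical consequence for an SME: if your data already records the outcome you care about (churned or not, converted or not), you are in supervised territory and embedded AutoML features apply; if you are hunting for unknown segments, that is usually a separate workflow.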

Fine-tuning is an alternative path for some problems involving large language models: further training a foundation model on your examples rather than building a fresh predictor. AutoML and fine-tuning solve different problems. AutoML predicts; fine-tuning customises a generative model.

The no-code versus custom AI build decision is often where AutoML lands in practice. A £45,000 custom build competing with a £100 a month embedded feature is the question AutoML raises in many procurement conversations. The right answer is rarely the most expensive one.

Explainable AI is the governance layer that turns an AutoML deployment from a technical experiment into a decision your firm can defend. Under ICO automated decision-making guidance, the EU AI Act’s high-risk classification, and FCA model governance expectations, an AutoML model that affects customers needs documentation, independent validation, monitoring, and an explainability path. That overhead is real, perhaps twenty to thirty percent on top of the build, and it is now non-negotiable for any model that drives consequential decisions.
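An explainability path does not have to mean exotic tooling. One common artefact is a feature-importance report showing which inputs a trained model leans on; the sketch below uses scikit-learn's permutation importance on synthetic data. It is one illustrative technique among several, not a full governance package, and real compliance work also needs the documentation, validation, and monitoring described above.

```python
# One explainability artefact: permutation importance, which measures
# how much held-in accuracy drops when each input column is shuffled.
# Synthetic data; in practice this runs against your deployed model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

A report like this, versioned alongside the model, is exactly the kind of audit-trail evidence that supports a defensible answer to "why did the model score this customer that way?".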

The honest test of an AutoML opportunity is two questions. Is this a well-specified prediction problem on structured historical data? And do I already have a version of this in a tool I pay for? For many SME prediction problems the answer to both is yes, and the right path is to start with what exists, not what a consultancy is quoting for.

Sources

Google Cloud (2026). AutoML documentation and pricing. https://cloud.google.com/automl/docs
Amazon Web Services (2026). SageMaker Autopilot product documentation. https://aws.amazon.com/sagemaker/autopilot/
Microsoft (2026). Azure Machine Learning AutoML pricing. https://azure.microsoft.com/en-gb/pricing/details/machine-learning/
Microsoft Learn (2026). Power BI AutoML capabilities. https://learn.microsoft.com/en-us/power-bi/
Information Commissioner's Office (2024). Guidance on automated decision-making and AI. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/
European Parliament (2024). EU Artificial Intelligence Act, official text. https://www.europarl.europa.eu/topics/en/article/20230601/EU-AI-Act
Financial Conduct Authority (2024). Machine learning in UK financial services. https://www.fca.org.uk/publications/research/research-note-machine-learning-uk-financial-services
Stanford HAI (2025). AI Index Report 2025. https://aiindex.stanford.edu/
McKinsey (2024). The state of AI in 2024. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2024
DataRobot (2026). Platform pricing and SME deployment guide. https://www.datarobot.com/pricing/

Frequently asked questions

Do I need a data scientist if I am using AutoML?

Not for the model build itself, but the role does not disappear. AutoML automates feature engineering, algorithm selection, and tuning. It does not define the problem, audit the data, or decide whether the prediction should drive an automated action. A capable analyst, an operations lead, or a finance manager who understands the business can carry that work. A specialist becomes necessary for novel problems, unstructured data at scale, or causal questions.

How much data do I need before AutoML is worth trying?

Roughly 1,000 labelled historical examples is the practical floor for classification problems like churn or lead conversion; twelve to twenty-four months of monthly history is the floor for demand forecasting. With less than that, the model has little real pattern to find, and AutoML will return a confident-looking output that does not generalise. Quality matters more than volume, particularly that the historical outcomes are correctly labelled.

Will the ICO have a problem with an AutoML model that scores my customers?

It depends on what the score does. If the score significantly affects the customer (credit decisions, account closures, employment screening), then yes: the ICO expects a Data Protection Impact Assessment, an explainability path, and an audit trail. A lead-scoring model that informs which prospects a salesperson calls first is much lower risk. The test is whether the model's output materially changes the outcome for a person, not whether the technique is fancy.

This post is general information and education only, not legal, regulatory, financial, or other professional advice. Regulations evolve, fee benchmarks shift, and every situation is different, so please take qualified professional advice before acting on anything you read here. See the Terms of Use for the full position.

Ready to talk it through?

Book a free 30 minute conversation. No pitch, no pressure, just a useful chat about where AI fits in your business.

Book a conversation
