The MD of a thirty-staff UK recruitment consultancy is sitting in front of a vendor demo for a new CV screening tool. The deck calls it “powered by deep neural networks”, which sounds reassuring until she realises she has no idea what that actually means. She has read enough about ICO guidance on automated decision-making to know that bias in CV screening is a regulatory issue under UK GDPR and the Data (Use and Access) Act 2025. She has no intention of training a neural network herself. She just needs the mental model to evaluate the vendor honestly, document her own AI governance, and avoid signing for something that ends up on a tribunal desk in eighteen months.
That is the right place to be, and this post is for owners in that seat.
What is a neural network?
A neural network is a layered mathematical structure of interconnected nodes that learns to recognise patterns in data by adjusting numerical weights during training. Each node takes multiple numerical inputs, applies learned weightings, combines them, and passes the result through an activation function that decides whether the node fires a signal to the next layer. Power comes from the layered arrangement, not the individual node.
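The arithmetic inside a single node is small enough to sketch in a few lines of Python. This is an illustration with invented numbers, not a real trained model:

```python
# One node: weight each input, add a bias, pass the result through an
# activation function that decides whether the node "fires".
# All numbers here are invented for illustration.

def relu(x):
    # A common activation function: pass positive signals, silence negatives.
    return max(0.0, x)

def node(inputs, weights, bias):
    # Combine the inputs using the learned weightings, then apply activation.
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return relu(weighted_sum)

# Three numerical inputs, with weights the training process would have learned.
signal = node([0.5, 1.2, -0.3], weights=[0.8, 0.4, 1.5], bias=0.1)
print(signal)  # a single number handed on to the next layer
```

Training is the process of nudging `weights` and `bias` across millions of examples until the signals the nodes fire add up to useful predictions.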
Drop the brain metaphor. It does more harm than good. The “neural” in neural network is historical inspiration; the actual mechanism is statistical pattern-matching, not biological cognition. A trained network has learned to associate specific numerical patterns in its inputs with specific outputs, based on the historical examples it was trained on. It does not understand anything. It does not reason. It will fail when shown patterns substantially different from its training data, because there is no first-principles thinking underneath to fall back on.
There are three layer types. The input layer receives raw data: transaction features, pixel values, words turned into numbers. The hidden layers do the work, combining inputs, applying weights and biases, and passing results through activation functions. The output layer produces the prediction. Training does not change the structure; it changes the weights inside the nodes. A trained network with a million weights is a million learned numerical parameters encoding statistical associations.
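Stacking nodes into layers is the whole trick. A minimal sketch, with invented weights, of data flowing from input through a hidden layer to an output:

```python
# A minimal forward pass: input -> one hidden layer -> output.
# The structure is fixed; training only changes the numbers in
# `weights` and `biases`. All values here are invented.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    # Each row of `weights` belongs to one node in this layer.
    return [relu(sum(i * w for i, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Two inputs feed a hidden layer of three nodes, then a single output node.
hidden = layer([1.0, 0.5],
               weights=[[0.2, 0.8], [-0.5, 0.3], [0.7, -0.1]],
               biases=[0.0, 0.1, -0.2])
prediction = layer(hidden, weights=[[0.6, 0.4, 0.9]], biases=[0.05])
print(prediction)  # the network's output: one learned number
```

A real network does exactly this, just with millions of weights and many more layers; nothing qualitatively new is added, only scale.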
Why does it matter for your business?
It matters because by 2026 almost every “AI-powered” feature in mainstream business software is a neural network somewhere underneath. The decision is no longer whether your business will use them; it is which vendor’s neural networks you consume and on what terms. That reframes the question from a technical one to a procurement and governance one, which is squarely in the owner’s job description.
The everyday list is concrete. CRM lead scoring in Salesforce Einstein and HubSpot Breeze AI uses neural networks. Email spam filtering in Microsoft and Google uses neural networks. Voice transcription in Whisper, Deepgram, and AssemblyAI runs on deep neural networks trained on hundreds of thousands of hours of speech. Invoice matching in Xero and FreshBooks, bank fraud detection, CV screening, document OCR, and every modern chatbot or AI agent all sit on the same foundations.
The other reason it matters is regulatory. The ICO’s January 2026 agentic AI guidance, summarised in Skadden Arps’ UK regulator briefing, makes clear that automated decision-making systems based on neural networks fall squarely within UK data protection law. Keystone Law’s analysis of the ICO posture on AI recruitment is direct: employers using neural-network-based screening must be able to explain decisions and demonstrate fairness. If you cannot, the regulator’s interest is no longer hypothetical.
Where will you actually meet it?
You will meet it in four broad architectures, each suited to a different data shape. Convolutional neural networks for images and video, underneath manufacturing quality inspection, OCR, and retail inventory counting. Transformers for language, the architecture from 2017 that powers every modern LLM. Recurrent networks and LSTMs for older sequential and time-series work. Standard feed-forward networks for some tabular problems.
For tabular SME data, neural networks are not always the right tool. Aidan Cooper’s analysis of tree-based models versus deep learning makes the case that gradient boosted trees often outperform neural networks on rows-and-columns problems, and they are easier to interpret. Day to day, the concrete places you encounter neural networks are familiar: predictive lead scoring telling your sales team which prospects to call first; churn prediction flagging customers about to leave; voice-to-text running quietly under your video conferencing transcripts at 95%+ accuracy on UK English; invoice matching auto-reconciling purchase orders with bank statements; recommendation engines suggesting products on your e-commerce site; document classification routing inbound email to the right department.
The pattern is consistent. The vendor has done the heavy training, often on a foundation model as described in AWS’s foundation models reference. You consume the result, sometimes with a fine-tuning option to adapt it to your data, as covered in IBM Think’s transfer learning explainer.
When to ask vs when to ignore
Ask hard questions when the neural network is making decisions about people, money, or safety. Hiring screens, credit and lending decisions, insurance pricing, fraud flags that block customer transactions, quality control on a regulated product. In any of these cases the regulatory frame, whether UK GDPR, the EU AI Act high-risk classification, or sector-specific rules, demands explainability and fairness controls.
IBM Think’s black-box AI overview lays out the techniques honestly: attention visualisation, activation visualisation, LIME, SHAP. They give partial explanations, not absolute ones, and a vendor who claims full transparency is overselling. Ask for the partial explanations anyway; they are still useful.

Ignore the architecture question when the use case is low-stakes and the data shape genuinely fits: spam filtering, voice transcription, OCR, internal document search. These are commodity neural-network applications where the vendor’s pre-trained model is mature, the failure mode is recoverable, and your time is better spent on the dozen other decisions in front of you. Trust the maturity of the category and move on.
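The intuition behind perturbation tools like LIME can be sketched without any library: nudge one input at a time and watch the prediction move. The `model` below is a made-up stand-in for a vendor’s black box, not a real scoring system:

```python
# A back-of-envelope sensitivity check, the intuition behind tools like
# LIME: perturb one feature at a time and observe how the score shifts.
# `model` is a made-up stand-in for a vendor's black-box scorer.

def model(features):
    # Pretend black box: we can call it, but not inspect its insides.
    salary_band, years_experience, postcode_code = features
    return 0.3 * salary_band + 0.6 * years_experience + 0.1 * postcode_code

def sensitivity(model, features, nudge=0.1):
    base = model(features)
    effects = {}
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] += nudge
        effects[i] = model(perturbed) - base  # how much this feature moves the score
    return effects

print(sensitivity(model, [1.0, 2.0, 3.0]))
# Larger effects flag the features driving the decision. This is a
# partial explanation, not proof the model is fair.
```

If a postcode-style feature turns out to dominate a hiring score, that is exactly the kind of partial explanation worth escalating, which is why asking for these outputs is worthwhile even though they are incomplete.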
The harder calls are in the middle. Suppose you have a small, clean tabular dataset: a thousand rows and a clear target variable. A neural network is probably the wrong tool. Gradient boosted trees often beat neural networks on this kind of data and are far easier to interpret, or a deterministic rule may cover it cleanly. Use the simpler tool, and resist the vendor’s framing that neural networks are universally superior, because they are not.
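For the thousand-row case, the “simpler tool” can literally be a readable rule. A hypothetical sketch, with invented thresholds:

```python
# A deterministic rule for a small, clean tabular problem: fully
# transparent, trivially auditable, no training data required.
# The thresholds and field names are invented for illustration.

def flag_for_review(invoice):
    # Hypothetical policy: large invoices from new suppliers get a human look.
    return invoice["amount_gbp"] > 10_000 and invoice["supplier_age_days"] < 90

print(flag_for_review({"amount_gbp": 12_500, "supplier_age_days": 30}))   # True
print(flag_for_review({"amount_gbp": 2_000, "supplier_age_days": 400}))   # False
```

Every decision this rule makes can be explained in one sentence to a regulator or a tribunal, which is precisely what a neural network cannot promise.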
Related concepts
A neural network sits inside a small family of overlapping ideas worth knowing by name. Machine learning is the parent category, the broad practice of systems that learn from data, of which neural networks are one approach among several. Deep learning is the subset of machine learning using neural networks with many hidden layers, which is what made image recognition and language understanding viable in the 2010s.
A foundation model is a large pre-trained neural network, often a transformer, that vendors build products on top of. An embedding is the numerical vector representation that neural networks operate on: the way text or images get turned into numbers the network can process. An LLM is a transformer-based neural network trained on language at very large scale, the architecture underneath ChatGPT, Claude, and Gemini. Explainable AI is the response to the opacity of neural networks: a set of techniques and tools (LIME, SHAP, attention visualisation) that vendors increasingly bundle to support regulatory and governance work.
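An embedding is easiest to grasp as “text becomes a list of numbers”. Real embeddings are dense vectors learned by the network itself; the count-based toy below, with a made-up four-word vocabulary, only shows the shape of the idea:

```python
# Toy illustration of "text turned into numbers the network can process".
# Real embeddings are dense vectors learned during training; this
# count-based sketch, with an invented vocabulary, shows only the shape.

VOCAB = ["invoice", "payment", "overdue", "thanks"]

def embed(text):
    words = text.lower().split()
    return [words.count(term) for term in VOCAB]

print(embed("Overdue invoice payment reminder: invoice attached"))
```

Two texts with similar vectors are, to the network, similar inputs; learned embeddings extend that idea so that “late payment” and “overdue invoice” land close together even though they share no words.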
If you remember one thing from this post, make it this. A neural network is a layered statistical pattern-matcher, not a brain. The architecture is interesting; the procurement questions are what move money. Ask the vendor about training data, fine-tuning options, explainability tooling, fairness audits, and drift monitoring. If you want to talk it through against your current vendor stack, book a conversation.