A 22-staff marketing agency I worked with had built a custom reporting integration with their client analytics platform. The contractor finished the work, deleted the local code from his laptop, and moved on to the next gig. Six weeks later, the analytics vendor sent the agency a £4,200 bill. An unknown actor had been running 50,000 API calls a day against the analytics endpoint for over a month. When the agency dug in, the cause was straightforward. The API key was committed inside a public-by-mistake GitHub repository the contractor had used for a different client. A bot had scanned the repo, lifted the key, and started running queries within hours of the commit.
The vendor revoked the key and partially waived the charges. The owner spent the next afternoon working out who else might have a copy of which key. Her question to me was the right one: not “is our API key safe?” but “who has been responsible for that key, and how would we have known if it walked out the door?”
What is an API key?
An API key is a long secret string, usually 32 to 256 characters, that authenticates your software against a vendor’s API. The vendor sees the key on an incoming request and treats it as authorised. There is no second factor, no password reset, no challenge question. Anyone who holds the key has the access it grants. That simplicity is fine while the key stays inside your trusted boundary; it becomes a structural risk the moment the key leaks.
The mental model is closer to a physical key than to a password. A password belongs to a person and can be backed by a second factor. An API key belongs to an application, gets used silently behind the scenes, and is rarely seen by humans after it has been generated. The same key tends to live in a config file, a deployment pipeline, a contractor’s laptop, and sometimes a Slack thread. Each of those locations is a copy.
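To make that concrete, here is a minimal sketch of how a key travels with a request. The endpoint, environment variable name and fallback value are all invented for illustration; the point is that the key is the entire proof of identity, so any copy of this snippet is a copy of the access.

```python
import os
import urllib.request

# Hypothetical analytics endpoint and environment variable name.
# Reading from the environment keeps the real key out of source control.
API_KEY = os.environ.get("ANALYTICS_API_KEY", "example-key-not-real")

request = urllib.request.Request(
    "https://api.example-analytics.invalid/v1/reports",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# The vendor checks only this header; whoever presents it gets the access.
print(request.get_header("Authorization"))
```

There is no second step: the header above is the whole handshake, which is why every place the string is stored counts as a door into the account.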
Why does it matter for your business?
It matters because a leaked key gives the attacker exactly what your software has, immediately, with no friction. If the key is scoped to read customer data, they read customer data. If it is scoped to issue refunds, they issue refunds. If it has admin scope, they have everything your application has. There is no middle layer asking for a one-time code and no ICO-friendly audit prompt that says “this looks unusual, are you sure”.
For a UK SME with £1m to £10m turnover and 8 to 50 staff, three risk channels matter. There is the vendor invoice, where leaked OpenAI, AWS or Stripe keys routinely produce four and five-figure bills before detection, whether from cryptocurrency mining on hijacked cloud compute or resold access to metered AI endpoints. There is the customer data channel, where leaked CRM or analytics keys produce a notifiable incident under UK GDPR Article 32. And there is the operational channel, where revoking the key under pressure breaks the integration and pulls a day or two out of someone’s calendar to put it back together.
Where will you actually meet it?
You meet API keys in four common SME leak patterns, each well documented in 2024 to 2026 incident data. Keys committed to a public Git repository by accident, with bots scanning GitHub within minutes of the commit. Keys shared in email or Slack threads where they sit in archived messages for years. Keys left on a contractor’s laptop after the contract ends. And keys granted admin scope when read-only would have done the job.
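The git-commit pattern is worth seeing from the bot’s side. The patterns below are simplified stand-ins for the hundreds of vendor-specific formats real scanners such as GitGuardian or trufflehog match; they illustrate why a committed key is found in minutes by machines rather than noticed later by a human.

```python
import re

# Simplified detection patterns; real scanners match hundreds of
# vendor-specific key formats with far more precision than this.
KEY_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Stripe live key": re.compile(r"sk_live_[0-9a-zA-Z]{24,}"),
    "generic api_key": re.compile(r"(?i)api[_-]?key['\"]?\s*[:=]\s*['\"][0-9a-zA-Z]{20,}['\"]"),
}

def scan(text: str) -> list[str]:
    """Return the names of any key patterns present in the text."""
    return [name for name, pattern in KEY_PATTERNS.items() if pattern.search(text)]

# A config line of the kind that ends up in an accidental public commit.
committed_line = 'config = {"api_key": "abcdEFGH1234abcdEFGH1234"}'
print(scan(committed_line))  # the generic pattern flags it
```

A pre-commit hook running a scan like this on staged files is cheap insurance; the same regex that a bot uses against you can run for you before the push.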
The numbers behind the pattern are large. GitGuardian’s State of Secrets Sprawl research found 23.8 million secrets exposed across public GitHub commits in 2024 alone, with API keys among the leading categories. The Verizon 2025 Data Breach Investigations Report names credential abuse as the leading initial access vector across reported breaches. Anthropic, OpenAI, AWS and Stripe have each published advisories about leaked-key abuse, and the financial pattern is consistent. Cumulative spend before detection averages £2,000 to £15,000 for SMEs and almost always involves either content scraping or mining workloads running against the vendor’s metered endpoint.
When to ask vs when to ignore
The decision rule is posture-based. If your business has any meaningful AI spend or any integration that touches customer data, key governance is not optional. A secrets manager such as 1Password, Doppler, AWS Secrets Manager or Azure Key Vault costs £10 to £30 per user per month. Set rotation at 90 days, scope each key to the minimum the integration needs, and assign a named owner per key. Both NCSC guidance and UK GDPR Article 32 expect that level of control.
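One small habit that supports both rotation and scoping is resolving the key at call time from the environment rather than hardcoding it. A minimal sketch, assuming the environment is populated by whichever secrets manager you choose; the variable name CRM_API_KEY and the error type are illustrative:

```python
import os

class MissingKeyError(RuntimeError):
    """Raised when an expected secret is absent from the environment."""

def get_api_key(name: str) -> str:
    # Resolving at call time means a rotated key is picked up on the
    # next restart with no code change; the secrets manager injects it.
    value = os.environ.get(name)
    if not value:
        raise MissingKeyError(f"{name} is not set; check the secrets manager")
    return value

# Stand-in for the value a secrets manager would inject at deploy time.
os.environ["CRM_API_KEY"] = "rotated-quarterly-example"
print(get_api_key("CRM_API_KEY"))
```

Failing loudly on a missing key also gives you a natural checkpoint for the ownership question: the error points at the secrets manager, which is exactly where the record of who owns which key should live.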
If you are a single-developer firm running a couple of low-stakes integrations with no personal data in scope, 1Password Business with quarterly rotation and a short list of who holds what is enough. The threshold to take seriously is the moment the firm has eight or more staff, any AI subscription with API access, or any integration that reaches into a CRM, accounting system or analytics platform. At that point, the cost of a secrets manager is small, the cost of an incident is not.
There is also a procurement question worth asking every vendor: what does revocation actually do, how fast does it propagate across regions, and what is the incident-response timeline if a customer reports a key compromise? Vendors with mature key handling answer in seconds. Vendors who deflect are signalling that key compromise is not part of their security practice, which is procurement information either way.
Related concepts
An API is the interface itself, the set of endpoints a vendor exposes for your software to call. The API key is the credential your software presents on each call. The two are often spoken about interchangeably and they should not be. The interface is the road, the key is the licence to drive on it.
OAuth is the authorisation alternative for user-acting flows, where the integration acts on behalf of a person rather than the application itself. OAuth is more complex to set up but solves the “key on a contractor’s laptop” problem cleanly, because the access can be revoked per user without breaking the integration for everyone else. Many modern vendors support both, and the right choice depends on whether the integration is acting as the firm or acting as a specific user. A webhook signing secret is a related primitive in the reverse direction, where the vendor signs each event payload so your endpoint can verify it really came from them.
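Webhook signature checking is simple enough to sketch in a few lines. This assumes an HMAC-SHA256 scheme of the kind Stripe and many other vendors use; the signing secret and payload here are invented for illustration, and real vendors usually add a timestamp to the signed material to block replays.

```python
import hashlib
import hmac

# Hypothetical shared signing secret, issued once by the vendor.
SIGNING_SECRET = b"whsec_example_not_real"

def sign(payload: bytes) -> str:
    return hmac.new(SIGNING_SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    # compare_digest avoids timing side channels when checking the MAC.
    return hmac.compare_digest(sign(payload), signature)

event = b'{"type": "invoice.paid", "amount_pence": 420000}'
signature = sign(event)  # in practice this arrives in a request header

print(verify(event, signature))        # True: payload really came from the vendor
print(verify(b"tampered", signature))  # False: payload was altered in transit
```

The same asymmetry applies as with keys: whoever holds the signing secret can forge events, so it belongs in the secrets manager alongside the API keys it complements.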
Prompt injection is the related risk inside AI applications specifically, where attacker-controlled text reaches a model and instructs it to act outside its intended scope. Where API key leakage gives the attacker your application’s privileges directly, prompt injection gives them your model’s privileges through indirection. Both come back to the same governance question: what scope did you grant, and to whom?
The point of all of this is to make the invisible visible. Keys are quiet. They sit in config files, get pasted into Slack, get copied to a developer machine, get forgotten. The owners who get burned are not the ones who failed at cryptography, they are the ones who never asked who held what, when it last rotated, and what would happen if it walked out the door. If you want to talk through where your own key inventory sits and what to ask your vendors next, book a conversation.



