A 28-staff financial advisory firm I worked with last quarter wanted their advisors to draft client portfolio summaries with Claude, pulling live data from the firm’s CRM and document store. Twelve months earlier, the same idea would have meant a custom integration per AI tool. One Claude-to-CRM bridge, one ChatGPT-to-CRM bridge, both built and maintained separately. The operations director had quietly killed the project once before.
This time the technology lead came back with a different shape. One MCP server, exposing safe read access to the CRM and the document store, would connect to whichever AI client the advisor preferred. The architectural saving was real. The architectural concern was also real. The MCP server now held the keys to two systems, and the firm needed governance for what queries it accepted and from whom. The owner saw the trade-off cleanly. MCP makes the right integration cheaper, and makes the governance question more pointed.
What is MCP?
The Model Context Protocol is an open standard for connecting AI models to external tools and data. An MCP server exposes tools the AI can call (search this, fetch that, write here) and resources it can read, like documents or database views. An MCP client, which lives inside Claude, ChatGPT, or another compatible AI tool, connects to the server, learns what is available, and uses those tools during a conversation.
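Under the hood the exchange is JSON-RPC 2.0 with method names fixed by the MCP specification, such as `tools/list` and `tools/call`. A sketch of the message shapes, with a hypothetical `search_documents` tool standing in for whatever the server actually exposes:

```python
# Sketch of the MCP wire exchange (JSON-RPC 2.0). The envelope shapes
# follow the MCP spec; the tool itself is hypothetical.

# The client asks the server what tools it offers.
list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# The server replies with tool definitions: a name, a description the
# model reads, and a JSON Schema describing the expected arguments.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_documents",
                "description": "Full-text search over the firm's document store",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# During a conversation, the AI client invokes a tool by name.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_documents",
        "arguments": {"query": "Q3 portfolio summary"},
    },
}
```

The client never needs to know how the server implements the search; it only sees the declared schema, which is what lets one server serve many clients.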
Anthropic introduced the protocol in November 2024 and open-sourced the SDKs. The closest analogy is USB. Before USB, every peripheral had its own cable. After USB, one shape worked across devices. MCP does the same job for AI integrations.
Why does it matter for your business?
The point is portability. Before MCP, every AI integration was vendor-specific. OpenAI’s function calling, Anthropic’s tool use, and Google’s function declarations looked similar but were not compatible. Building a CRM integration meant three separate implementations. With MCP the integration is built once and works across any compatible AI client, which is why OpenAI, Microsoft, and Google have all adopted it despite competing with the vendor that created it.
For an SME, the practical implication is choice. If you have Claude in finance, ChatGPT in sales, and a copilot inside Microsoft 365, MCP lets the same CRM server serve all three. The other implication is governance. An MCP server with broad scope is now a single privileged surface, and its security model needs the same discipline you would apply to a master API key.
Where will you actually meet it?
You will meet MCP in three concrete shapes. The first is vendor-built servers for the SaaS tools you already pay for. Notion, Slack, GitHub, Google Workspace, Microsoft 365, and Stripe all publish official MCP servers in 2026. For an SME these are the lowest-engineering, highest-value option, and the maintenance is somebody else's problem. The setup is usually configuration, not code.
The second shape is custom MCP servers over your own systems. A bespoke server that exposes safe access to your internal database, your file store, or a line-of-business application is medium engineering and higher value, but it is also where the security model genuinely matters. Read-only first, write operations only after the audit trail is in place.
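"Read-only first" is worth enforcing in the server itself rather than by convention. A minimal sketch of the idea, where the tool names and backing functions are hypothetical and the point is structural: anything not on the read whitelist is refused, including write tools someone adds later.

```python
# Read-only gate for a custom MCP-style tool dispatcher. Tool names
# and the backing lambdas are hypothetical stand-ins for real queries.

READ_TOOLS = {
    "search_clients": lambda args: f"results for {args['query']}",
    "fetch_document": lambda args: f"contents of {args['doc_id']}",
}

def dispatch(tool_name: str, arguments: dict) -> str:
    """Execute a tool call, allowing only whitelisted read operations."""
    if tool_name not in READ_TOOLS:
        # Writes (and anything unrecognised) fail closed.
        raise PermissionError(
            f"tool '{tool_name}' is not on the read-only whitelist"
        )
    return READ_TOOLS[tool_name](arguments)
```

A call to `dispatch("update_client", ...)` raises rather than silently reaching the database, which is the property you want before any write path exists at all.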
The third shape is hosted MCP gateways, like Zapier, Make, or specialist providers such as Merge, that bridge multiple SaaS tools through a single MCP endpoint. This is the lowest engineering of all, with a dependency on the gateway provider's roadmap and pricing.
The ecosystem in mid-2026 is real but uneven. Vendor-built servers from major platforms are well-maintained. Community-built servers vary widely. Before connecting a community server to anything that touches customer data, check the maintenance cadence, the security disclosures, and whether the server has kept pace with the underlying platform's API changes.
When to engineer for it now, when to wait
Engineer for MCP now if you have more than one AI client in production, or expect to within twelve months. The cost of building bespoke vendor-specific integrations does not amortise once a second AI tool enters the picture. Engineer for it now if you are connecting to a SaaS tool that already has a maintained vendor server. There is no good reason to build a custom integration when a working one ships in the box.
Wait if you have a single AI client serving a single workflow with no near-term plan to broaden. The optionality of MCP is cheap to preserve, but it is not urgent. Wait if your key business systems do not yet have a well-maintained server and you do not have the engineering capacity to build one. A poorly built MCP server is worse than none, because it gives you the false comfort that the integration question is solved.
The trap to flag sits at the edges. Some teams build custom MCP servers with excessive scope and treat them as developer convenience, forgetting they are now a privileged surface with no per-user authentication and no query logging. Mature deployments use scope-limited credentials, OAuth where the underlying platform supports it, server-side query logging, and a documented revocation path. Less mature ones discover the gap when an audit asks who ran what query through the AI last Tuesday.
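The query-logging half of that discipline is cheap to add at the dispatch layer. A sketch under stated assumptions: the handler and user identity are hypothetical, and an in-memory list stands in for whatever audit store the firm actually uses.

```python
import datetime

# In-memory stand-in for a real audit store (a database table or an
# append-only file in production).
AUDIT_LOG: list[dict] = []

def audited_dispatch(user: str, tool_name: str, arguments: dict, handler) -> str:
    """Record who ran what query before executing it, so 'who ran what
    through the AI last Tuesday' is answerable from the log."""
    AUDIT_LOG.append({
        "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool_name,
        "arguments": arguments,
    })
    return handler(tool_name, arguments)

# Hypothetical handler standing in for the real tool implementation.
result = audited_dispatch(
    "advisor@firm.example",
    "search_clients",
    {"query": "smith"},
    lambda name, args: f"{name}: {len(args)} argument(s)",
)
```

Logging before execution, not after, means a failed or interrupted query still leaves a trace.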
Related concepts
Function calling, sometimes called tool calling, is the underlying mechanism that lets a language model trigger an external action during its reasoning. Without function calling, an AI cannot do anything beyond generate text. MCP is the standard interface that wraps function calling so the same tool definition works across vendors and clients.
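The difference is easiest to see side by side. The tool itself is hypothetical; the two envelopes are the vendor-specific shape (OpenAI-style function calling) and the MCP shape. The schema body is identical in both, which is exactly the duplication MCP standardises away.

```python
# One logical tool, two wire shapes.

schema = {
    "type": "object",
    "properties": {"query": {"type": "string"}},
    "required": ["query"],
}

# OpenAI-style function-calling definition (vendor-specific envelope).
openai_style = {
    "type": "function",
    "function": {
        "name": "search_crm",
        "description": "Search client records",
        "parameters": schema,
    },
}

# MCP tool definition: same name, description, and schema, in the
# shape every MCP-compatible client understands.
mcp_style = {
    "name": "search_crm",
    "description": "Search client records",
    "inputSchema": schema,
}
```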
An API is the general-purpose contract any two systems use to talk to each other. An MCP server typically sits in front of one or more APIs and exposes them in the shape an AI client expects.
OAuth is the authorisation pattern mature MCP servers commonly use. Instead of holding raw credentials, the server receives a time-limited token that lets it act on behalf of an authenticated user, which is more controllable than a static API key. Asynchronous events from MCP servers and the tools they wrap commonly arrive through webhooks, so the same signing-secret and replay-protection questions apply on that side.
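The signing-secret and replay questions have a standard shape regardless of which provider sends the webhook. A sketch of the common pattern, signing `timestamp.body` with HMAC-SHA256; the exact header names and signing scheme vary by provider, so treat this as the pattern rather than any particular vendor's API.

```python
import hashlib
import hmac
import time

def verify_webhook(secret: bytes, timestamp: str, body: bytes,
                   signature: str, max_age_seconds: int = 300) -> bool:
    """Check an HMAC-SHA256 webhook signature and reject stale deliveries.

    Signing 'timestamp.body' (rather than the body alone) means a
    captured request cannot be replayed after the freshness window.
    """
    # Replay protection: refuse deliveries outside the window.
    if abs(time.time() - int(timestamp)) > max_age_seconds:
        return False
    expected = hmac.new(
        secret, f"{timestamp}.".encode() + body, hashlib.sha256
    ).hexdigest()
    # Constant-time comparison avoids leaking the signature via timing.
    return hmac.compare_digest(expected, signature)
```

A tampered body, a forged signature, or a delivery replayed an hour later all fail the same check.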
An AI agent is an autonomous system that calls tools to complete multi-step tasks. Many agentic deployments in 2026 use MCP as the integration layer for those tools, which is why the agent conversation and the MCP conversation have started to converge.
Vendor lock-in is the wider question MCP is partly a response to. The protocol does not eliminate lock-in entirely (prompts and configurations still vary by client), but it removes the integration layer as a switching cost, which is the largest hidden cost in the typical AI deployment.
The honest version of an MCP pitch in 2026 names the security model and shows you the audit trail. The marketing version skips both.