
MCP Protocol: what it is and why your AI stack needs it


[Diagram: AI tools connected via the Model Context Protocol]

The problem with most AI deployments isn't the AI. It's the plumbing.

An AI assistant that can reason about complex business problems but can't read your CRM, query your database, or update a ticket is, in practice, an expensive autocomplete. The intelligence is there. The integrations aren't — or rather, every integration is a custom one-off, built against a specific API, maintained separately, and rebuilt from scratch each time you switch tools.

MCP — the Model Context Protocol — is the answer to that problem. Not a theoretical one. A working one, with production-ready tooling, broad platform adoption, and a straightforward implementation path for any organisation that wants to give AI systems real operational capability.

What MCP actually is

MCP is an open standard, originally published by Anthropic and now broadly adopted, that defines how AI systems communicate with external tools and data sources. The core idea: instead of each AI platform building proprietary connectors to every external system, there's a shared protocol both sides implement once.

An MCP server exposes capabilities — tools an AI can call, resources it can read, prompts it can reference. An MCP client (the AI assistant) discovers and calls those capabilities through a standardised interface. The AI doesn't need to know whether your CRM is Salesforce or HubSpot or something built in-house. It calls the tool, the tool returns the result.
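
To make the shape of this concrete, here is a minimal server sketch using the official Python SDK. The CRM tool, the resource URI, and the sample data are illustrative placeholders, not a reference to any real system:

```python
# A minimal MCP server built with the official Python SDK (the "mcp"
# package). Tool, resource URI, and sample data are placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("crm")

@mcp.tool()
def get_customer(customer_id: str) -> dict:
    """Look up a customer record by ID."""
    # A real server would query your CRM's API or database here.
    return {"id": customer_id, "name": "Example Ltd", "status": "active"}

@mcp.resource("crm://customers/{customer_id}/notes")
def customer_notes(customer_id: str) -> str:
    """Expose a customer's account notes as a readable resource."""
    return f"(notes for customer {customer_id} would be returned here)"

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; any MCP client can connect
```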

The USB analogy is often used here, but the more precise comparison is HTTP. HTTP didn't standardise what websites contain — it standardised how clients and servers communicate. MCP does the same for AI and the systems it needs to work with. The protocol handles the communication layer. The integrations — what each MCP server actually exposes — are built once and reused across every AI client that speaks the protocol.

Why it changes the AI ROI equation

Before MCP, deploying AI in a business context typically meant one of two things: either you used a platform's built-in connectors (which existed for major SaaS tools and nothing else), or you built custom integrations that tied your AI to a specific model and had to be rebuilt every time either side changed.

The result: most "enterprise AI" was actually AI-adjacent automation — rule-based pipelines with an LLM bolted on, not systems where the AI could actually reason across your business data and act on it.

MCP decouples the AI layer from the integration layer. You build an MCP server for your internal system once. Any MCP-compatible AI client — Claude today, whatever comes next — can use it immediately. You're not locked to a model or a platform. Your integrations outlast your vendor decisions.

The cost reduction is real: instead of custom glue code per AI tool per data source, you maintain one MCP server per data source. The surface area for integration work collapses significantly.

Platform adoption: where things stand in 2026

Anthropic's Claude has the most mature MCP implementation — it's native across the desktop app, API, and coding tools. OpenAI added MCP support across ChatGPT and the API in 2025. Microsoft Copilot is integrating it through Copilot Studio. The open-source tooling ecosystem — Ollama, LM Studio, LangChain, LlamaIndex — has moved quickly.

Not all clients implement the full MCP specification equally. The protocol covers tools (functions the AI can call), resources (documents and data the AI can read), and prompts (reusable instruction templates). Agentic frameworks vary in what they support:

Framework               Tools   Resources   Prompts   Full discovery
LangChain / LangGraph   Yes     Yes         Yes       Yes
LlamaIndex              Yes     Yes         Yes       Yes
CrewAI                  Yes     No          No        No
Google ADK              Yes     No          No        No

If your use case requires full protocol support — including automatic discovery of resources and prompts, not just tool calling — LangChain/LangGraph and LlamaIndex are the production-ready choices. Both implement the complete MCP specification and expose all server capabilities to the agent without manual configuration.
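
For illustration, this is what full discovery looks like from the client side, again sketched with the official Python SDK (server.py stands in for any MCP server launched over stdio):

```python
# Connecting to an MCP server and enumerating everything it exposes,
# using the official Python SDK. "server.py" stands in for any server.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

params = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Full discovery: tools, resources, and prompts are all
            # enumerated over the protocol, with no manual configuration.
            tools = await session.list_tools()
            resources = await session.list_resources()
            prompts = await session.list_prompts()
            print("tools:", [t.name for t in tools.tools])
            print("resources:", [str(r.uri) for r in resources.resources])
            print("prompts:", [p.name for p in prompts.prompts])

asyncio.run(main())
```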

Security and data residency: the European context

MCP doesn't make every AI integration safe by default — but it creates the architectural foundation for safe integrations. Two aspects matter particularly for European deployments.

First, data residency. An MCP server running on your infrastructure means data never leaves your environment unless you explicitly configure it to. The AI client calls the server; the server processes the request locally and returns results. For use cases involving personal data, financial data, or commercially sensitive information, this is the difference between a GDPR-compliant architecture and one that requires continuous monitoring of cross-border data flows.

Second, authentication. The MCP specification includes OAuth 2.1 support for server authentication. This means the AI system accesses your business tools with the same credential management you'd apply to any service-to-service integration — not as a superuser with unconstrained access, but with defined scopes and auditable access patterns.

Neither of these is a guarantee — implementation quality matters. But the protocol provides the primitives. Whether you're building toward EU AI Act compliance documentation or simply trying to answer your legal team's questions about where customer data goes, MCP gives you a coherent architecture to point to.

What an MCP-connected workflow looks like in practice

To make this concrete: we built the MCP server underlying this site's multilingual workflow — it exposes 16 HubSpot API operations as MCP tools that Claude can call directly. Blog post translation that used to require navigating four browser tabs and manually reconstructing HTML now happens in a single conversation. The server runs locally. HubSpot credentials stay in a local environment file. Nothing goes to Anthropic's servers except the text content Claude needs to reason about.
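
The actual server isn't published in this post, but the pattern it follows looks roughly like this hypothetical sketch: one HubSpot CMS operation wrapped as a tool, with the token loaded from a local .env file via python-dotenv. The wrapper is simplified for illustration:

```python
# Hypothetical sketch of the pattern described above, not the actual
# server behind this site: one HubSpot CMS operation exposed as an MCP
# tool, with the access token kept in a local .env file.
import os

import httpx
from dotenv import load_dotenv  # python-dotenv
from mcp.server.fastmcp import FastMCP

load_dotenv()  # reads HUBSPOT_TOKEN from a local .env file
TOKEN = os.environ["HUBSPOT_TOKEN"]  # never leaves this process

mcp = FastMCP("hubspot")

@mcp.tool()
def get_blog_post(post_id: str) -> dict:
    """Fetch a blog post from the HubSpot CMS API by its ID."""
    resp = httpx.get(
        f"https://api.hubapi.com/cms/v3/blogs/posts/{post_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    mcp.run()  # stdio transport: the AI client launches this locally
```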

That's the practical pattern: identify a workflow that involves switching between your AI assistant and an external system, build an MCP server for the external system, collapse the workflow into one conversation. The integration work is a one-time investment. The workflow improvement is permanent.

If you haven't read our post on the Normattiva MCP extension for Italian legal research, that's another concrete example — the same pattern applied to Italy's official legislative database, eliminating the manual search-download-cross-reference cycle for legal and compliance teams.

Where to start

The MCP ecosystem has good tooling and documentation. Anthropic publishes the full MCP specification and official SDKs in TypeScript, Python, and several other languages. There's an actively maintained directory of existing MCP servers — for common tools like GitHub, Slack, databases, and file systems, you likely don't need to build from scratch.

For custom internal systems, the development surface is small: you implement a handful of handlers that describe what the server can do and how to call it. A basic MCP server for an internal API typically takes a day to build, and the result is permanently available to any AI client that speaks the protocol.
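
As a sketch of how small that surface can be, here is a complete server exposing a single read-only query over a hypothetical internal SQLite database, with table and column names as placeholders for your own schema:

```python
# A complete, minimal server for a hypothetical internal system: one
# read-only query over a local SQLite database. Table and column names
# are placeholders.
import sqlite3

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-db")

@mcp.tool()
def open_tickets(assignee: str) -> list[dict]:
    """List open support tickets assigned to a given person."""
    conn = sqlite3.connect("tickets.db")
    conn.row_factory = sqlite3.Row
    try:
        rows = conn.execute(
            "SELECT id, title, opened_at FROM tickets "
            "WHERE status = 'open' AND assignee = ?",
            (assignee,),
        ).fetchall()
    finally:
        conn.close()
    return [dict(row) for row in rows]

if __name__ == "__main__":
    mcp.run()
```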

The question worth asking: which workflows in your organisation involve a person switching between an AI assistant and another system? Those are the candidates. Start with the one that consumes the most time and involves data sensitive enough to warrant keeping it local.

Want to connect your AI assistant to your actual business systems?

We design and build MCP servers for internal systems — CRMs, ERPs, databases, proprietary APIs — and integrate them with Claude or any MCP-compatible AI client. If you're working through the architecture of an AI agent deployment, we can scope the integration layer alongside it.

Let's talk about your integration →
Lino Moretto
RAAS Impact

Drawing on over 20 years of experience as a Fractional Innovation Manager, I love bridging diverse knowledge areas while fostering seamless collaboration among internal departments, external agencies, and providers. My approach is characterised by a collaborative and engaging management style, strong negotiation skills, and a clear vision to preemptively address operational risks.

No guesswork.
No slide decks.
Just impact.

Ready to move from AI hype to a working system? In a free 30-minute call we'll identify your highest-impact use case and tell you exactly what it takes to get there.

No upfront cost · Italy · Malta · Europe · English & Italian