One protocol. Every major AI vendor. Now what?

Every few months, a new acronym sweeps through enterprise AI conversations and lands on every executive's desk. This year it's MCP.

In the last quarter, we've had the same conversation with a dozen clients. A board member forwards an article. A CTO sees it trending on LinkedIn. Someone in the leadership team asks the question that always follows: "Should we be doing this?"

The honest answer isn't yes or no.

It's "probably, eventually, but not yet and not for the reasons you've been told."

What MCP actually is

The Model Context Protocol is an open standard, introduced by Anthropic in late 2024, that defines a common way for AI applications to connect to tools and data. Within eighteen months, it's been adopted by every major AI vendor: Anthropic, OpenAI, Google, Microsoft, AWS. As of early 2026, roughly 78% of enterprise AI teams report at least one MCP-backed agent in production. The public registry of MCP servers has grown nearly eightfold year-over-year.

The shorthand the industry has settled on is "the USB-C of AI."

It's a useful analogy. USB-C didn't make laptops faster. It didn't give them new capabilities. It standardised the plug, so that any device with the port could talk to any other device with the port. MCP does the same for AI. It standardises how an AI application connects to the systems it needs to do useful work.
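Concretely, MCP is built on JSON-RPC 2.0: a client asks a server what tools it offers, and the server replies with names, descriptions, and input schemas that any compliant client can consume. Below is a minimal sketch of that tool-discovery exchange; the message shapes follow the public MCP specification, but the tool itself (`lookup_customer`) is a hypothetical example, not part of the standard.

```python
# Sketch of an MCP tool-discovery exchange (JSON-RPC 2.0 messages).
# Message shapes follow the MCP spec; "lookup_customer" is hypothetical.
import json

# The client asks the server which tools it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# A server's reply: each tool advertises a name, a description, and a
# JSON Schema for its input, so any MCP client can discover and call it.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "lookup_customer",
                "description": "Fetch a customer record by ID.",
                "inputSchema": {
                    "type": "object",
                    "properties": {"customer_id": {"type": "string"}},
                    "required": ["customer_id"],
                },
            }
        ]
    },
}

print(json.dumps(request))
print(response["result"]["tools"][0]["name"])
```

That self-describing schema is the whole trick: the AI application doesn't need to know anything about the system behind the server, only how to read the advertised contract.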

That's it.

The problem it actually solves

The reason MCP matters is mathematical, not magical.

Without a standard, every AI tool needs a custom integration with every system it touches. Five AI tools and ten internal systems means fifty custom integrations to build and maintain. Switch from one model to another, and you rebuild the integration layer. Add a new SaaS vendor, and you teach every model about it separately. The cost compounds.

MCP turns that into addition. Five tools plus ten systems means fifteen integration points, each built once and reused everywhere. It's the same shift that REST brought to web APIs in the 2000s, applied to the agent layer.
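The arithmetic is worth making explicit. Without a standard, the integration count is the product of tools and systems; with one, it's the sum. A back-of-the-envelope sketch:

```python
# Integrations needed with and without a shared standard,
# for T AI tools talking to S internal systems.

def custom_integrations(tools: int, systems: int) -> int:
    """Point-to-point: every tool needs its own connector per system."""
    return tools * systems

def mcp_integrations(tools: int, systems: int) -> int:
    """With a standard: one client per tool, one server per system."""
    return tools + systems

print(custom_integrations(5, 10))  # 50 connectors to build and maintain
print(mcp_integrations(5, 10))     # 15 integration points
```

The gap widens with every tool or system you add, which is why the economics only become visible at scale.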

For organisations with one AI use case in one tool, this changes nothing. For organisations running multiple models, building agents that compose across teams, or integrating AI into a sprawling SaaS stack, it changes the unit economics of every future project.

Where it's already delivering value

Three patterns are emerging in our client work.

Multi-model resilience

Most companies no longer run on a single model. One handles general tasks, another keeps sensitive data on-premise, a third serves as a fallback. Without a standard, every model means rebuilding the integration layer. With MCP, that work happens once. Useful when you need:

portability across vendors

on-premise options for sensitive data

fallback paths for cost or availability

Shared capabilities across teams

The finance team's expense agent and the sales team's CRM agent often need the same thing. Without a standard, that capability is built twice. With MCP, it's published once and reused everywhere. The compounding effect across a portfolio of agents is significant. Common shared building blocks:

employee and customer lookups

document and knowledge retrieval

approval and notification flows

Vendor inevitability

Roughly 30% of enterprise application vendors are expected to ship their own MCP servers in 2026 (Forrester). Your CRM, ERP and ticketing systems are starting to expose MCP interfaces natively. Companies without a client-side strategy will integrate the hard way while their competitors integrate by default.

Where it doesn't help (yet)

We've also seen MCP introduced too early, and the results are predictable.

A standard makes integration cheaper. It doesn't make integration valuable. If the underlying data is fragmented, undocumented, or untrustworthy, MCP exposes those problems to AI agents faster, and at higher stakes. We've seen organisations stand up an MCP server in front of a customer database that nobody has cleaned in a decade, then act surprised when the agent surfaces inconsistencies it shouldn't have surfaced.

A standard also doesn't solve governance. The MCP specification doesn't tell you which actions an agent should be allowed to take, who's accountable when it gets one wrong, or how to audit the decision after the fact. Those questions sit one layer above the protocol, in your organisation, your policies, your engineering culture.

And the standard itself is still moving. Authentication patterns for remote MCP servers only stabilised in mid-2025. Audit trails, single sign-on, and enterprise gateway behaviour are explicitly flagged on the 2026 roadmap as priorities, which is another way of saying they're acknowledged gaps. Companies building on MCP today are accepting a non-trivial amount of change through the year. That's fine. REST went through the same thing. But pretending otherwise will make your engineering team unhappy.

What this means for your roadmap

The right question isn't "should we adopt MCP?" It's "what would have to be true for adopting it to matter?"

For most organisations, the answer comes down to three readiness signals. When all three are present, MCP starts to pay off quickly. When they aren't, adopting it early creates more problems than it solves.

You're running more than one AI tool in production.

The economics of MCP only show up at scale. If you're maintaining one model, integrated with one system, a direct API call works fine and a standard adds nothing. But once you're running two or three models in parallel, the cost of maintaining duplicate integration layers becomes visible quickly. That's the point where MCP starts saving real engineering time.

Your AI use cases are starting to overlap.

When the finance team's agent and the sales team's agent both need to look up the same employee, you have a shared capability problem. Solve it separately each time and you'll keep solving it. Solve it once, behind a standard, and every future agent inherits the work. The compounding only happens if you have enough agents to compound across.

Your data is ready to be exposed to an agent.

This is the one most often skipped. MCP makes it faster to connect AI to your systems. It does nothing to make the underlying data trustworthy. If your customer records are inconsistent, your reports disagree with each other, or nobody can answer "which number is correct?", a standard will surface those problems faster and at higher stakes. Fix the foundation before you build the plug.

If all three signals are present, MCP is worth piloting now. If one or two are missing, it's worth understanding, but not yet worth investing in.

The bigger pattern

There's a recurring mistake in enterprise technology, and MCP is just the latest example of it: adopting tools before deciding what's worth building.

Every new AI standard, model, or platform will trigger the same question from someone in the room: "Should we be doing this?" And every time, the answer will start somewhere else, with what you're actually trying to build, and what problem an AI agent is genuinely better at solving than the alternatives.

The companies that will win with AI in the next three years aren't the ones who adopt every protocol first. They're the ones who know what they're trying to build, treat the standards layer as plumbing, and invest in it when, and only when, the plumbing starts to matter.

AI is a multiplier. Not a fix.

MCP makes the integration layer cheaper. It doesn't make the strategy easier.

That part is still on you.

Thinking about where AI belongs in your roadmap?

We'd be happy to help!
