
MCP went from 100,000 monthly downloads to 97 million in 18 months. The agent internet is not coming. It is already being built — and the protocol decisions being made now are infrastructure decisions for the next decade.
In November 2024, Anthropic quietly released a technical specification called the Model Context Protocol.
Most teams looked at it, noted it as interesting, and moved on.
Twelve months later, 97 million developers were downloading it every month. OpenAI, Google, Microsoft, and AWS had all adopted it. It had been donated to the Linux Foundation with backing from nearly every major technology company on the planet. And a second protocol — Google's Agent-to-Agent standard — had emerged alongside it, reached production maturity, and attracted 150 organizations including Deutsche Bank, SAP, and ServiceNow.
The agent internet is being built. The infrastructure decisions happening right now will shape how enterprise AI systems are architected for the next decade. And most enterprise teams are watching it happen without understanding what it means for the systems they are building today.
This is what it means.
The Problem These Protocols Are Solving
Before MCP, connecting an AI agent to external tools and data required custom integration work for every combination of model and tool. If an enterprise had ten AI applications and one hundred internal tools, the integration surface was potentially one thousand bespoke connectors — each one custom-built, custom-maintained, and fragile.
BCG described the consequence precisely: without a standard protocol, integration complexity rises quadratically as AI agents spread through an organization. With a standard, it increases linearly.
That is not an abstract mathematical distinction. For an enterprise running twelve AI agents — the current average according to Google's 2026 AI Agent Trends Report — the difference between quadratic and linear complexity is the difference between a manageable engineering problem and one that grows faster than any team can support.
MCP solved the N×M problem. Instead of building a bespoke connector for every pairing of N agents and M tools, each agent implements the MCP client protocol once and each tool implements the MCP server protocol once. One connector, standard interface, any agent talking to any tool. The USB-C analogy that circulated at launch was accurate enough that it stuck.
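The arithmetic behind BCG's quadratic-versus-linear claim can be sketched in a few lines (the agent and tool counts here are illustrative, matching the example above):

```python
# Integration surface with and without a shared protocol.
# Without a standard, every agent-tool pairing needs a bespoke connector;
# with MCP, each agent and each tool implements the protocol exactly once.

def bespoke_connectors(agents: int, tools: int) -> int:
    """Custom integrations: one per agent-tool pairing (grows as N*M)."""
    return agents * tools

def mcp_connectors(agents: int, tools: int) -> int:
    """Standardized: one client per agent, one server per tool (N + M)."""
    return agents + tools

# Ten AI applications against one hundred internal tools:
print(bespoke_connectors(10, 100))  # 1000 custom integrations
print(mcp_connectors(10, 100))      # 110 protocol implementations
```

Doubling the number of agents doubles the second number but multiplies the first, which is the whole argument in two functions.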
The adoption numbers validate the problem being real. MCP launched with roughly 100,000 monthly SDK downloads in November 2024. By March 2026 that number was 97 million — a 970x increase in eighteen months. OpenAI adopted it in March 2025. Google DeepMind confirmed support in April 2025. Microsoft integrated it into Copilot Studio in July 2025. AWS in November 2025.
In December 2025, Anthropic donated MCP to the Agentic AI Foundation under the Linux Foundation — co-founded by Anthropic, Block, and OpenAI, with Google, Microsoft, AWS, and Cloudflare as supporting members. That move transformed MCP from a vendor's proposal into neutral infrastructure. It now sits alongside Kubernetes and PyTorch in the Linux Foundation's portfolio. For enterprise architecture decisions, that governance change matters as much as the technical specification.
What MCP Actually Is — and What It Is Not
The mistake most teams make when they first encounter MCP is treating it as a new API standard. It is not.
MCP is a protocol for the relationship between an AI agent and its environment — the tools, databases, APIs, and data sources that the agent needs to do its work. It defines how the agent discovers available tools, how it calls them, how it receives results, and how context flows through the interaction. It is the nervous system layer: connecting the AI model to everything outside itself.
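At the wire level, that discover-call-result loop is JSON-RPC 2.0. A minimal sketch of the two core requests an MCP client sends (the tool name and arguments are invented for illustration; the method names come from the MCP specification):

```python
import json

# MCP is JSON-RPC 2.0 under the hood. A client first discovers what a
# server offers, then invokes a specific tool by name.

# Request 1: discover the tools a server exposes.
list_tools = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

# Request 2: call one of the discovered tools. The tool name and
# arguments below are hypothetical, not from a real server.
call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "query_inventory",        # hypothetical tool
        "arguments": {"sku": "ABC-123"},  # hypothetical arguments
    },
}

print(json.dumps(call_tool, indent=2))
```

The point of the sketch: there is no model-specific or vendor-specific payload anywhere in it, which is why any MCP client can talk to any MCP server.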
What MCP does not address is how agents communicate with other agents. That is a different problem. And it is the problem that Google's A2A protocol was designed to solve.
The distinction matters more than most teams initially recognize. An enterprise AI system is rarely a single agent. It is an ecosystem of specialized agents — one that handles customer inquiries, one that consults inventory, one that triggers fulfillment, one that escalates to human review. These agents need to discover each other, delegate tasks, share state, and coordinate across organizational and vendor boundaries. MCP handles none of that. That is A2A's domain.
The framing that clarifies this most cleanly comes from Google's own documentation. In a car repair shop analogy: MCP is the protocol connecting each mechanic to their tools — the lift, the wrench calibration system, the diagnostic scanner. A2A is the protocol that lets the mechanics talk to each other, delegate work, and coordinate with parts suppliers. Both are essential. Neither substitutes for the other.

MCP connects agents to tools. A2A connects agents to agents. They are not competing standards. They are complementary layers of the same infrastructure — and both are already in production at enterprises that will not wait for you to catch up.
Are you already using MCP in production? Hit reply with one sentence on where you are in implementation. I am tracking where enterprise teams actually are versus where the coverage says they should be — and the gap is significant.
The "Protocol War" Framing Is Wrong
When Google announced A2A in April 2025, the immediate industry reaction was to frame it as a protocol war — Anthropic's MCP versus Google's A2A. That framing was wrong then and it is clearly wrong now.
Both protocols are governed by the same foundation. MCP was donated to the AAIF in December 2025. A2A was donated to the Linux Foundation in June 2025. Both now operate under neutral governance with overlapping institutional backing. IBM's ACP — at the time A2A's most credible competitor — merged into A2A under the Linux Foundation in August 2025. The competitive protocol landscape consolidated in a single year.
What actually emerged is a two-layer stack that addresses the complete agent integration problem:
MCP at the bottom: each agent connects to its tools and data through a standard interface. Any MCP-compatible tool can be used by any MCP-compatible agent without custom wiring.
A2A at the top: agents discover each other through structured Agent Cards, delegate tasks in a standard format, track task lifecycle, and return results — all without needing to know what framework or vendor built the agent they are coordinating with.
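Discovery in A2A runs on Agent Cards: a JSON document an agent publishes describing who it is and what it can do. A hedged sketch of one (the agent, endpoint, and skill are fictional; the field names follow the published A2A spec, but verify against the current version before building on them):

```python
import json

# An A2A Agent Card: the machine-readable description another agent
# fetches to decide whether and how to delegate work. Everything below
# is illustrative -- a fictional fulfillment agent.
agent_card = {
    "name": "fulfillment-agent",
    "description": "Triggers and tracks order fulfillment.",
    "url": "https://agents.example.com/fulfillment",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "create-shipment",
            "name": "Create shipment",
            "description": "Creates a shipment for a confirmed order.",
        }
    ],
}

# Cards are served from a well-known path on the agent's host, so any
# agent can discover them without prior coordination.
print(json.dumps(agent_card, indent=2))
```

Nothing in the card says what framework or vendor built the agent behind it, which is exactly what makes cross-vendor delegation possible.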
Together, they form the backbone of what will be the enterprise agent internet. The analogy to TCP/IP and HTTP is not hyperbolic — it is the closest structural parallel. One protocol handles the transport layer, one handles the application layer. Both are necessary. Neither is optional in a mature deployment.
What the Enterprise Deployment Data Shows
The production results from early adopters are real and specific enough to be useful.
Block, which co-developed MCP with Anthropic, built an internal MCP-connected agent called Goose that thousands of employees use daily. Their reported outcomes: 50 to 75 percent time savings on common tasks. Work that took days completes in hours. The agent connects to Snowflake, GitHub, Jira, Slack, and internal APIs through MCP — turning what was previously a six-tool switching exercise into a single interface.
Microsoft's Sales Development Agent, connecting to Dynamics 365 through MCP, produced a 15.1 percent increase in lead-to-opportunity conversion rates across 61,734 leads tracked between January and November 2025. That is not a benchmark. It is a business outcome from a production deployment.
A2A's production footprint is more recent but already substantial. The v1.0 release reached the maturity required for enterprise deployment in early 2026. The 150 organizations running it in production include financial institutions like Deutsche Bank alongside enterprise software vendors like Salesforce, SAP, and ServiceNow — organizations whose participation signals that the protocol is stable enough for high-stakes workflows.
Cornell University's research on coordinated multi-agent systems found that carefully designed multi-agent architectures achieve up to 70 percent higher goal success rates compared to single-agent setups. The protocol infrastructure that makes that coordination possible — reliable discovery, standard task delegation, predictable state management — is what A2A provides.
The Deployment Layer — every Tuesday. One deep-dive on enterprise AI architecture, agent systems, and responsible AI governance. Built for the practitioners and leaders who make the real decisions. Subscribe free → thedeploymentlayer.com
What This Means for Enterprise Architecture Decisions Today
Here is the practical question every enterprise AI team needs to answer: what should we actually build against?
The answer is less complicated than the protocol landscape makes it appear.
Build against MCP now. The standard has achieved critical mass — 10,000 public servers, 97 million monthly SDK downloads, support from every major AI platform, and a rapidly expanding marketplace of MCP-compatible enterprise tools. Gartner projects that 75 percent of API gateway vendors and 50 percent of iPaaS vendors will have MCP features by end of 2026. If you are building agent systems that need to connect to tools, databases, and internal APIs, MCP is not a bet — it is infrastructure, like HTTP.

The immediate business consequence is sharper than most teams expect. Forrester projects that 30 percent of enterprise application vendors will launch their own MCP servers in 2026. That means the enterprise software landscape is bifurcating: MCP-compatible products that AI agents can discover and use directly, and non-MCP products that become invisible to those agents — data silos that require custom integration every time an agent needs them.
If your organization's critical systems — ERP, CRM, internal databases, compliance tools — do not have MCP servers, your agents cannot use them without custom wiring. That custom wiring is exactly what MCP was designed to eliminate. The organizations building MCP servers for their internal tools now are building integration leverage for every agent deployment that follows.
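Stripped to its core, "building an MCP server for an internal tool" means exposing a tool catalog and a dispatcher over JSON-RPC. A minimal in-process sketch (the tool and its schema are invented; a real server would build on an official MCP SDK and handle transport, initialization, and error reporting):

```python
# Skeleton of an MCP server's request handling: a tool registry plus a
# JSON-RPC dispatcher for the two core methods. Illustrative only.

TOOLS = {
    "lookup_customer": {  # hypothetical internal tool
        "description": "Look up a customer record by ID.",
        "inputSchema": {
            "type": "object",
            "properties": {"customer_id": {"type": "string"}},
            "required": ["customer_id"],
        },
    },
}

def lookup_customer(customer_id: str) -> dict:
    # Stand-in for a real CRM or database query.
    return {"customer_id": customer_id, "status": "active"}

def handle(request: dict) -> dict:
    """Dispatch the two core MCP methods: tools/list and tools/call."""
    if request["method"] == "tools/list":
        result = {"tools": [
            {"name": name, **meta} for name, meta in TOOLS.items()
        ]}
    elif request["method"] == "tools/call":
        name = request["params"]["name"]
        args = request["params"]["arguments"]
        if name == "lookup_customer":
            result = {"content": lookup_customer(**args)}
        else:
            result = {"isError": True}
    else:
        result = {"isError": True}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

response = handle({
    "jsonrpc": "2.0", "id": 7,
    "method": "tools/call",
    "params": {"name": "lookup_customer",
               "arguments": {"customer_id": "C-42"}},
})
print(response["result"]["content"])
```

Once a system exposes this surface, every MCP-compatible agent in the organization can use it — that is the integration leverage described above.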
Track A2A seriously; implement when multi-vendor agent coordination becomes a requirement. A2A's production footprint is real but narrower. The protocol is stable. The ecosystem is consolidating. The question for most enterprises is not whether A2A will become necessary — it is when. For organizations running multi-department, multi-vendor agent deployments, that question is arriving faster than most IT roadmaps have accounted for.
The security surface requires explicit attention at both layers. MCP's growing attack surface is not theoretical — security researchers published analyses of prompt injection vulnerabilities and tool permission exploits in April 2025, and a critical RCE vulnerability in Anthropic's MCP Inspector was patched by June 2025. The community response has been strong: tools like MCP-Scan audit servers for security issues, and the Linux Foundation governance structure creates accountability for security maintenance. But enterprise teams implementing these protocols need governance-grade security review as part of the deployment architecture, not as an afterthought.
I covered the orchestration layer in Week 1 — the Orchestration Trap is where agent pipeline failures begin. MCP and A2A are the connective tissue that orchestration layers operate through. Getting the protocol foundation right is a prerequisite for everything discussed in that issue.
The Governance Dimension Nobody Is Pricing In
For enterprises in regulated industries, MCP and A2A are not just architectural decisions. They are governance decisions.
Every tool call an agent makes through MCP is an action taken on behalf of the organization — reading data, triggering processes, potentially modifying records. Every inter-agent task delegation through A2A is a chain of actions that may touch regulated data, trigger compliance-relevant decisions, or produce outputs that require explainability.

The protocol infrastructure provides the hooks for governance — structured logging of tool calls, observable task delegation chains, traceable agent interactions. But hooks are not governance. Enterprise teams need to build governance-grade instrumentation at the protocol layer: logging every MCP tool invocation, capturing A2A task delegation chains, and maintaining audit trails that satisfy the explainability requirements of FinTech, Healthcare, and Legal regulators.
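One concrete shape that instrumentation can take: wrap every tool invocation in an audit record before returning the result. A minimal sketch (the field names, the logging sink, and the tool itself are assumptions; a production deployment would write to an append-only store and capture the A2A delegation chain alongside it):

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for an append-only audit store

def audited_tool_call(agent_id: str, tool: str, arguments: dict, call_fn):
    """Invoke a tool and record who called what, with which arguments,
    and a hash of the result -- enough to answer 'what did the agent do?'"""
    result = call_fn(**arguments)
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "tool": tool,
        "arguments": arguments,
        # Hash rather than store the payload: the trail stays auditable
        # without duplicating regulated data into the log.
        "result_sha256": hashlib.sha256(
            json.dumps(result, sort_keys=True).encode()
        ).hexdigest(),
    })
    return result

# Hypothetical tool standing in for a real MCP invocation.
def check_balance(account: str) -> dict:
    return {"account": account, "balance_ok": True}

audited_tool_call("support-agent-1", "check_balance",
                  {"account": "A-9"}, check_balance)
print(len(AUDIT_LOG), AUDIT_LOG[0]["tool"])
```

The design choice worth noting is the hash: it lets an auditor verify that a logged call produced a specific result without the log itself becoming another copy of regulated data.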
The organizations that understand MCP and A2A as governance infrastructure — not just connectivity infrastructure — are the ones that will deploy reliably in regulated environments. The organizations that treat these as plumbing will discover the governance gap when a regulator asks them to explain what their agent did and why.
Running AI in a regulated environment? How are you logging MCP tool calls for audit purposes right now? This is the question I keep getting in replies — and the answer shapes an upcoming governance deep-dive. Reply and tell me where you are.
What Changes in the Next 18 to 36 Months
The consolidation that happened in 2025 — MCP, A2A, and ACP converging under Linux Foundation governance — is the structural foundation for what comes next.
The next layer being built is discovery infrastructure: registries and marketplaces where organizations can find MCP-compatible tools and A2A-compatible agents without knowing in advance what exists. The official MCP Registry, PulseMCP, and MCPMarket.com are early versions of this infrastructure. What emerges over the next 24 months will be closer to an app store for enterprise AI capabilities — agent-discoverable, standards-compliant, governance-reviewed.
The organizations building MCP servers for their internal systems now are building assets that will become discoverable in that marketplace. The organizations waiting will be sourcing capabilities from a catalog they had no part in shaping.
The agent internet is not coming. It is being built. The protocol wars are over — and they ended faster than anyone expected, with cleaner resolution than most technology standard battles produce.
The question for enterprise teams is not which protocol wins. It is whether you are building on the foundation being laid, or watching others build on it.
The Bottom Line
MCP and A2A are not competing standards. They are complementary layers of the same infrastructure.
MCP connects agents to tools. A2A connects agents to agents. Together they form the connectivity backbone of enterprise multi-agent systems for the next decade.
The standard has been set. The governance is in place. The production deployments are real.
The only remaining question is how long your organization waits before building on it.
Building on MCP or A2A right now? Hit reply and tell me what your implementation looks like and where you hit the first wall. These responses shape future technical deep-dives.
New here? Every Tuesday, The Deployment Layer publishes one deep-dive on enterprise AI architecture, agent systems, and responsible AI governance. Subscribe free at thedeploymentlayer.com
Forward this to one architect or CTO in your network who is still treating these protocols as "things to evaluate later." The window for treating them as optional is closing.
Next Tuesday: Why Your RAG Pipeline Is Lying to You — And How to Fix It. If this week was about the connective infrastructure, next week is about what happens when the data flowing through that infrastructure is silently wrong. Subscribe → thedeploymentlayer.com
Are you building MCP servers for your internal enterprise systems yet? If not — what is the blocker? I read every response.