MCP: The Protocol That Connects AI to Everything
A deep dive into the Model Context Protocol and why it is the most important infrastructure standard for the AI agent era
I have spent the last several months building MCP servers, integrating them into agent workflows, and thinking deeply about what the Model Context Protocol means for the future of AI infrastructure. This is my comprehensive take on why MCP matters, how it works, and where it is going.
The Integration Problem
Every AI agent system hits the same wall: the model can reason, but it cannot act. It can tell you what API call to make, but it cannot make the call itself. It can write a deployment script, but it cannot execute it.
The traditional solution is custom integration code. Write a function that calls the GitHub API, wrap it in a tool definition, register it with your agent framework. Repeat for every tool, every API, every system you want the agent to interact with.
This approach does not scale. Every integration is bespoke. Every agent framework has a different tool definition format. Every model provider has a different function calling convention. The result is fragmentation: hundreds of incompatible integrations that cannot be shared across projects or teams.
MCP solves this by standardizing the interface between models and tools. One protocol. One format. Any model, any tool.
How MCP Works
The Model Context Protocol defines three core primitives:
Tools: Functions that the model can call. A tool has a name, a description, and a JSON schema defining its parameters. When the model decides to use a tool, the client sends a request to the MCP server on its behalf with the tool name and parameters. The server executes the function and returns the result.
{
  "name": "create_pull_request",
  "description": "Create a new pull request on GitHub",
  "inputSchema": {
    "type": "object",
    "properties": {
      "repo": { "type": "string", "description": "Repository name (owner/repo)" },
      "title": { "type": "string", "description": "PR title" },
      "body": { "type": "string", "description": "PR description" },
      "head": { "type": "string", "description": "Source branch" },
      "base": { "type": "string", "description": "Target branch" }
    },
    "required": ["repo", "title", "head", "base"]
  }
}
Resources: Data that the model can read. Resources are like files or documents that the server makes available. The model can request a resource by URI, and the server returns its contents. This is useful for configuration files, documentation, database schemas, or any structured data the model needs.
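A minimal sketch of the resource primitive: the server maps URIs to content, and the client reads by URI, mirroring MCP's resources/read request. The URI schemes and contents below are invented for illustration.

```python
# Hypothetical resource registry: URI -> contents. A real MCP server
# would also advertise these via resources/list with names and MIME types.
RESOURCES = {
    "schema://orders": "CREATE TABLE orders (id INT, total NUMERIC);",
    "config://app": '{"env": "staging", "debug": false}',
}

def read_resource(uri: str) -> str:
    """Return the contents for a URI, in the spirit of resources/read."""
    if uri not in RESOURCES:
        raise KeyError(f"Unknown resource: {uri}")
    return RESOURCES[uri]

print(read_resource("schema://orders"))
```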
Prompts: Templates that the server provides to structure the model's behavior. A prompt template can include instructions, examples, and context that guide how the model uses the server's tools and resources.
The communication between the client (which hosts the model) and the server uses JSON-RPC over stdio or HTTP with Server-Sent Events. The protocol is simple enough to implement in any language, and the transport layer is flexible enough for both local and remote deployments.
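Concretely, a tool invocation is an ordinary JSON-RPC 2.0 exchange. The method name (tools/call) and the content-block shape of the result follow the MCP specification; the id, arguments, and response text below are made up for this example.

```python
import json

# Illustrative JSON-RPC 2.0 request for an MCP tool call.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "create_pull_request",
        "arguments": {
            "repo": "acme/widgets",
            "title": "Fix login bug",
            "head": "fix/login",
            "base": "main",
        },
    },
}

# Over stdio the message travels as one JSON line; over HTTP it is the body.
wire = json.dumps(request)
decoded = json.loads(wire)

# A successful response carries the tool output as content blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "Created PR #42"}]},
}

print(decoded["method"])
```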
Why Standardization Matters
I have built integrations with custom code, with LangChain tools, with OpenAI function calling, and now with MCP. The difference that standardization makes is enormous.
Portability: An MCP server works with any MCP-compatible client. Build a GitHub server once, use it with Claude Code, use it with your custom agent, use it with any future model that supports MCP. No rewriting, no adaptation.
Ecosystem: When everyone uses the same interface, servers become shareable. My PostgreSQL MCP server works for anyone who needs PostgreSQL access from an agent. Community contributions compound.
Quality: Standardization enables shared tooling for testing, monitoring, and debugging. An MCP inspector can validate any server against the protocol specification, regardless of what the server does or what language it is written in.
Composability: Multiple MCP servers can be combined to give an agent access to a rich set of capabilities. Need GitHub, Slack, and AWS access? Connect three MCP servers. The agent sees a unified set of tools.
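One way a client can build that unified set of tools is by merging each server's tool list under a per-server prefix, so two servers can expose tools with the same name without colliding. The server names and tools below are hypothetical; the prefixing convention is a common pattern, not part of the protocol itself.

```python
# Sketch of composing tool catalogs from several MCP servers into one
# view, keyed by "server.tool" so the agent can route each call back.
def merge_tools(servers: dict[str, list[str]]) -> dict[str, str]:
    """Map a prefixed tool name to the server that provides it."""
    merged = {}
    for server, tools in servers.items():
        for tool in tools:
            merged[f"{server}.{tool}"] = server
    return merged

catalog = merge_tools({
    "github": ["create_pull_request", "list_issues"],
    "slack": ["post_message"],
    "aws": ["describe_instances"],
})
print(sorted(catalog))
```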
Building Enterprise MCP Servers
Through building LokiMCPUniverse, I have developed strong opinions about what makes an MCP server production-ready:
Authentication must be pluggable. Different environments use different auth mechanisms. A server that only supports API keys is not useful in an environment that requires OAuth or IAM roles. Build auth as a plugin system.
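A plugin-style auth layer can be as simple as a strategy interface plus a registry keyed by configuration. The strategy classes, config shape, and header formats here are illustrative assumptions, not any particular server's API.

```python
from abc import ABC, abstractmethod

# Hypothetical auth plugin system: each strategy knows how to produce
# request headers, and the server picks one from configuration.
class AuthStrategy(ABC):
    @abstractmethod
    def headers(self) -> dict[str, str]: ...

class ApiKeyAuth(AuthStrategy):
    def __init__(self, key: str):
        self.key = key
    def headers(self) -> dict[str, str]:
        return {"Authorization": f"Bearer {self.key}"}

class OAuthAuth(AuthStrategy):
    def __init__(self, token: str):
        self.token = token
    def headers(self) -> dict[str, str]:
        return {"Authorization": f"Bearer {self.token}"}

def make_auth(config: dict) -> AuthStrategy:
    """Choose a strategy from config so deployments can swap mechanisms."""
    registry = {"api_key": ApiKeyAuth, "oauth": OAuthAuth}
    return registry[config["type"]](config["credential"])

auth = make_auth({"type": "api_key", "credential": "sk-test"})
print(auth.headers())
```

Adding IAM-role or mTLS support then means registering one more strategy, not rewriting the server.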
Error responses must be actionable. The model reads your error messages and decides what to do next. "Error: 500" is useless. "Error: Rate limit exceeded, retry after 30 seconds" tells the model exactly what to do.
Tool descriptions are your API documentation. The model decides whether and how to use a tool based entirely on its description and schema. Spend as much time on tool descriptions as you would on API documentation. Include examples, edge cases, and common mistakes.
Pagination is not optional. Any tool that can return a large result set needs pagination support. An agent that requests "list all issues" on a repository with 10,000 issues will overwhelm the context window. Pagination parameters with sensible defaults are essential.
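A cursor-based list tool with a default page size keeps any single response bounded. The sketch below uses a plain integer offset as the cursor for clarity; a production server would more likely use an opaque token, as MCP's own listing endpoints do.

```python
# Sketch of cursor-based pagination for a list tool: a default limit
# ensures "list all issues" can never flood the context window.
def list_issues(issues: list[str], cursor: int = 0, limit: int = 50) -> dict:
    page = issues[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(issues) else None
    return {"items": page, "next_cursor": next_cursor}

all_issues = [f"issue-{i}" for i in range(120)]
first = list_issues(all_issues)
print(len(first["items"]), first["next_cursor"])
```

The agent pages through by passing next_cursor back until it comes back null.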
Idempotency enables reliability. Agents retry. Networks fail. If a tool call succeeds but the response is lost, the agent will retry. If the tool is not idempotent, you get duplicate operations. Design for retry from the start.
The Competitive Landscape
MCP is not the only attempt at standardizing AI tool integration. OpenAI has its function calling format. LangChain has its tool definition standard. Various frameworks have their own approaches.
But MCP has several advantages:
Protocol-level standardization. MCP defines the communication protocol, not just the tool definition format. This means servers and clients can be developed independently and still interoperate.
Anthropic's backing. Anthropic developed and open-sourced MCP, and Claude Code is a native MCP client. Having a major model provider build their primary developer tool on the protocol is a strong signal.
Simplicity. The protocol is intentionally simple. A basic MCP server can be built in under 100 lines of code. This low barrier to entry encourages adoption and community contribution.
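To make that concrete, here is a toy dispatcher in that spirit: a tool registry plus a JSON-RPC handler for tools/list and tools/call. This is a sketch of the protocol's shape, not the official SDK; a real server also implements initialization, capability negotiation, and the spec's error codes, and would read messages line by line from stdin.

```python
import json

# Hypothetical tool registry: name -> callable taking an arguments dict.
TOOLS = {"echo": lambda args: args.get("text", "")}

def handle(raw: str) -> str:
    """Dispatch one JSON-RPC message and return the serialized response."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": n} for n in TOOLS]}
    elif req["method"] == "tools/call":
        out = TOOLS[req["params"]["name"]](req["params"]["arguments"])
        result = {"content": [{"type": "text", "text": out}]}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

print(handle('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}'))
```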
Transport flexibility. MCP works over stdio for local servers and HTTP/SSE for remote servers. This means the same protocol works for a local file system tool and a remote cloud service.
MCP in Agent Workflows
Here is where MCP connects to the broader agent work I am doing.
In an agent orchestration system, different agents need access to different tools at different phases of a workflow. The planning agent needs to read requirements from a project management tool. The implementation agent needs to interact with the codebase through git. The review agent needs to read code and leave comments. The deployment agent needs to interact with CI/CD systems.
MCP makes this composable. Each agent gets a set of MCP servers based on its role. The planning agent connects to the Jira and Confluence servers. The implementation agent connects to the Git and Code Analysis servers. The review agent connects to the GitHub server. Each agent sees only the tools it needs.
This is the principle of least privilege applied to AI agents. An agent should only have access to the capabilities it needs for its specific task. MCP servers provide natural boundaries for capability scoping.
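In practice the scoping can be a simple role-to-servers map that the orchestrator consults when wiring up each agent. The role and server names below are hypothetical stand-ins for the workflow described above.

```python
# Sketch of least-privilege scoping: each agent role is granted only
# the MCP servers (and therefore the tools) it needs.
AGENT_SERVERS = {
    "planner": ["jira", "confluence"],
    "implementer": ["git", "code_analysis"],
    "reviewer": ["github"],
    "deployer": ["ci_cd"],
}

def servers_for(role: str) -> list[str]:
    """Return only the servers an agent's role entitles it to."""
    return AGENT_SERVERS.get(role, [])

print(servers_for("reviewer"))
```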
The Road Ahead
MCP is young. The protocol is evolving, the ecosystem is growing, and the patterns are still being established. But the direction is clear.
I expect MCP to become the de facto standard for AI tool integration. The combination of simplicity, flexibility, and backing from a major model provider gives it the ingredients for widespread adoption.
For builders, the implication is clear: invest in MCP now. Build your tool integrations as MCP servers. They will be portable across models, shareable across teams, and composable into increasingly sophisticated agent systems.
The AI agent era needs plumbing. MCP is the plumbing. I am building as much of it as I can.