Anthropic recently released their Model Context Protocol (MCP), an open standard for integrating external data sources and tools with LLM applications. The release includes SDKs implementing the protocol, as well as an open-source repository of reference MCP server implementations.
The MCP is intended to solve the "MxN" problem: the combinatorial difficulty of integrating M different LLMs with N different tools. Instead, MCP provides a standard protocol that LLM vendors and tool builders can follow. MCP uses a client-server architecture; AI apps like Claude for Desktop or an IDE use an MCP client to connect to MCP servers that front data sources or tools. For developers who wish to start using MCP right away, there are SDKs for both Python and TypeScript as well as a growing list of reference implementations and community-contributed servers. According to Anthropic:
We’re committed to building MCP as a collaborative, open-source project and ecosystem, and we’re eager to hear your feedback. Whether you’re an AI tool developer, an enterprise looking to leverage existing data, or an early adopter exploring the frontier, we invite you to build the future of context-aware AI together.
The MCP spec defines a set of JSON-RPC messages for communication between Clients and Servers; these messages implement building blocks called primitives. Servers support three primitives: Prompts, Resources, and Tools; Clients support two: Roots and Sampling.
The Server primitives are for "adding context to language models." Prompts are instructions or templates for instructions. Resources are structured data which can be included in the LLM prompt context. Tools are "executable functions" which LLMs can call to retrieve information or perform actions.
Roots define entry points into a filesystem, giving Servers access to files on the Client side. Sampling lets Servers request "completions" or "generations" from a Client-side LLM. Anthropic says that Sampling could be used to implement agentic behavior by nesting LLM calls inside Server actions, but warns that "there SHOULD always be a human in the loop with the ability to deny sampling requests."
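To make the wire format concrete, the following sketch builds the JSON-RPC 2.0 messages a Client and Server might exchange when invoking a Tool. The `tools/call` method name and the `content` result shape come from the MCP spec; the tool name and its arguments here are hypothetical, and a real implementation would use an MCP SDK rather than hand-rolled JSON.

```python
import json

def make_request(req_id: int, method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request, as an MCP Client sends to a Server."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params})

def make_response(req_id: int, result: dict) -> str:
    """Serialize the Server's reply, echoing the request's id."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id, "result": result})

# A Client invoking a hypothetical weather Tool on a Server:
request = make_request(1, "tools/call",
                       {"name": "get_forecast",
                        "arguments": {"latitude": 38.9, "longitude": -77.0}})

# The Server replies with content blocks the LLM can consume:
response = make_response(1, {"content": [
    {"type": "text", "text": "Sunny, high of 25C"}]})

print(json.loads(request)["method"])
```

The other primitives follow the same request/response pattern under different method names, which is what allows a single Client implementation to work against any conforming Server.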
To showcase what developers can build with MCP, the documentation provides several examples and tutorials. The Quickstart example demonstrates how to use a Claude LLM to fetch weather forecasts and warnings. To do this, the developer creates a Python program to implement the MCP server. This program exposes a Tool primitive that wraps calls to a public web service returning weather data. The developer can then use an LLM via the Claude for Desktop app, which has a built-in MCP client, to call the MCP server and get the weather data.
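The server side of this flow can be sketched without the MCP SDK: register a Tool function, then dispatch incoming `tools/call` messages to it. The function and tool names below are hypothetical stand-ins; in the actual Quickstart, the tool body calls a public weather API and the SDK handles the message plumbing.

```python
import json

def get_forecast(latitude: float, longitude: float) -> str:
    # Stand-in for the web-service call the Quickstart's Tool wraps.
    return f"Forecast for ({latitude}, {longitude}): sunny, 25C"

# The server's registry of exposed Tools, keyed by tool name.
TOOLS = {"get_forecast": get_forecast}

def handle_message(raw: str) -> str:
    """Dispatch an incoming "tools/call" request to the registered tool."""
    msg = json.loads(raw)
    tool = TOOLS[msg["params"]["name"]]
    text = tool(**msg["params"]["arguments"])
    return json.dumps({"jsonrpc": "2.0", "id": msg["id"],
                       "result": {"content": [{"type": "text", "text": text}]}})

# Simulate the Client side of the Quickstart asking for a forecast:
reply = handle_message(json.dumps({
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "get_forecast",
               "arguments": {"latitude": 38.9, "longitude": -77.0}}}))
print(json.loads(reply)["result"]["content"][0]["text"])
```

Because the Tool's name and argument schema are advertised over the protocol, the LLM in Claude for Desktop can decide on its own when to call it while answering a weather question.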
Anthropic developer Justin Spahr-Summers joined a Hacker News discussion about MCP. When several users wondered if MCP would help solve the "MxN" problem, Spahr-Summers said "we definitely hope [it] will." When asked about how MCP is different from existing tool-usage in LLMs, he replied:
On tools specifically, we went back and forth about whether the other primitives of MCP ultimately just reduce to tool use, but ultimately concluded that separate concepts of "prompts" and "resources" are extremely useful to express different _intentions_ for server functionality. They all have a part to play!
The Model Context Protocol specification, documentation, and SDKs for Python and TypeScript are available on GitHub.