Robots Atlas
Infrastructure

MCP

Key innovation
MCP standardizes how AI applications connect to external data sources and tools, replacing per-pair custom integrations with a single open client-host-server protocol built on JSON-RPC 2.0.
Category
Infrastructure
Abstraction level
Pattern
Operation level
System / Agent runtime

Components

Host: Container and coordinator; creates client instances, manages their lifecycle, enforces security and user-consent policies, and coordinates integration with the language model.

The host process acts as the container and coordinator. It creates and manages multiple client instances, controls client connection permissions and lifecycle, enforces security policies and user consent requirements, and coordinates LLM integration and sampling.

Official

Client: Maintains an isolated, stateful 1:1 connection with a single server; handles protocol negotiation and capability exchange; relays messages bidirectionally.

Each client is created by the host and maintains a stateful, isolated 1:1 session with a specific server. It handles protocol negotiation, capability exchange, bidirectional message routing, and subscription management.

Server: Provides specialized context and capabilities through three primitive types (Resources, Prompts, and Tools); can run as a local process or remote service.

Servers provide specialized context and capabilities via three primitives: Resources (context data), Prompts (templated workflows), and Tools (executable functions). Servers operate independently with focused responsibilities and can be local processes or remote services.

Local (stdio) server: Runs as a local subprocess communicating via standard I/O streams.
Remote (HTTP/SSE) server: Runs as a remote service communicating via HTTP, with Server-Sent Events for streaming.
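A minimal sketch of the stdio transport in Python: the host spawns the server as a subprocess and exchanges newline-delimited JSON-RPC messages over its standard I/O. Here a tiny inline Python one-liner stands in for a real MCP server binary and simply echoes back a response with the matching id.

```python
import json
import subprocess
import sys

# Placeholder for a real MCP server executable: reads one JSON-RPC request
# from stdin and writes a matching empty-result response to stdout.
fake_server = [
    sys.executable, "-c",
    "import sys,json;"
    "req=json.loads(sys.stdin.readline());"
    "print(json.dumps({'jsonrpc':'2.0','id':req['id'],'result':{}}))",
]

proc = subprocess.Popen(fake_server, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)

# Each message travels as one line of JSON over the pipe ("ping" is a real
# MCP method; the id is arbitrary).
out, _ = proc.communicate(
    json.dumps({"jsonrpc": "2.0", "id": 7, "method": "ping"}) + "\n"
)
reply = json.loads(out)
assert reply["id"] == 7
```

The same request-response discipline applies over HTTP/SSE; only the byte transport changes.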

Official

Primitives — server-side: Three capability types exposed by servers: Resources (contextual data), Prompts (instruction templates), Tools (executable functions).

Server-side primitives define the capabilities a server can offer: Resources provide structured data for the model's context window; Prompts are templated messages and workflow instructions; Tools are executable functions the model can invoke to retrieve information or perform actions.
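A hedged sketch of what a server might return for a `tools/list` request: each tool carries a name, a description, and a JSON Schema for its inputs. The `query_db` tool here is illustrative, not a real reference server.

```python
# Illustrative "tools/list" result; field names follow the MCP spec,
# but the tool itself is hypothetical.
tools_list_result = {
    "tools": [
        {
            "name": "query_db",
            "description": "Run a read-only SQL query.",
            "inputSchema": {
                "type": "object",
                "properties": {"sql": {"type": "string"}},
                "required": ["sql"],
            },
        }
    ]
}

# The host turns these declarations into tool definitions in the model's
# context, so the LLM knows what it can invoke and with which arguments.
assert tools_list_result["tools"][0]["inputSchema"]["required"] == ["sql"]
```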

Primitives — client-side: Two capability types exposed by clients: Roots (filesystem access) and Sampling (server-initiated LLM completion requests).

Client-side primitives define capabilities the client exposes to servers: Roots give servers access to filesystem or URI boundaries on the client side; Sampling allows servers to request LLM completions, enabling agentic and recursive behaviors.
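A hedged sketch of the Sampling primitive: the server asks the client to run an LLM completion via a `sampling/createMessage` request. Field names follow the MCP spec; the prompt text and token limit are illustrative.

```python
# Server-initiated sampling request; the server never talks to the LLM
# directly — the client must surface this to the user first.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user",
             "content": {"type": "text", "text": "Summarize the diff."}}
        ],
        "maxTokens": 256,  # illustrative limit
    },
}

assert sampling_request["method"] == "sampling/createMessage"
```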

JSON-RPC 2.0 transport: The protocol's communication layer; defines the message format and the request-response exchange between client and server.

MCP uses JSON-RPC 2.0 as its base message format. The transport layer is pluggable: the initial specification defined stdio streams (local subprocess) and HTTP with Server-Sent Events (SSE); a later revision replaced HTTP+SSE with Streamable HTTP. The protocol is stateful within a session.
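A minimal sketch of one JSON-RPC 2.0 request-response pair as MCP uses it. The method and params follow the MCP `tools/call` shape; the tool name `get_weather` and its arguments are hypothetical.

```python
import json

# Client-to-server request invoking a (hypothetical) tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Berlin"}},
}

# Over the stdio transport each message travels as one line of JSON.
wire = json.dumps(request) + "\n"

# The response echoes the request id so the client can correlate the pair.
response = json.loads('{"jsonrpc": "2.0", "id": 1, "result": {"content": []}}')
assert response["id"] == request["id"]
```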

Official

Capability negotiation: During session initialization, client and server declare their capabilities before exchanging operational messages.

During session initialization, clients and servers explicitly declare their supported features. This capability-based negotiation determines which protocol features and primitives are available for the session, ensuring forward and backward compatibility.
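A hedged sketch of session initialization: the client sends an `initialize` request declaring its capabilities, and the server replies with its own. Field names follow the MCP spec; the client and server names and version strings are illustrative.

```python
# Client's "initialize" request: declares the client-side capabilities
# (Roots, Sampling) it is willing to support this session.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {"roots": {}, "sampling": {}},
        "clientInfo": {"name": "example-client", "version": "0.1.0"},
    },
}

# Server's result: declares the server-side capabilities it offers.
initialize_result = {
    "protocolVersion": "2024-11-05",
    "capabilities": {"tools": {}, "resources": {"subscribe": True}},
    "serverInfo": {"name": "example-server", "version": "0.1.0"},
}

# Each side may only use features the other declared: this server may send
# sampling requests (the client advertised "sampling"), and this client may
# call tools and subscribe to resources.
assert "sampling" in initialize["params"]["capabilities"]
assert "tools" in initialize_result["capabilities"]
```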

Implementation

Implementation pitfalls
Prompt injection vulnerability via untrusted servers (Critical)

Tool descriptions and server-provided annotations are untrusted by default. A malicious or compromised MCP server can attempt to inject instructions into the model's context via resource content or tool descriptions (prompt injection). Hosts must treat all server-provided content as untrusted.

Fix: Do not automatically trust tool descriptions or resource content from servers. Sandbox servers, allowlist trusted ones, and require explicit user confirmation before tool invocation.
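A host-side sketch of this mitigation, assuming a hypothetical allowlist and confirmation flow (none of these names come from an MCP SDK):

```python
# Hypothetical set of servers the user has vetted.
TRUSTED_SERVERS = {"github", "postgres"}

def may_invoke(server_name: str, tool_name: str, user_confirmed: bool) -> bool:
    """Permit a tool call only from an allowlisted server, and only after
    the user has explicitly confirmed this specific invocation."""
    return server_name in TRUSTED_SERVERS and user_confirmed

assert may_invoke("github", "create_issue", user_confirmed=True)
assert not may_invoke("github", "create_issue", user_confirmed=False)
assert not may_invoke("unknown-server", "read_file", user_confirmed=True)
```

The tool and resource *content* still needs to be treated as untrusted input even after this gate; the allowlist only bounds who can be called, not what they return.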
Absent server identity verification in early protocol versions (High)

Early MCP versions (pre-2025-11-25) lacked a standardized mechanism for server identity verification, making it possible for a malicious process to impersonate a trusted server.

Fix: Use spec version 2025-11-25 or later, which introduced server identity. Implement additional authentication at the transport layer (e.g., OAuth 2.0 for HTTP transports).
Excessive context window consumption by tool declarations (Medium)

When many MCP servers with many tools are connected simultaneously, the tool declarations inserted into the model's context window can consume a significant portion of the available token budget, reducing the space available for actual task context.

Fix: Limit the number of simultaneously connected servers and tools. Use lazy-loading patterns or selective tool exposure based on task context.
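One way to sketch selective tool exposure: filter the connected servers' tools by task relevance and cap the total against a rough token budget. The keyword matching and per-tool token estimate here are deliberate simplifications, not an SDK feature.

```python
def select_tools(tools, task_keywords, budget_tokens=2000, tokens_per_tool=150):
    """Keep only tools whose description matches the task, capped by an
    estimated token budget for the declarations."""
    relevant = [
        t for t in tools
        if any(k in t["description"].lower() for k in task_keywords)
    ]
    return relevant[: budget_tokens // tokens_per_tool]

tools = [
    {"name": "query_db", "description": "Run a SQL query"},
    {"name": "send_email", "description": "Send an email message"},
]
selected = select_tools(tools, ["sql", "database"])
assert [t["name"] for t in selected] == ["query_db"]
```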
Sampling without explicit user consent (High)

If a host implements the Sampling primitive without requiring explicit user approval for each sampling request, servers can trigger LLM completions without user awareness, enabling potentially uncontrolled agentic behaviors.

Fix: Require explicit user consent for every sampling request. The MCP specification mandates that users must explicitly approve sampling and control what prompts are sent.
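A sketch of a consent gate on the host side: every `sampling/createMessage` request is shown to the user before being forwarded to the model. `ask_user` and `run_completion` are placeholders for the real host UI and LLM call; the error code is illustrative.

```python
def handle_sampling(request, ask_user, run_completion):
    """Route a server-initiated sampling request through explicit user
    approval before any LLM call happens."""
    preview = request["params"]["messages"]
    if not ask_user(preview):
        # Implementation-defined JSON-RPC error; the code is illustrative.
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32001, "message": "User rejected sampling"}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "result": run_completion(request["params"])}

req = {"jsonrpc": "2.0", "id": 5, "method": "sampling/createMessage",
       "params": {"messages": [{"role": "user",
                                "content": {"type": "text", "text": "hi"}}]}}
denied = handle_sampling(req, ask_user=lambda p: False,
                         run_completion=lambda p: {})
assert "error" in denied
```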

Evolution

2024
First public release of MCP
Inflection point

Anthropic open-sourced Model Context Protocol on November 25, 2024 (spec version 2024-11-05) with SDKs for Python and TypeScript and reference server implementations for Google Drive, Slack, GitHub, Postgres, Git, and Puppeteer.

2025
Adoption by OpenAI and Google DeepMind
Inflection point

In March 2025, OpenAI officially adopted MCP across its Agents SDK, Responses API, and ChatGPT desktop app. In April 2025, Google DeepMind confirmed MCP support in Gemini models. Over 1,000 community-built MCP servers were available by early 2025.

2025
Specification update 2025-11-25

The specification received major updates including asynchronous operations, statelessness support, server identity, Elicitation primitive (server-initiated user queries), and an official community-driven server registry.

2025
Protocol transferred to Agentic AI Foundation
Inflection point

In December 2025, Anthropic donated MCP governance to the Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation, co-founded by Anthropic, Block, and OpenAI.

Technical details

Hyperparameters (configurable axes)

Transport Type (High)

The communication transport between client and server: stdio for local subprocesses, or HTTP with SSE for remote servers (later spec revisions replaced HTTP+SSE with Streamable HTTP).

stdio: Local subprocess; communication via standard I/O
HTTP+SSE: Remote server; communication via HTTP with Server-Sent Events
Active Primitives (High)

Which server-side (Resources, Prompts, Tools) and client-side (Roots, Sampling, Elicitation) primitives are enabled. Capability negotiation at session start determines which are active.

Tools only: Server exposes only executable tools, no resources or prompts.
Resources + Tools: Common configuration for data-access and action servers.
Sampling (server → LLM) (Medium)

Whether servers are permitted to request LLM completions from the client side. Requires explicit user consent and client declaration of sampling capability.

true: Enables agentic, recursive LLM behaviors initiated by the server.
false: Default conservative configuration; servers cannot trigger LLM calls.

Parallelism

Parallelism level
fully_parallel

Each client maintains an independent session with a single server; a host can run multiple parallel client-server sessions simultaneously with no interdependencies between them.
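Because sessions are independent, a host can drive several client-server pairs concurrently; a sketch with `asyncio`, where `query_server` simulates one session's round trip (the server names and sleep are illustrative):

```python
import asyncio

async def query_server(name: str) -> str:
    """Simulate one client-server session's JSON-RPC round trip."""
    await asyncio.sleep(0.01)  # stands in for network/stdio latency
    return f"{name}: ok"

async def main() -> list[str]:
    # Independent sessions: no shared state, so they can run in parallel.
    names = ["github", "slack", "postgres"]
    return await asyncio.gather(*(query_server(n) for n in names))

results = asyncio.run(main())
assert results == ["github: ok", "slack: ok", "postgres: ok"]
```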

Scope
inference

Hardware requirements

Primary

MCP is a communication protocol and interface specification; it imposes no hardware requirements. It runs in any environment that can execute a process handling JSON-RPC 2.0.