
MCP (Model Context Protocol): Connecting AI to External Tools
MCP lets AI read files, query databases, and call APIs through a standardized protocol. Think of it as USB for AI tool connections.


When I first started using AI assistants, I kept hitting the same wall. "Can you read this file on my computer?" No. "Can you query my database?" No. "Can you check my GitHub issues?" No.
The AI was smart, but it had no hands. It lived in a sealed bubble, disconnected from the actual tools I needed it to interact with.
Then function calling appeared. You could tell the AI, "Hey, here are some functions you can call," and it would use them when needed. But there was a catch: every AI platform did it differently. OpenAI had their format, Anthropic had theirs, and if you wanted to connect the same tool to multiple AIs, you had to rebuild it each time.
Think about the world before USB. Every device had a different connector. Printers used parallel ports, keyboards used PS/2, mice had their own thing. The back of your computer looked like a museum of cable types. When USB arrived, everyone realized: "Oh, we can just standardize this."
That's what MCP is for AI. A standard protocol for connecting AI to external tools. Build it once, use it everywhere. Let me explain why this matters and how it actually works.
MCP (Model Context Protocol) is an open protocol created by Anthropic. It defines a standard way for AI applications (Hosts) to communicate with external tools (Servers).
The key insight is standardization. Before MCP, every AI app had its own way of connecting to tools. With MCP, if you follow the protocol, any AI app can plug in.
The architecture looks like this:
Host (Claude Desktop)
↓
Client (MCP Client)
↓ JSON-RPC
Server (File System MCP Server)
Think of it like electrical outlets. If the plug (protocol) matches, any appliance (tool) can connect. The Host is the building, the Client is the outlet, and the Server is the appliance.
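Before any tools get used, the Client and Server negotiate a session. As a rough sketch of what that handshake looks like on the wire (the field names follow the MCP spec; the version string and names here are illustrative):
// Client -> Server: initialize request
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2024-11-05",
    "capabilities": {},
    "clientInfo": { "name": "claude-desktop", "version": "1.0.0" }
  }
}

// Server -> Client: the server advertises what it supports
{
  "jsonrpc": "2.0",
  "id": 0,
  "result": {
    "protocolVersion": "2024-11-05",
    "capabilities": { "tools": {} },
    "serverInfo": { "name": "filesystem-server", "version": "1.0.0" }
  }
}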
MCP provides three fundamental building blocks:
Tools are functions the AI can directly invoke: things like "read file", "query database", "call API".
// Define a tool in your MCP Server
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [{
      name: "read_file",
      description: "Read contents of a file",
      inputSchema: {
        type: "object",
        properties: {
          path: { type: "string" }
        },
        required: ["path"]
      }
    }]
  };
});

// Handle tool execution
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "read_file") {
    const content = await fs.readFile(request.params.arguments.path, "utf-8");
    return { content: [{ type: "text", text: content }] };
  }
});
This looks similar to function calling, but the difference is standardization. The same tool works with OpenAI, Claude, or any other MCP-compatible host.
Resources are data sources the AI can read from: files, database records, API responses. If Tools are "actions", Resources are "data".
server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [{
      uri: "file:///project/README.md",
      name: "Project README",
      mimeType: "text/markdown"
    }]
  };
});

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  // fetchResource is your own loader for the given URI
  const content = await fetchResource(request.params.uri);
  return { contents: [{ uri: request.params.uri, text: content }] };
});
AI needs context. When you say "tell me about this project", it needs to pull information from somewhere. Resources standardize how that happens.
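On the wire, Hosts use two standard methods for this: resources/list to discover what's available and resources/read to fetch it. A sketch of the read exchange for the README resource above:
// Request
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "resources/read",
  "params": { "uri": "file:///project/README.md" }
}

// Response
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "contents": [{
      "uri": "file:///project/README.md",
      "mimeType": "text/markdown",
      "text": "# Project Title\n..."
    }]
  }
}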
Prompts are reusable prompt templates: pre-defined workflows for common tasks.
server.setRequestHandler(ListPromptsRequestSchema, async () => {
  return {
    prompts: [{
      name: "review_code",
      description: "Review code for best practices",
      arguments: [{
        name: "file_path",
        description: "Path to the code file",
        required: true
      }]
    }]
  };
});
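Listing is only half the story: the Host then fetches a prompt with prompts/get and gets back ready-to-use messages. A minimal sketch of that handler, assuming the SDK's GetPromptRequestSchema (the template text is my own):
server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  if (request.params.name === "review_code") {
    // Fill the template with the caller's arguments
    const filePath = request.params.arguments.file_path;
    return {
      messages: [{
        role: "user",
        content: { type: "text", text: `Review the code in ${filePath} for best practices.` }
      }]
    };
  }
  throw new Error(`Unknown prompt: ${request.params.name}`);
});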
I'll be honest, I don't use Prompts much yet. Tools and Resources feel more intuitive. But I can see them being useful for teams sharing common workflows.
MCP uses JSON-RPC 2.0. The message format is straightforward:
// Request
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "read_file",
    "arguments": { "path": "/project/README.md" }
  }
}

// Response
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "# Project Title\n..." }]
  }
}
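Errors follow plain JSON-RPC 2.0 conventions too: the Server returns an error object instead of a result, using the standard codes (for example, -32602 for invalid params):
// Error response
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32602,
    "message": "Invalid params: path is required"
  }
}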
There are two transport options: stdio for local servers and HTTP with Server-Sent Events (SSE) for remote ones.
Most implementations use stdio. Claude Desktop runs MCP servers via stdio, spawning each one as a child process. It's simple and fast.
Here's a basic file system MCP server I built:
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CallToolRequestSchema, ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";
import fs from "fs/promises";

const server = new Server({
  name: "filesystem-server",
  version: "1.0.0"
}, {
  capabilities: {
    tools: {}
  }
});

// List available tools
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "read_file",
        description: "Read contents of a file",
        inputSchema: {
          type: "object",
          properties: {
            path: { type: "string", description: "File path to read" }
          },
          required: ["path"]
        }
      },
      {
        name: "list_directory",
        description: "List files in a directory",
        inputSchema: {
          type: "object",
          properties: {
            path: { type: "string", description: "Directory path" }
          },
          required: ["path"]
        }
      }
    ]
  };
});

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;

  if (name === "read_file") {
    const content = await fs.readFile(args.path, "utf-8");
    return {
      content: [{ type: "text", text: content }]
    };
  }

  if (name === "list_directory") {
    const files = await fs.readdir(args.path);
    return {
      content: [{ type: "text", text: files.join("\n") }]
    };
  }

  throw new Error(`Unknown tool: ${name}`);
});

// Start server
const transport = new StdioServerTransport();
await server.connect(transport);
Package this as an npm module, then add it to Claude Desktop's config:
{
  "mcpServers": {
    "filesystem": {
      "command": "node",
      "args": ["/path/to/filesystem-server/index.js"]
    }
  }
}
Now when I ask Claude to "read my project's README", it actually does. Feels like magic.
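You don't even need Claude Desktop to exercise a server. Here's a minimal sketch of the Host side, assuming the TypeScript SDK's Client and StdioClientTransport and its listTools/callTool helpers (paths are illustrative):
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the server as a child process and talk to it over stdin/stdout
const transport = new StdioClientTransport({
  command: "node",
  args: ["/path/to/filesystem-server/index.js"]
});

const client = new Client({ name: "demo-host", version: "1.0.0" }, { capabilities: {} });
await client.connect(transport);

// Discover the server's tools, then call one
const { tools } = await client.listTools();
const result = await client.callTool({ name: "read_file", arguments: { path: "/project/README.md" } });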
How does MCP relate to function calling and plugins? This confuses people. All three seem similar, so what's the difference?
Function Calling is a capability. The ability for AI to invoke functions. Both OpenAI and Anthropic support it, but their implementations differ.
Plugins are platform-specific extensions. ChatGPT Plugins only work with OpenAI. They're proprietary.
MCP is a protocol. A standard. Build once, run anywhere. Just like HTTP is the standard for web servers, MCP aims to be the standard for AI tool connections.
An analogy: function calling is the raw ability to plug something in, a plugin is a proprietary connector that only fits one brand's socket, and MCP is USB, the single standard connector that fits everything.
MCP servers I actually use:
Filesystem: read and write local files. Claude can directly access my project files.
Database: query databases. Ask "how many users signed up last week" and get actual results.
GitHub: call the GitHub API. "Show me recent issues", "create a PR", it just works.
Code search: semantic codebase navigation. "Find all callers of this function" returns precise results.
Because it's a standard protocol, once I configure these servers, they work across all MCP Hosts. The same setup I use in Claude Desktop also works in my VS Code extension.
Anthropic released MCP as open source, and the community response has been strong.
This is really the core of AI agents. It's not just about making AI smarter. It's about connecting AI to the real world. Reading data, using tools, taking actions.
MCP standardizes that. Just like USB standardized peripherals, MCP standardizes AI tools. I expect every major AI app will eventually support MCP. Or they'll stay walled off and lose the network effect.
Standards win because of network effects. More MCP servers make MCP Hosts more valuable. More Hosts make building servers more attractive. It's a virtuous cycle.
Understanding MCP changed how I think about AI development. AI isn't an isolated model anymore. It's an agent connected to tools.
Key takeaways:
MCP is an open protocol that standardizes AI-to-tool connections: build a server once, plug it into any MCP Host.
Three primitives cover most needs: Tools (actions), Resources (data), Prompts (reusable templates).
The wire format is plain JSON-RPC 2.0 over stdio or HTTP/SSE, so servers are easy to write and debug.
What surprised me is how easy it is to build MCP servers. You can wrap existing libraries with the MCP protocol, and suddenly AI can use them.
Going forward, I think "AI integration" will become table stakes. Every SaaS, every tool will provide an MCP server. Then AI becomes genuinely integrated with our entire work environment.
That's the AI agent we actually wanted. Not just a chatbot, but a coworker that can do real work. MCP is paving that path.