MCP protocol: how Anthropic is quietly reshaping the whole AI ecosystem
When Anthropic announced the Model Context Protocol (MCP), most people shrugged it off. "Another standard," they said. I read it, tried it, and now I'm convinced: MCP is the most important thing Anthropic has done since Claude itself.
What is MCP?
MCP is an open protocol that defines how an AI model talks to external tools and data sources. Think of it as USB for AI — one standard that works with everything.
Before MCP: Every AI tool had its own way of integrating. ChatGPT plugins worked differently from Claude tools, which worked differently from GPT function calling. Developers wrote a separate integration for each platform.
After MCP: You write an MCP server once and it works with Claude Code, Cursor, VS Code, and any other client that supports MCP.
Why I'm excited
I use Claude Code every day. And thanks to MCP, I can give Claude access to:
- The database — it queries data directly, no more copy-pasting SQL results
- GitHub — reads issues, opens PRs, comments on code reviews
- Figma — sees the design and generates matching code
- Sentry — sees error logs and helps with debugging
- My own APIs — anything I wrap in an MCP server
Example — yesterday I was debugging a production bug:
> claude "look at Sentry, find the latest error
in /api/checkout and fix it"
Through MCP, Claude read the Sentry log, identified a null pointer exception, found the right file in my repo, fixed the bug, and wrote a test. The whole thing took 2 minutes.
Without MCP I would have had to: open Sentry → find the error → copy the stack trace → paste into Claude → get a solution → apply it by hand. 15 minutes instead of 2.
How MCP works (in plain terms)
MCP defines three concepts:
- Resources — data the AI can read (files, database records, API responses)
- Tools — actions the AI can perform (create a file, send a request, run a query)
- Prompts — pre-built instructions for specific tasks
An MCP server is a simple program that exposes these three things through a standardized interface. The client (Claude Code, Cursor…) connects and knows what it can use.
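How does the client know what it can use? Under the hood MCP speaks JSON-RPC 2.0, and on connect a client can call the `tools/list` method. A hedged sketch of the response shape (field names follow the MCP spec; the `query` tool here is a hypothetical example):

```typescript
// Hedged sketch of an MCP tools/list result, per the spec's field names.
interface ToolDescriptor {
  name: string;
  description?: string;
  inputSchema: { type: "object"; properties?: Record<string, unknown> };
}

// What a client might receive back from tools/list:
const toolsListResult: { tools: ToolDescriptor[] } = {
  tools: [
    {
      name: "query",
      description: "Run a SQL query against the dev database",
      inputSchema: {
        type: "object",
        properties: { sql: { type: "string" } },
      },
    },
  ],
};

console.log(toolsListResult.tools.map((t) => t.name).join(","));
```

Each tool advertises a JSON Schema for its input, which is how the model knows what arguments to pass.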
```typescript
// Minimal MCP server exposing database access
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({
  name: "my-db",
  version: "1.0.0",
});

server.tool("query", { sql: z.string() }, async ({ sql }) => {
  const result = await db.execute(sql); // db: your database client
  return { content: [{ type: "text", text: JSON.stringify(result) }] };
});

// Expose the server over stdio so clients like Claude Code can connect
await server.connect(new StdioServerTransport());
```
That's it. A few lines of code and Claude has access to your database.
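When Claude actually uses the tool, the traffic is a `tools/call` request and a wrapped result. A hedged sketch of one such exchange, following the method and field names in the MCP spec (the SQL and the result values are hypothetical):

```typescript
// Hedged sketch of the JSON-RPC 2.0 messages behind one tool call.
// A client asks the server to invoke the "query" tool:
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "query",
    arguments: { sql: "SELECT count(*) FROM orders" },
  },
};

// The server replies with the tool's output in a content array:
const response = {
  jsonrpc: "2.0" as const,
  id: 1,
  result: {
    content: [{ type: "text", text: JSON.stringify([{ count: 42 }]) }],
  },
};

console.log(response.result.content[0].text);
```

The `content` array is the same shape the server code above returns, which is what makes the handler and the wire format line up.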
Why it's a bigger deal than plugins
ChatGPT plugins were a closed ecosystem. OpenAI decided what made it into the store, and the format was proprietary.
MCP is:
- Open — MIT licence, anyone can implement it
- Standardized — one format for all clients
- Local — the MCP server runs on your machine, your data doesn't leave
- Composable — you can hook up several servers at once
And the best part: everyone else is adopting MCP too. Cursor, VS Code Copilot, Windsurf — they're all adding MCP support. Anthropic built a standard that even the competition uses. That's the power of an open approach.
My MCP setup
I currently have these connected in Claude Code:
| MCP Server | What it does |
|------------|--------------|
| GitHub | Issues, PRs, code review |
| PostgreSQL | Direct queries to the dev database |
| Filesystem | Extended file access outside the repo |
| Vercel | Deploy status, logs, env variables |
Setup is simple — you add the servers to the Claude Code config and that's it. No fiddly configuration.
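For a project-level setup, the registration can be sketched like this, assuming the `.mcp.json` file Claude Code reads from the project root (the server names, paths, package name, and token placeholder here are illustrative; check each server's README for the real command):

```json
{
  "mcpServers": {
    "my-db": {
      "command": "node",
      "args": ["./mcp/my-db-server.js"]
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your token>" }
    }
  }
}
```

Each entry is one server; this is also where the composability shows up, since the client simply connects to every server listed.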
What's missing?
Authorization and security. MCP doesn't yet have a robust auth model. Fine for local use, but enterprise deployments will need more. Anthropic is working on it — OAuth support is already in beta.
Discovery. There's no central registry of MCP servers. You search GitHub and hope. It'll get better over time — but right now it's a bit wild west.
Error handling. When an MCP server crashes, Claude sometimes doesn't handle it gracefully. It gets better with every release, but you'll hit edge cases.
Why you should keep an eye on MCP
Even if you don't use Claude Code today, MCP is changing the rules:
- AI assistants will get access to your tools — not through hacks, but through a standard
- Tool developers will write one MCP server instead of five integrations
- Enterprises will be able to safely connect AI to internal systems
With MCP, Anthropic is showing that it doesn't just want the best model — it wants to define how AI talks to the world. And so far they're pulling it off.