MCP 201: What You Only Learn After Building an MCP Server Yourself

When I first came across the Model Context Protocol (MCP), I thought I understood it pretty well.
I had already experimented with agentic systems that used MCP-style tool calling — exposing functions, wiring integrations, and watching agents invoke capabilities through protocols like JSON-RPC. At a high level, the concepts made sense.
But I realized something important only when I went one step further: building an MCP server myself.
Not using any SDK.
Not using any MCP framework.
Just a simple Spring Boot controller implementing the protocol directly.
I added a couple of realistic tools, connected it to my application (Saga), and tested everything end-to-end using an agentic client — Cline.
And that’s when I discovered that there’s a big difference between:
- understanding MCP in theory
- and understanding MCP deeply enough to implement it correctly
This post is my attempt at an MCP “201-level” guide — the set of practical lessons that became clear only after building and testing a real MCP server hands-on.
1. MCP Looks Simple… Until the Client Actually Calls You
On paper, MCP feels straightforward:
- Expose tools over stdio or HTTP (Streamable HTTP / SSE)
- Accept JSON-RPC calls
- Return results
So I started with the smallest possible Spring Boot endpoint:
@PostMapping("/mcp")
public ResponseEntity<JsonNode> handle(@RequestBody JsonNode req) {
    String method = req.path("method").asText();
    if ("tools/list".equals(method)) {
        return ok(toolsList());
    }
    if ("tools/call".equals(method)) {
        return ok(toolCall(req.path("params")));
    }
    return error("Unknown method");
}
It worked.
And then the client (Cline) started calling it.
That’s where MCP stopped being theoretical.
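The first thing the client sends is not a tool call at all but an initialize handshake, wrapped in a full JSON-RPC 2.0 envelope, which my minimal controller above did not even handle. A rough sketch of that first request (the exact protocolVersion and clientInfo values vary by client and spec revision):
{
  "jsonrpc": "2.0",
  "id": 0,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "Cline", "version": "x.y.z" }
  }
}
Every response has to carry the same envelope back: the matching id, "jsonrpc": "2.0", and a result (or error) object. Get any of that wrong and the client simply ignores you.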
2. Streamable HTTP Is HTTP-First, Not Streaming-First
The name “Streamable HTTP” misleads many people.
I assumed:
Streamable HTTP = streaming responses
But in practice:
- It’s normal HTTP POST + JSON-RPC by default
- Streaming is optional
- Most tool calls are plain request/response
The real insight:
Streamable HTTP replaces the old “SSE transport” model, but it does not force streaming.
For most enterprise integration tools (like Saga), you’ll likely never need streaming at all.
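Concretely, a non-streaming tool call is just one POST body in and one JSON body out. A sketch using the get_rate tool I describe later (the id is arbitrary; the rate value is the same illustrative number used below):
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_rate",
    "arguments": { "ccy1": "INR", "ccy2": "USD" }
  }
}
And the response:
{
  "jsonrpc": "2.0",
  "id": 7,
  "result": {
    "content": [
      { "type": "text", "text": "{\"mid_rate\":101.3}" }
    ]
  }
}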
3. The Schema Is the Real Contract
At first I declared tools like this:
{
  "name": "add",
  "description": "Adds two numbers"
}
And then Cline called my tool with:
{ "num1": 100.3, "num2": 8 }
My server expected {a, b}.
So the result was garbage.
That was the moment I learned:
Tool descriptions are not enough. The schema is the real contract.
Good MCP servers must provide strong schemas:
- explicit properties
- required fields
- examples
- additionalProperties: false, to reject unexpected fields
"inputSchema": {
"type": "object",
"properties": {
"ccy1": {
"type": "string",
"examples": ["USD"]
},
"ccy2": {
"type": "string",
"examples": ["INR"]
}
},
"required": ["ccy1","ccy2"],
"additionalProperties": false
}
Schemas are not just validation…
They are LLM guidance.
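Putting it together, here is a sketch of how that schema sits inside the full tool declaration returned by tools/list. The description wording is mine; the overall shape follows the spec:
{
  "tools": [
    {
      "name": "get_rate",
      "description": "Returns the mid rate for a currency pair (ISO codes, e.g. USD/INR)",
      "inputSchema": {
        "type": "object",
        "properties": {
          "ccy1": { "type": "string", "examples": ["USD"] },
          "ccy2": { "type": "string", "examples": ["INR"] }
        },
        "required": ["ccy1", "ccy2"],
        "additionalProperties": false
      }
    }
  ]
}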
4. Clients Don’t Agree on Output Types
One of my biggest surprises:
I returned this:
{
  "content": [
    { "type": "json", "json": { "mid_rate": 101.3 } }
  ]
}
Perfectly reasonable, right?
Cline rejected it completely.
It only accepted:
- text
- image
- audio
- resource
So I had to fall back to:
{
  "content": [
    {
      "type": "text",
      "text": "{\"mid_rate\":101.3}"
    }
  ]
}
Lesson:
The MCP spec tells you what is allowed. The client tells you what works.
Start simple. Text is universal.
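In practice I ended up routing every tool result through one small helper. A sketch using Jackson's ObjectMapper (the same one the Spring controller already uses); the method name asTextContent is my own:
// Wrap any result object as a "text" content item, since text is the one
// type every client I tested accepts.
private ObjectNode asTextContent(ObjectMapper mapper, Object result) throws JsonProcessingException {
    ObjectNode item = mapper.createObjectNode();
    item.put("type", "text");
    item.put("text", mapper.writeValueAsString(result));   // serialize the payload into a JSON string

    ObjectNode wrapper = mapper.createObjectNode();
    wrapper.set("content", mapper.createArrayNode().add(item));
    return wrapper;
}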
5. Schemas Guide the LLM — They Don’t Enforce Anything
Many developers assume:
If a schema says a field is required, the client will obey.
Reality:
- The LLM “tries”
- The client mostly passes through
- Mistakes still happen
Therefore:
Server-side validation is mandatory.
In my example:
if (ccy1 == null) {
    return toolError("Missing required field: ccy1");
}
The server is the only place correctness is guaranteed.
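For reference, a helper like toolError can return something like the payload below. The spec models tool failures as an ordinary result with isError set to true, keeping JSON-RPC errors for protocol-level problems; the message wording here is mine:
{
  "content": [
    { "type": "text", "text": "Missing required field: ccy1" }
  ],
  "isError": true
}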
6. Session IDs Exist Only If the Server Issues Them
I expected session IDs to show up in JSON-RPC payloads.
They don’t.
In Streamable HTTP, session identity lives in an HTTP header (Mcp-Session-Id), not in the JSON-RPC payload.
And here’s the key:
The server decides whether sessions exist at all.
If you don’t return a session id during initialize, the client never sends one back.
Stateless is the default.
Statefulness is opt-in.
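A sketch of what opting in looks like in the Spring controller. The helpers initializeResult and dispatch are placeholders; the important parts are issuing the header on initialize and reading it on later requests:
private static final String SESSION_HEADER = "Mcp-Session-Id";

@PostMapping("/mcp")
public ResponseEntity<JsonNode> handle(
        @RequestHeader(value = SESSION_HEADER, required = false) String sessionId,
        @RequestBody JsonNode req) {

    if ("initialize".equals(req.path("method").asText())) {
        // Issue a session id; the client will echo it back on every later request.
        String newSession = UUID.randomUUID().toString();
        return ResponseEntity.ok()
                .header(SESSION_HEADER, newSession)
                .body(initializeResult(req));
    }

    // Omit the header above and sessionId stays null: stateless by default.
    return ok(dispatch(req, sessionId));
}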
7. Why Stdio Still Dominates (and Isn’t “Childish”)
This one deserves honesty.
When I first read MCP, I thought:
Why are we talking about stdio? This feels childish.
Now I understand why stdio is everywhere.
Because stdio is not childish.
Stdio is secure by design.
Why stdio remains the default transport:
- No open ports
- No auth headaches
- No CORS
- No network exposure
- Runs as a sandboxed subprocess
- Perfect for local IDE integrations
Stdio is basically:
Unix philosophy applied to agent tools.
Streamable HTTP is for distributed tool microservices.
Stdio is for local plugin processes.
Both are real-world patterns.
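For completeness, wiring a stdio server into a client like Cline or Claude Desktop is usually just a config entry telling the client which subprocess to spawn, roughly like the sketch below. The server name and jar are placeholders, and the exact config file location and fields vary by client:
{
  "mcpServers": {
    "saga-tools": {
      "command": "java",
      "args": ["-jar", "saga-mcp-server.jar"]
    }
  }
}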
8. Tools Are Verbs, Resources Are Nouns
Many people think:
Resources are just tools that return data.
Not true.
Tools are actions the agent asks the server to perform (get_rate, email_tool).
Resources are retrievable objects the agent can read:
saga://endpoints
saga://batch/123/status
Tools are verbs. Resources are nouns.
This distinction becomes important as soon as you build more than toy tools.
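A sketch of how those resources would be announced via resources/list. The names and mimeType values are my own choices; the uri/name/mimeType shape comes from the spec:
{
  "resources": [
    {
      "uri": "saga://endpoints",
      "name": "Configured Saga integration endpoints",
      "mimeType": "application/json"
    },
    {
      "uri": "saga://batch/123/status",
      "name": "Status of batch 123",
      "mimeType": "application/json"
    }
  ]
}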
9. MCP Is Not a REST Framework
One subtle misconception:
Developers treat MCP servers like REST APIs.
But MCP is not about CRUD.
MCP is an agent capability layer.
This is one of the biggest “hello-world to real-world” transitions in MCP.
Hello-world MCP servers expose tools like:
- “add two numbers”
- “echo a string”
- “lookup weather”
But real-world MCP servers expose operational capabilities:
- “initiate the end-of-day settlement batch and track completion”
- “route this payment message to the correct downstream network”
- “recover a failed integration workflow from the last checkpoint”
- “escalate an exception case with audit context for human review”
The protocol doesn’t change — but the maturity of the tools does.
MCP becomes truly powerful only when tools move beyond demos and start representing real business actions an agent can reason with.
The mental model is different.
10. Fewer, Sharper Tools Win
Another practical insight:
A server with 100 tools is worse than one with 10 sharp tools.
Too many tools:
- confuse selection
- dilute grounding
- increase hallucination
MCP rewards minimal, precise capability surfaces.
11. MCP Servers Start Stateless… Then Become Stateful
Hello-world MCP servers are stateless.
Real enterprise MCP servers evolve into:
- session-scoped permissions
- tenant-specific tool lists
- async workflow progress
- runtime resources
MCP 101 is calling a tool.
MCP 201 is building an ecosystem around it.
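A hint of what that evolution looks like in code: the same tools/list handler starts consulting who is asking. Everything here (tenantOf, registry) is a placeholder sketch, not part of MCP itself:
private JsonNode toolsList(String sessionId) {
    // Tenant-specific tool lists: only advertise what this caller may use.
    String tenant = tenantOf(sessionId);
    return registry.toolsVisibleTo(tenant);
}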
12. Compatibility Reality: The Client Is King
Final meta-lesson:
- MCP is evolving fast.
- Clients implement subsets.
- Servers must stay conservative.
Build for what clients actually accept today, not what the spec might allow tomorrow.
13. Experiencing the Agentic Loop
One of the most surprising moments for me wasn't protocol-related at all.
It was behavioral.
My first realistic tool was get_rate(ccy1, ccy2) backed by a simple REST API.
Internally, I only maintained a couple of USD-based currency pairs.
Then I tested this prompt:
What’s today’s rate between the rupee and the dollar?
Notice: I never said INR or USD.
Yet the agent inferred the tool arguments correctly and invoked:
{ "ccy1": "INR", "ccy2": "USD" }
In another test, I even asked casually in Hindi:
“Rupee aur dollar ka aaj ka rate kya hai?”
And again, it mapped “rupee” and “dollar” into the correct ISO codes.
Even more interesting: when the first call didn't succeed, the agent tried again with the reverse pair.
Later, when I asked for INR → SGD, it struggled — because my backend only supported USD-based pairs.
Eventually, with a hint, it reasoned and converted using the ‘through-currency’:
INR → USD → SGD
That was the first time I truly experienced the agentic loop:
- trial
- correction
- exploration
- tool composition
In another test, I chained multiple tools together in a more realistic workflow: fetching accounts via an account_fetch tool, converting balances into INR using get_rate, checking which accounts fell below a minimum balance threshold, and finally triggering an email_tool to notify the account holder — essentially the kind of end-to-end task a human relationship manager would do manually.
That was the moment MCP stopped feeling like mere “tool invocation” and started feeling like delegating real operational work to an intelligent system that discovers how to use capabilities.
14. Production MCP Lives Inside Enterprise Guardrails
One thing that becomes obvious the moment you build a real MCP server is this:
Tool calling is not a toy problem.
As soon as agents can invoke real integrations — start batch jobs, trigger workflows, query customer systems — security becomes the main concern.
The good news is: MCP does not require reinventing anything.
All the security patterns we have used for decades still apply:
- Bearer tokens
- OAuth2 / OIDC
- API gateways
- mTLS
- RBAC
- Request auditing
In fact, MCP makes auditing even more important.
Because now you don’t just need to know:
“Who called this API?”
You need to know:
- Which agent invoked the tool?
- On behalf of which user?
- With what parameters?
- Was the call allowed?
- Should it be logged and reviewed?
Agentic systems amplify capability — and therefore amplify responsibility.
The protocol may be simple, but production MCP must sit inside the same governance frameworks we already trust.
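A sketch of what that looks like at the edge of the same Spring controller. tokenService and auditLog are placeholders for whatever your organization already uses for OAuth2/OIDC validation and audit trails; they are not part of MCP:
@PostMapping("/mcp")
public ResponseEntity<JsonNode> handle(
        @RequestHeader(value = "Authorization", required = false) String authHeader,
        @RequestBody JsonNode req) {

    // Standard bearer-token validation (OAuth2 / OIDC) before anything else.
    String user = tokenService.validateBearer(authHeader);
    if (user == null) {
        return ResponseEntity.status(401).build();
    }

    if ("tools/call".equals(req.path("method").asText())) {
        // Audit: which user/agent invoked which tool, with what parameters.
        auditLog.record(user,
                req.path("params").path("name").asText(),
                req.path("params").path("arguments"));
    }

    return ok(dispatch(req));
}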
Closing: MCP 101 Is Reading — MCP 201 Is Building
MCP isn’t hard.
But MCP is exact.
And the only way to truly understand MCP is to build one.
Because MCP becomes real only when your server meets a real client.
That’s where MCP 201 begins.