MCP Interaction Workflow: A Step-by-Step Example
Step 5 - Tool Calling, Execution and Structured Output
At this stage, the LLM’s output is just a string of text. The host application intercepts this output before the user sees it. The host performs the following steps:
- Extraction: It parses the tool name and arguments out of the LLM's output (for example, by decoding a structured tool-use block or matching a known output format).
- Validation: It checks the `location` argument against the JSON Schema of the `get_air_quality` primitive to ensure it's a valid string.
- Safety Check: The Host may prompt the user for permission (e.g., `Allow Claude to call 'get_air_quality'?`) or check internal security policies.
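The three steps above can be sketched in a few lines of Python. Everything here is illustrative: the `model_output` format, the variable names, and the `user_approves` placeholder are assumptions about one possible host implementation, not part of the MCP specification.

```python
import json

# Raw model output (illustrative format; real hosts receive tool calls in
# whatever structure their LLM API emits).
model_output = '{"tool": "get_air_quality", "arguments": {"location": "San Francisco"}}'

# Extraction: parse the tool name and arguments out of the text.
intent = json.loads(model_output)
tool_name = intent["tool"]
arguments = intent["arguments"]

# Validation: check the arguments against the tool's declared JSON Schema.
# A real host would typically use a JSON Schema validator library; a manual
# type check keeps this sketch dependency-free.
schema = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}
for field in schema["required"]:
    assert field in arguments, f"missing required argument: {field}"
assert isinstance(arguments["location"], str), "location must be a string"

# Safety check: ask the user (or a policy engine) before executing.
def user_approves(tool: str) -> bool:
    # Placeholder: a real host would show a confirmation dialog here.
    return True

if user_approves(tool_name):
    print(f"Approved call to {tool_name} with {arguments}")
```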
If the intent is valid, the MCP client takes that raw intent and wraps it into a formal JSON-RPC 2.0 message. This adds the necessary protocol overhead (ID, method, and version) required by the MCP specification:
{
"jsonrpc": "2.0",
"method": "tools/call",
"params": {
"name": "get_air_quality",
"arguments": {
"location": "San Francisco"
}
},
"id": 1
}
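Constructing that envelope is mechanical once the intent is validated. A minimal sketch, assuming the tool name and arguments have already been extracted (the `build_tool_call` helper is hypothetical, not an MCP API):

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int) -> str:
    """Wrap a validated tool intent into a JSON-RPC 2.0 request string."""
    request = {
        "jsonrpc": "2.0",
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
        "id": request_id,
    }
    return json.dumps(request)

packet = build_tool_call("get_air_quality", {"location": "San Francisco"}, 1)
print(packet)
```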
The MCP client then sends this formatted packet over the established transport (such as stdio or HTTP) to the server. The server executes the underlying code and returns the result to the client:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"content": [
{
"type": "text",
"text": "AQI is 42 (Good) in San Francisco. PM2.5 is 8 ug/m3."
}
],
"isError": false
}
}
The client then passes this result back to the LLM, which uses it to formulate the final answer for the user.
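On the client side, "passing it back" usually means unwrapping the `content` array into plain text the model can read. A minimal sketch, where `response` mirrors the example response above:

```python
# Example response as a Python dict, mirroring the JSON shown above.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "AQI is 42 (Good) in San Francisco. PM2.5 is 8 ug/m3."}
        ],
        "isError": False,
    },
}

# Collect the text blocks so they can be appended to the model's context.
result = response["result"]
texts = [block["text"] for block in result["content"] if block["type"] == "text"]
tool_output = "\n".join(texts)
print(tool_output)
```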
MCP at this level is agnostic to what the tool actually does. The tool developer implements any logic they want behind the scenes; the MCP server just provides a standardized interface for the model to trigger that logic and get results back. The parsing, validation, safety logic, and any other logic around tool calling are the responsibility of the application (various frameworks and libraries can help with this, but none of it is dictated by the MCP protocol). Therefore, some of the processes described in this step vary between implementations, but the transport and the data structure of the request and response messages are standardized by MCP.
At this level, if the call fails due to a problem inside the tool itself (for example, invalid location format, upstream timeout, or internal exception), the server returns the error inside a normal result with isError set to true. This allows the AI model to reason about the failure based on the error message and decide how to proceed (for example, it could ask the user for a different location or try a different tool):
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"content": [
{
"type": "text",
"text": "Error: 'San Francsico' is not a valid location."
}
],
"isError": true
}
}
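A client-side sketch of how such a failure might be surfaced to the model. The error message and the branching logic here are illustrative assumptions; the only MCP-defined part is the `isError` flag inside `result`:

```python
# Example tool-level error response as a Python dict; the error text is
# illustrative.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "Error: 'San Francsico' is not a valid location."}
        ],
        "isError": True,
    },
}

result = response["result"]
text = result["content"][0]["text"]
if result.get("isError"):
    # The failure travels back as ordinary content, so the model can read
    # the message and decide how to recover (retry, ask the user, etc.).
    print(f"Tool reported an error: {text}")
else:
    print(text)
```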
