# Your First MCP Client

## Testing the MCP Client
Now, ask the model to list the available tools:
```shell
# Start the MCP server
cd $HOME/workspace/mcp-server
uv run python calculator_server.py
```

```shell
# In another terminal, run the MCP client with a question
cd $HOME/workspace/mcp-client
uv run python client.py "List the tools you have access to"
```
The output should show 2 tools. Here is an example of what the output might look like:
I have access to the following tools:

- `functions.sum`
  - Purpose: Returns the sum of two numbers.
  - Parameters: `{ a: integer, b: integer }`
  - Example call: `functions.sum({ a: 3, b: 5 })`
- `multi_tool_use.parallel`
  - Purpose: Runs multiple `functions.*` tools in parallel (only `functions` namespace tools are permitted).
  - Parameters: `{ tool_uses: [ { recipient_name: "functions.", parameters: { ... } }, ... ] }`
  - Example call: `multi_tool_use.parallel({ tool_uses: [ { recipient_name: "functions.sum", parameters: { a: 2, b: 4 } } ] })`
Your output may vary, but the important part is that the model recognizes it has access to tools and can list them. These are the two tools the model can use in this conversation:

`functions.sum` is our tool. `multi_tool_use.parallel` is a special tool that allows OpenAI models to call multiple tools in parallel. It's not an MCP tool but a built-in OpenAI tool.
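For the model to see an MCP tool at all, the client has to translate the tool metadata it gets from the server into the OpenAI function-calling schema. Here is a minimal sketch of that translation; the helper name `mcp_tool_to_openai` is illustrative (not part of any SDK), and the real client would read the name, description, and input schema from the server's tool listing rather than hard-coding them:

```python
def mcp_tool_to_openai(name, description, input_schema):
    """Wrap one MCP tool definition as an entry for the OpenAI `tools` list."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            # MCP tool input schemas are already JSON Schema, so they can
            # be passed through to OpenAI's `parameters` field directly.
            "parameters": input_schema,
        },
    }

# Hard-coded stand-in for what the server would report for our sum tool.
sum_tool = mcp_tool_to_openai(
    "sum",
    "Returns the sum of two numbers.",
    {
        "type": "object",
        "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
        "required": ["a", "b"],
    },
)
print(sum_tool["function"]["name"])  # -> sum
```

Because both sides speak JSON Schema, the conversion is mostly a re-wrapping exercise; that is what makes bridging MCP servers to OpenAI models so lightweight.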
Let's ask the model to call the `functions.sum` tool:
```shell
cd $HOME/workspace/mcp-client
# Run the MCP client with a question that requires tool use
uv run python client.py "What is 2 + 2?"
```
The LLM may not always call the tool here, since the question is simple and the answer can be generated without an external tool. You can use this prompt instead to get an unambiguous tool call:
```shell
uv run python client.py \
  "What is 2 + 2? Use the calculator tool and tell me when you use it."
```
The output should show the tool call and the final answer. Here is an example of what the output might look like:
```
OpenAI response:
I used the calculator tool to compute 2 + 2.
The result is **4**.
```
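Under the hood, the client inspects the model's response for tool calls, executes each one against the MCP server, and appends the results to the conversation before asking the model for its final answer. A minimal sketch of that dispatch step, using plain dicts in place of the real OpenAI response objects (the field names mirror the Chat Completions tool-call format, and the local `TOOLS` registry stands in for calls to the MCP server):

```python
import json

# Stand-in for tools exposed by the MCP server.
TOOLS = {"sum": lambda a, b: a + b}

def run_tool_calls(tool_calls):
    """Execute each tool call and build the 'tool' role messages that
    get appended to the conversation before the follow-up request."""
    messages = []
    for call in tool_calls:
        name = call["function"]["name"]
        # OpenAI delivers tool arguments as a JSON string.
        args = json.loads(call["function"]["arguments"])
        result = TOOLS[name](**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": str(result),
        })
    return messages

# A tool call shaped like one the model might return for "What is 2 + 2?"
calls = [{"id": "call_1",
          "function": {"name": "sum", "arguments": '{"a": 2, "b": 2}'}}]
print(run_tool_calls(calls))  # one 'tool' message with content "4"
```

In the real client, the body of the loop would call the MCP server instead of a local lambda, but the bookkeeping around `tool_call_id` and the `"tool"` role message is the same.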
We can also update our code to force it to call a tool, either by adding a system prompt:

```python
messages = [
    {
        "role": "system",
        # Example instruction; adjust the wording to your use case.
        "content": "Always use the available tools to answer math questions.",
    },
    {"role": "user", "content": question},
]
```
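The other option is to force tool use at the API level with the Chat Completions `tool_choice` parameter: `"required"` makes the model call at least one tool, and a dict pins a specific tool by name. A sketch of the request parameters (the model name is illustrative, and the `tools` entry mirrors the schema our client builds from the MCP server):

```python
messages = [{"role": "user", "content": "What is 2 + 2?"}]
tools = [{
    "type": "function",
    "function": {
        "name": "sum",
        "description": "Returns the sum of two numbers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
            "required": ["a", "b"],
        },
    },
}]

request_kwargs = {
    "model": "gpt-4o-mini",  # illustrative model name
    "messages": messages,
    "tools": tools,
    # Force at least one tool call; to pin a specific tool instead:
    # "tool_choice": {"type": "function", "function": {"name": "sum"}},
    "tool_choice": "required",
}
# response = client.chat.completions.create(**request_kwargs)
print(request_kwargs["tool_choice"])  # -> required
```

A system prompt nudges the model, while `tool_choice` guarantees the behavior, so the latter is the safer choice when a tool call is mandatory.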
