Building a Functional MCP Client
The Client Code
We will not walk through every line of the code here, since most of the patterns were covered in previous chapters; we will focus on the new features we are going to implement. The full code is available in the companion kit.
Our client starts by configuring its logging, then loading and reading the configuration from the .env file:
logging.basicConfig(level=logging.WARNING, format="%(message)s")
logging.getLogger("mcp.server").setLevel(logging.INFO)
load_dotenv()
# Read configuration from environment
MCP_SERVER_URL = os.getenv("MCP_SERVER_URL", "http://localhost:8000/mcp")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") # Your OpenAI API key
MODEL = os.getenv("OPENAI_MODEL", "gpt-5-mini") # Model for chat
MAX_HISTORY = int(os.getenv("MAX_HISTORY", "10"))
SYSTEM_PROMPT = os.getenv("SYSTEM_PROMPT")
- MCP_SERVER_URL: The URL of the MCP server we will interact with.
- OPENAI_API_KEY: The OpenAI API key used to call the LLM.
- MODEL: The specific model we want to use for our chat interactions.
- MAX_HISTORY: The maximum number of previous interactions to keep in the context for the LLM. Since we're going to launch an interactive chat on the client side, we need to keep track of the conversation history and send it to the LLM for better responses. This variable controls how many previous interactions we want to include in the context.
- SYSTEM_PROMPT: The initial system prompt that sets the behavior of the assistant (the LLM).
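For reference, a .env file for this client might look like the following. The values shown here are placeholders, not real credentials or the course's exact defaults:

```
MCP_SERVER_URL=http://localhost:8000/mcp
OPENAI_API_KEY=sk-...your-key-here...
OPENAI_MODEL=gpt-5-mini
MAX_HISTORY=10
SYSTEM_PROMPT=You are PuppyGuide, a friendly assistant for dog owners.
```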
Next, we will set up the MCP client and configure the handlers:
from handlers.elicitation import elicitation_handler
from handlers.logging import log_handler
from handlers.progress import progress_handler
from handlers.sampling import sampling_handler
mcp_client = Client(
    MCP_SERVER_URL,
    # These handlers respond to server-initiated requests:
    elicitation_handler=elicitation_handler,  # When the server asks for user input
    sampling_handler=sampling_handler,        # When the server asks the client's LLM for a completion
    log_handler=log_handler,                  # When the server sends log messages
    progress_handler=progress_handler,        # When the server sends progress updates
)
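To give a feel for what lives in the handlers package, here is a minimal sketch of what handlers/logging.py could look like. This is an illustrative assumption, not the companion kit's exact code; it assumes FastMCP invokes the handler with a message object exposing level and data attributes:

```python
import logging

logger = logging.getLogger("mcp.server")

# Hypothetical log handler: forwards each server-side log message to the
# local "mcp.server" logger at the matching level.
async def log_handler(message):
    # Map the server's textual level (e.g. "info") to a logging constant,
    # falling back to INFO for unknown levels.
    level = getattr(logging, str(message.level).upper(), logging.INFO)
    logger.log(level, "[server] %s", message.data)
```

Because the client configured the "mcp.server" logger at INFO level earlier, server logs surface in the terminal while the client's own logging stays at WARNING.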
Next, we add the main function of the client's REPL (Read-Eval-Print Loop):
async def run_repl():
    openai_client = OpenAI(api_key=OPENAI_API_KEY)
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    async with mcp_client:
        # Get available tools once at startup
        tools = await get_tools_for_openai(mcp_client)
        print("Try asking: 'How old is my 5-year-old labrador in human years?'")
        # The REPL loop
        while True:
            try:
                user_input = input("Ask PuppyGuide> ").strip()
                print("[Thinking...]")
                answer = await chat(user_input, openai_client, tools, messages)
                print("[Assistant]:", answer)
                # Trim history: keep the system prompt plus the most recent messages
                if len(messages) > MAX_HISTORY + 1:
                    messages = [messages[0]] + messages[-MAX_HISTORY:]
            except (KeyboardInterrupt, EOFError):
                print("Goodbye!")
                break
The code above initializes the OpenAI client, sets up the conversation history with a system prompt, and starts an asynchronous context with the MCP client. It retrieves the available tools from the server once at startup and then enters a REPL loop where it continuously reads user input, generates a response using the chat function, and prints the answer. The conversation history is maintained and truncated to keep it within the specified limit.
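The truncation step can be implemented in more than one way; the sketch below keeps the system prompt (index 0) plus the MAX_HISTORY most recent messages, so the assistant's persona survives trimming while older turns are dropped:

```python
# Illustrative trimming rule: preserve the system prompt, drop oldest turns.
MAX_HISTORY = 4  # small value for demonstration only

messages = [{"role": "system", "content": "sys"}] + [
    {"role": "user", "content": f"msg {i}"} for i in range(10)
]

if len(messages) > MAX_HISTORY + 1:
    messages = [messages[0]] + messages[-MAX_HISTORY:]

# messages now holds the system prompt plus the 4 newest messages
```

Note that dropping a turn that contained tool calls without also dropping its matching tool results would produce an invalid history, so in practice you may want to trim on conversation-turn boundaries.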
There are two important functions that we haven't defined yet: get_tools_for_openai and chat.
The get_tools_for_openai function retrieves the list of tools from the MCP server and formats them in a way that can be used by the OpenAI API. The chat function takes the user input, sends it to the OpenAI API along with the conversation history and available tools, and processes the response to generate an answer. Here they are:
async def get_tools_for_openai(client: Client) -> list:
    mcp_tools = await client.list_tools()
    openai_tools = []
    for tool in mcp_tools:
        openai_tools.append(
            {
                "type": "function",
                "function": {
                    "name": tool.name,
                    "description": tool.description,
                    "parameters": tool.inputSchema,
                },
            }
        )
    return openai_tools
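To see what this conversion produces, here is the same mapping applied to a stand-in tool object. The tool name and schema are invented for illustration; a real MCP Tool carries the same three fields the function reads (name, description, inputSchema):

```python
from types import SimpleNamespace

# Stand-in for an MCP Tool object (hypothetical name and schema).
fake_tool = SimpleNamespace(
    name="dog_age",
    description="Convert a dog's age to human years.",
    inputSchema={
        "type": "object",
        "properties": {
            "age": {"type": "number"},
            "breed": {"type": "string"},
        },
        "required": ["age"],
    },
)

# The same per-tool mapping get_tools_for_openai performs:
openai_tool = {
    "type": "function",
    "function": {
        "name": fake_tool.name,
        "description": fake_tool.description,
        "parameters": fake_tool.inputSchema,
    },
}
```

The MCP inputSchema is already JSON Schema, which is exactly what OpenAI's function-calling API expects in the parameters field, so the conversion is a straight restructuring rather than a translation.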
async def chat(
    user_question: str, openai_client: OpenAI, tools: list, messages: list
) -> str:
    messages.append({"role": "user", "content": user_question})
    while True:
        # Call OpenAI
        response = openai_client.chat.completions.create(
            model=MODEL,
            messages=messages,
            tools=tools,
            tool_choice="auto",  # Let OpenAI decide when to use tools
        )
        # Extract the assistant's message (could contain tool calls)
        assistant_message = response.choices[0].message
        # Add to history
        messages.append(assistant_message)
        # If there are no tool calls, we have the final answer!
        if not assistant_message.tool_calls:
            return assistant_message.content or ""
        # Process each tool call
        for tool_call in assistant_message.tool_calls:
            tool_name = tool_call.function.name
            tool_args = json.loads(tool_call.function.arguments)
            # Call the MCP tool
            try:
                result = await mcp_client.call_tool(tool_name, tool_args)
                result_text = str(result)
            except Exception as e:
                result_text = f"Error: {e}"
            # Add the tool result to the conversation
            messages.append(
                {
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "name": tool_name,
                    "content": result_text,
                }
            )
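To make the tool-call round trip concrete, here is a hypothetical snapshot of the messages list after one iteration of the loop. The tool name, call id, and result value are invented for illustration; the assistant turn is shown as a plain dict, whereas the real loop appends the SDK's message object with the same fields:

```python
import json

messages = [
    {"role": "system", "content": "You are PuppyGuide, a friendly assistant for dog owners."},
    {"role": "user", "content": "How old is my 5-year-old labrador in human years?"},
    # Assistant turn containing a tool call instead of a text answer:
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_123",
                "type": "function",
                "function": {"name": "dog_age", "arguments": json.dumps({"age": 5})},
            }
        ],
    },
    # Tool result appended by the loop, linked back via tool_call_id:
    {"role": "tool", "tool_call_id": "call_123", "name": "dog_age", "content": "36"},
]
```

On the next iteration the model sees the tool result in the history and can produce the final text answer, at which point the loop returns.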
You can run the following command (in a terminal) to write the full code of the client:
cd $HOME/workspace/puppy_guide/client
cat > main.py << 'EOF'
import asyncio
import json
import
