Your First MCP Client
Understanding How the MCP Client Works
Let's see, step by step, how the MCP client works and how it connects the OpenAI model to the MCP server.
Step 1: Establishing the Connection
The code starts by opening a "pipe" to the server using the streamable HTTP transport.
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

async with streamablehttp_client(url) as (read, write, _):
    async with ClientSession(read, write) as session:
        await session.initialize()
The initialize() call is the initial handshake. It works like a phone call: the client and the server "say hi" to each other and agree on which version of the protocol they are speaking.
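Under the hood, initialize() sends a JSON-RPC request to the server. A rough sketch of what that message looks like (the field names follow the MCP specification; the version string and client info below are illustrative, and the exact payload depends on your SDK version):

```python
# Sketch of the JSON-RPC message that initialize() sends on our behalf.
# The protocolVersion and clientInfo values here are illustrative.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-03-26",  # illustrative protocol revision
        "capabilities": {},               # what this client supports
        "clientInfo": {"name": "my-client", "version": "0.1.0"},
    },
}
```

The server replies with its own version and capabilities, which is how both sides agree on what they can do before any tools are called.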
Step 2: Discovery and Translation
The LLM (GPT-5-mini) has no idea what tools our server has. The client has to "introduce" them in two steps:
Step 1 - Discovery: session.list_tools() asks the server, "What can you do?"
Step 2 - Translation: The code loops through the results and converts them into the specific format OpenAI understands (the function-calling format). It maps MCP's inputSchema directly to OpenAI's parameters.
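The two steps above can be sketched like this (the helper names are ours; `session` is the ClientSession from the connection step):

```python
def mcp_tool_to_openai(tool) -> dict:
    """Convert one MCP tool description into OpenAI's function-calling format."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description or "",
            # MCP's inputSchema is already JSON Schema, which is exactly
            # what OpenAI expects under "parameters".
            "parameters": tool.inputSchema,
        },
    }

async def discover_tools(session) -> list[dict]:
    """Step 1: ask the server what it can do; Step 2: translate each tool."""
    result = await session.list_tools()
    return [mcp_tool_to_openai(t) for t in result.tools]
```

Because both sides speak JSON Schema, the translation is mostly a matter of wrapping the schema in the envelope OpenAI expects.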
Step 3: The First Question
The client sends the user's question and the list of translated tools to OpenAI and asks: "Here is the question and here are the tools you can use; how would you answer it?"
response = openai.chat.completions.create(
    model=model, messages=messages, tools=tools
)
At this point, OpenAI doesn't give an answer yet. Instead, it looks at the question, looks at the tools, and says: "I need to use tool 'X' with arguments 'Y' to answer this."
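A minimal sketch of how the client can read that tool request out of the response (the helper name is ours; OpenAI delivers the arguments as a JSON string, so they need to be parsed):

```python
import json

def extract_tool_call(response):
    """Return (tool_name, arguments) if the model asked for a tool, else None.

    `response` is the object returned by chat.completions.create above.
    """
    message = response.choices[0].message
    if not message.tool_calls:
        return None  # the model answered directly; no tool needed
    call = message.tool_calls[0]
    # The arguments arrive as a JSON string, not a dict, so parse them.
    return call.function.name, json.loads(call.function.arguments)
```

The name and arguments extracted here are what the client passes to session.call_tool() in the next step of the loop.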
Practical MCP with FastMCP & LangChain
Engineering the Agentic Experience
