Practical MCP with FastMCP & LangChain

Engineering the Agentic Experience

Your First MCP Client

Overview of Function Calling

We're going to prototype a simple MCP client that can communicate with our server. In this part, we need the following components:

  • The MCP client.
  • The MCP server.
  • An AI model.
  • An application layer that connects the model to the MCP client and server.

We already have the server; the other components are not implemented yet, but we will build them in this part.

For the model, we can use any of the available models from Hugging Face, Anthropic, OpenAI, etc. For this application, we'll choose OpenAI GPT models.

Before diving into the code, let's understand some basic concepts about how LLMs like OpenAI's GPT models can call tools. First of all, if you're not familiar with OpenAI's SDK, I highly recommend checking out their documentation and examples. Here is a quick introduction to how we can chat with a model.

If you want to follow along with the code examples, make sure you get an OpenAI API key from platform.openai.com.

Expose your API key as an environment variable:

# Add your OpenAI API key to your environment variables
cat <<EOF >> $HOME/.bashrc
export OPENAI_API_KEY=""
EOF

# Re-source your bashrc to load the new environment variable
source $HOME/.bashrc

Create a new folder for our tests and install the OpenAI Python client:

mkdir -p $HOME/workspace/openai-tests
cd $HOME/workspace/openai-tests

# Create a new uv project with Python 3.12
uv init --bare --python 3.12

# Add the OpenAI client as a dependency
uv add openai==2.21.0

Now, we can create a simple script to test the OpenAI API. Our goal here is to quickly understand how to call a model, how to send messages, and how to use the function calling feature that allows the model to call external tools.

Start by running a simple chat completion request:

cat <<EOF > test_openai.py
from openai import OpenAI
import os

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

client = OpenAI(api_key=OPENAI_API_KEY)

response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[
        {
            "role": "user", 
            "content": "What is the sum of 1 and 2?"
        }
    ]
)
print(response.choices[0].message.content)
EOF

Run the script:

uv run python test_openai.py

To add a system prompt, you can include a message with the role "system":

cat <<EOF > test_openai.py
from openai import OpenAI
import os

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
client = OpenAI(api_key=OPENAI_API_KEY)

response = client.chat.completions.create(
    model="gpt-5-mini",
    messages=[
        {
            "role": "system", 
            "content": "You are a helpful assistant that can perform calculations "
            "and always respond in valid JSON format with a 'content' field that "
            "contains the answer to the user's question."
        },
        {
            "role": "user", 
            "content": "What is the sum of 1 and 2?"
        }
    ]
)
print(response.choices[0].message.content)
EOF

Run the script again:

uv run python test_openai.py
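Before we wire this into MCP, here is a minimal sketch of the function calling flow with the same SDK. The `add` tool, its JSON Schema, and the `dispatch` helper are illustrative names, not part of the OpenAI API: the model receives the tool definitions via the `tools` parameter, and when it decides to call one, we decode the JSON arguments it produced and run the matching local function. The API request is only made if `OPENAI_API_KEY` is set.

```python
import json
import os

# Illustrative local tool the model can ask us to call.
def add(a: float, b: float) -> float:
    return a + b

# JSON Schema description of the tool, in the Chat Completions "tools" format.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers and return the sum.",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"},
                },
                "required": ["a", "b"],
            },
        },
    }
]

def dispatch(name: str, arguments_json: str) -> float:
    """Decode the model's JSON-encoded arguments and run the matching local function."""
    args = json.loads(arguments_json)
    if name == "add":
        return add(args["a"], args["b"])
    raise ValueError(f"Unknown tool: {name}")

# Only contact the API when a key is available.
if os.getenv("OPENAI_API_KEY"):
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[{"role": "user", "content": "What is the sum of 1 and 2?"}],
        tools=TOOLS,
    )
    message = response.choices[0].message
    # Instead of answering directly, the model may return one or more tool calls.
    if message.tool_calls:
        call = message.tool_calls[0]
        print(dispatch(call.function.name, call.function.arguments))
```

This is exactly the shape our MCP client will rely on: the server advertises tools, the model picks one, and the application layer executes it and returns the result.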
