Practical MCP with FastMCP & LangChain

Engineering the Agentic Experience

Integrating Agents with MCP: Function Calling Agents
Building a Function Calling Agent

Now we'll build an agent that can actually do things. The agent in this chapter connects to two real public Open-Meteo APIs, one for temperature and one for air quality data, and lets you ask questions like "What's the air quality in Tokyo?" in plain English. Two new ideas are introduced here on top of what we built in the previous chapter: tools and human-in-the-loop approval.

A tool, in LangChain, is an ordinary Python function decorated with @tool. That decorator is all LangChain needs to generate a JSON schema describing the function's inputs, which it then passes to the model so the model can decide when and how to call it.
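Conceptually, the decorator derives that schema from the function's signature and docstring. The sketch below is a simplified, hypothetical stand-in for what LangChain does internally (the real implementation also handles defaults, Pydantic models, and richer types):

```python
import inspect

# Hypothetical, simplified version of the schema generation @tool performs.
def build_schema(func):
    """Derive a JSON-schema-like tool description from a function signature."""
    type_map = {str: "string", int: "integer", float: "number", bool: "boolean"}
    properties = {
        name: {"type": type_map.get(param.annotation, "string")}
        for name, param in inspect.signature(func).parameters.items()
    }
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": list(properties),
        },
    }

def get_temperature(location: str) -> str:
    """Get the current temperature for a location."""
    ...

schema = build_schema(get_temperature)
# schema["name"] == "get_temperature"
# schema["parameters"]["properties"] == {"location": {"type": "string"}}
```

The model never sees your Python code, only a description like this, which is why names, docstrings, and annotations matter so much.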

Human-in-the-loop (HITL) approval is implemented as middleware that pauses the agent before it executes each tool call and asks for your approval. This is useful during development and for any action with real-world side effects.

Step 1: Create the Project

mkdir -p $HOME/workspace/langchain/langchain_agent_with_tools
cd $HOME/workspace/langchain/langchain_agent_with_tools

uv init --bare --python 3.12

Step 2: Install Dependencies

Compared to the previous chapter, we add httpx, an HTTP client we will use to call the weather and geocoding APIs.

uv add \
    "httpx==0.28.1" \
    "langchain==1.2.10" \
    "langchain-openai==1.1.10" \
    "langgraph==1.0.9" \
    "python-dotenv==1.2.1"

Step 3: Add Your API Key

cat > .env << EOF
OPENAI_API_KEY=your_openai_api_key_here
EOF

Step 4: Write the Agent

Create a new file called agent.py. We will build it up sub-section by sub-section. The complete file is shown at the end of this step.

Imports

import os

import httpx
from dotenv import load_dotenv
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langchain.agents.middleware import SummarizationMiddleware
from langchain.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.types import Command

There are four imports here that did not appear in the previous chapter.

  • httpx is an HTTP client used to call the public weather APIs.

  • tool is a decorator that turns an ordinary Python function into a LangChain tool — the decorator inspects the function's type annotations and docstring to build the JSON schema that tells the model what arguments the tool expects.

  • HumanInTheLoopMiddleware pauses the agent before executing any tool call so a human can approve it.

  • Command is a LangGraph type used to resume an interrupted agent with a decision.

Load Configuration

load_dotenv()

LLM = os.getenv("LLM", "gpt-5-mini")

Same pattern as before: load_dotenv() reads OPENAI_API_KEY from .env into os.environ, and LLM lets you switch models without touching the source code.

Define a Coordinate Helper

Both tools need latitude and longitude before they can call their respective APIs. Rather than duplicating that logic, we extract it into a private helper.

We're going to use open-meteo.com's free geocoding API.

def _get_coordinates(location: str) -> tuple[float, float]:
    """Resolve a place name to (latitude, longitude) via the Open-Meteo Geocoding API."""
    response = httpx.get(
        "https://geocoding-api.open-meteo.com/v1/search",
        params={"name": location, "count": 1, "language": "en", "format": "json"},
    )
    data = response.json()
    if "results" in data and len(data["results"]) > 0:
        latitude = data["results"][0]["latitude"]
        longitude = data["results"][0]["longitude"]
        return latitude, longitude
    else:
        raise ValueError(f"Could not find coordinates for location: {location}")
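For reference, a successful geocoding response has roughly this shape, trimmed to the fields the helper reads (the values below are illustrative, not live data):

```python
# Trimmed, illustrative example of an Open-Meteo geocoding response for "Tokyo".
sample = {
    "results": [
        {
            "name": "Tokyo",
            "latitude": 35.6895,
            "longitude": 139.6917,
            "country": "Japan",
        }
    ]
}

# Same extraction logic as _get_coordinates above.
if "results" in sample and len(sample["results"]) > 0:
    latitude = sample["results"][0]["latitude"]
    longitude = sample["results"][0]["longitude"]
```

With count=1 we ask the API for the single best match, so the helper only ever inspects results[0].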

Define the Tools

Now declare the two tools the agent will be able to call. The @tool decorator does the heavy lifting: it reads the function's name, docstring, and parameter annotations to generate the schema that the model receives and uses to decide when to invoke this tool.

Both functions follow the same pattern: they take a location string, call the geocoding API to get coordinates, then call the appropriate Open-Meteo API to get either air quality or temperature data.

@tool
def get_air_quality(location: str) -> str:
    """Get air quality information based on a location."""
    latitude, longitude = _get_coordinates(location)
    response = httpx.get(
        "https://air-quality-api.open-meteo.com/v1/air-quality",
        params={
            "latitude": latitude,
            "longitude": longitude,
            "hourly": "pm10,pm2_5",
            "forecast_days": 1,
        },
    )
    data = response.json()
    if "hourly" in data and "pm10" in data["hourly"] and "pm2_5" in data["hourly"]:
        pm10 = data["hourly"]["pm10"][0]    # index 0 = first hour of the returned day
        pm2_5 = data["hourly"]["pm2_5"][0]
        result = f"PM10: {pm10} µg/m³, PM2.5: {pm2_5} µg/m³"
    else:
        result = "Air quality data not available"
    return f"Air quality in {location}: {result}"


@tool
def get_temperature(location: str) -> str:
    """Get the current temperature for a location."""
    latitude, longitude = _get_coordinates(location)
    response = httpx.get(
        "https://api.open-meteo.com/v1/forecast",
        params={
            "latitude": latitude,
            "longitude": longitude,
            "hourly": "temperature_2m",
            "forecast_days": 1,
        },
    )
    data = response.json()
    if "hourly" in data and "temperature_2m" in data["hourly"]:
        temperature = data["hourly"]["temperature_2m"][0]   # index 0 = first hour of the returned day
        result = f"Temperature: {temperature} °C"
    else:
        result = "Temperature data not available"
    return f"Temperature in {location}: {result}"
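Both Open-Meteo endpoints return hourly data as parallel arrays keyed by variable name. A trimmed, illustrative payload (not live data) and the same index-0 extraction used in the tool:

```python
# Illustrative forecast response, trimmed to the fields get_temperature reads.
sample = {
    "hourly": {
        "time": ["2025-01-01T00:00", "2025-01-01T01:00"],
        "temperature_2m": [3.4, 3.1],
    }
}

if "hourly" in sample and "temperature_2m" in sample["hourly"]:
    temperature = sample["hourly"]["temperature_2m"][0]
```

Each position in temperature_2m lines up with the same position in time, which is why a single index is enough to pick out one hour's reading.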

As with FastMCP tools, the docstrings are not just documentation: the model reads them to decide which tool to call for a given user request. A clear, accurate docstring is the single most important thing you can do to make a tool work reliably.

Create the Agent

This is the same create_agent call from the previous chapter, extended with tools and an additional middleware entry.

agent = create_agent(
    f"openai:{LLM}",
    tools=[get_air_quality, get_temperature],
    checkpointer=MemorySaver(),
    middleware=[
        # Compress old messages once the conversation exceeds 1000 tokens.
        SummarizationMiddleware(model=f"openai:{LLM}", trigger=("tokens", 1000)),
        # Pause before every tool call and wait for human approval.
        HumanInTheLoopMiddleware(
            interrupt_on={
                # "approve" is the only allowed decision — the user cannot
                # edit arguments or reject the call in this simple setup.
                "get_air_quality": {"allowed_decisions": ["approve"]},
                "get_temperature": {"allowed_decisions": ["approve"]},
            }
        ),
    ],
)

The tools argument registers the two decorated functions with the agent. From this point on, whenever the model decides it needs weather or air quality data, it emits a tool-call request and the framework routes it to the right function.

The HumanInTheLoopMiddleware intercepts that request before the function actually runs. It raises a LangGraph interrupt, which suspends the graph and gives control back to our code so we can ask the user for approval.
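To make the tool-call request concrete, here is roughly what the assistant message carrying one looks like. This is an illustrative sketch following the OpenAI chat-completions convention (the id and values are made up); note that the arguments arrive as a JSON string that the framework decodes before invoking the function:

```python
import json

# Illustrative assistant message containing a tool call (not produced by this code).
tool_call_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_temperature",
                "arguments": '{"location": "Tokyo"}',
            },
        }
    ],
}

# The framework decodes the JSON arguments and routes them to the matching tool.
call = tool_call_message["tool_calls"][0]
args = json.loads(call["function"]["arguments"])
```

The name field is how the framework finds the right decorated function, which is another reason tool names must be stable and descriptive.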

Write the Approval Helper

When the agent pauses for approval, agent.get_state(config) exposes the pending interrupts. The helper below reads them, prints the tool name, waits for Enter, and returns the list of approval decisions the graph needs to resume.

def _approve_tool_calls(hitl_request: dict) -> list[dict]:
    """Print each pending tool call and wait for the user to approve."""
    for tool_call in hitl_request["action_requests"]:
        print(f"Tool requested: {tool_call['name']}")
        input("Press Enter to approve... ")
    # One {"type": "approve"} dict is required per pending tool call.
    # If the model requested two tools at once we return two approvals.
    return [{"type": "approve"}] * len(hitl_request["action_requests"])
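A quick way to sanity-check the helper in isolation is to stub out input and feed it a hand-built request (a standalone sketch; in the real agent this dict comes from the interrupt payload):

```python
import builtins

def _approve_tool_calls(hitl_request: dict) -> list[dict]:
    """Print each pending tool call and wait for the user to approve."""
    for tool_call in hitl_request["action_requests"]:
        print(f"Tool requested: {tool_call['name']}")
        input("Press Enter to approve... ")
    return [{"type": "approve"}] * len(hitl_request["action_requests"])

builtins.input = lambda prompt="": ""  # auto-approve, for this demo only

# Fake request with two pending tool calls, mimicking the interrupt payload shape.
fake_request = {
    "action_requests": [
        {"name": "get_temperature", "args": {"location": "Paris"}},
        {"name": "get_air_quality", "args": {"location": "Paris"}},
    ]
}
decisions = _approve_tool_calls(fake_request)
# decisions == [{"type": "approve"}, {"type": "approve"}]
```

One decision per action request is the invariant to remember: if the model batches two tool calls into a single turn, the resume payload must contain two approvals.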

Write the Conversation Loop

The loop here is more involved than in the previous chapter because the agent can pause mid-turn for HITL approval. Each user message may require one or more approve-and-resume cycles before the agent produces a final answer, so we use an inner while True that keeps running until no pending interrupts remain.

def main() -> None:
    print("Weather & Air Quality Agent. Type 'exit' or 'quit' to stop.\n")

    while True:
        try:
            user_input = input("You: ").strip()
        except (EOFError, KeyboardInterrupt):
            print("\nGoodbye!")
            break

        if not user_input:
            continue

        if user_input.lower() in {"exit", "quit"}:
            print("Goodbye!")
            break

        config = {"configurable": {"thread_id": "default"}}

        # On the first pass, inputs is the new user message.
        # After an approval, inputs becomes a Command that resumes the graph.
        inputs: dict | Command = {"messages": [{"role": "user", "content": user_input}]}

        while True:
            agent.invoke(inputs, config=config)

            # Read the current graph state from the checkpointer.
            state = agent.get_state(config)

            # Collect interrupts across all pending tasks.
            pending_interrupts = [
                interrupt
                for task in state.tasks
                for interrupt in task.interrupts
            ]

            if pending_interrupts:
                # The agent is paused — ask the user to approve and resume.
                approved_decisions = _approve_tool_calls(pending_interrupts[0].value)
                # Resume the interrupted graph with one decision per tool call.
                inputs = Command(resume={"decisions": approved_decisions})
            else:
                # No pending interrupts: the agent has produced its final answer.
                print(f"Agent: {state.values['messages'][-1].content}\n")
                break


if __name__ == "__main__":
    main()

When there are interrupts, the interrupt's value carries the HITL request dict that _approve_tool_calls expects, and Command(resume={"decisions": ...}) hands the approval list back to the paused graph. When there are none, the last message in the checkpointed state is the model's final answer, so we print it and return to the outer prompt.
