Integrating Agents with MCP: Introduction to LangChain
What is LangChain?
LangChain is an open-source framework for building LLM-powered applications, launched in October 2022. Instead of writing raw SDK calls everywhere, it gives you standardized building blocks — prompts, models, tools, retrievers — so you can swap providers, add middleware, and wire up tool use without rewriting your application each time the ecosystem shifts.
The Package Ecosystem
The Python ecosystem is intentionally split into multiple packages with distinct responsibilities:
- langchain-core is the foundation. It defines the base abstractions (including the Runnable system) that other packages build on, and is designed to stay modular and stable.
- langchain is the main framework package. As of v1.0 it has been significantly streamlined to focus on the core agent loop and high-level APIs.
- langchain-community holds third-party and community integrations that used to live in the main package. This separation lets integrations evolve on their own cadence without churning the core framework.
- Provider packages like langchain-openai and langchain-anthropic are installed separately, so you can upgrade a single connector without being forced into a full framework upgrade.
- LangGraph is a separate but closely related project focused on stateful, graph-based orchestration. It is not a core dependency of LangChain for basic chains or Runnable composition. However, modern LangChain agent APIs are implemented on top of LangGraph's runtime, so when you build agents through LangChain's high-level helpers, LangGraph powers the execution model behind the scenes.
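The split described above shows up directly at install time: each package is a separate PyPI distribution. A typical setup might look like the following (package names are the real PyPI names; which ones you need depends on your provider and whether you use graph orchestration):

```shell
# Core framework (pulls in langchain-core as a dependency)
pip install langchain

# Provider connectors are installed separately, so each can be
# upgraded on its own without touching the framework version
pip install langchain-openai langchain-anthropic

# Third-party/community integrations live in their own package
pip install langchain-community

# LangGraph is a separate project; install it only if you need
# stateful, graph-based orchestration directly
pip install langgraph
```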
The Runnable Interface
The central design idea in modern LangChain is the Runnable interface. Components — models, prompt templates, retrievers, output parsers — share a common execution surface so they compose cleanly using the | pipe operator (LCEL). Runnables support synchronous and async execution, batching, and streaming out of the box.
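The mechanics behind the | operator can be illustrated with a small toy class. To be clear, this is a hypothetical sketch of the composition idea, not LangChain's actual implementation (the real Runnable in langchain-core also handles async execution, streaming, config propagation, and more):

```python
class ToyRunnable:
    """Minimal sketch of the Runnable idea: a unit of work that
    composes with | and exposes invoke() and batch()."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        # Run this step on a single input.
        return self.fn(value)

    def batch(self, values):
        # Run this step over a list of inputs.
        return [self.invoke(v) for v in values]

    def __or__(self, other):
        # a | b builds a new runnable that pipes a's output into b.
        return ToyRunnable(lambda v: other.invoke(self.invoke(v)))


upper = ToyRunnable(str.upper)
exclaim = ToyRunnable(lambda s: s + "!")
chain = upper | exclaim

print(chain.invoke("hello"))    # HELLO!
print(chain.batch(["a", "b"]))  # ['A!', 'B!']
```

Because every component exposes the same surface, a prompt template, a model, and a parser can be chained exactly like these toy steps, which is what the real example below does.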
Example: Composing Runnables:
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
# Define a prompt from a template
prompt = ChatPromptTemplate.from_template("Summarize this in one sentence: {text}")
# Create a model instance
model = ChatOpenAI(model="gpt-4o-mini")
# Define an output parser
parser = StrOutputParser()
# Compose them into a chain
chain = prompt | model | parser
# Invoke the chain with input
result = chain.invoke({"text": "The quick brown fox jumped over the lazy dog."})
