Rebuild with a Framework
You now have a working agent built from a raw loop. It works, but you've probably noticed some friction: managing message history, handling multi-step tool sequences, retrying failed calls, and tracking state all required manual code. Frameworks exist to absorb that friction.
In this lesson, we'll rebuild the same agent using LangGraph, the default framework for this curriculum. The goal isn't to learn LangGraph deeply (that comes in Module 7). It's to see what a framework gives you, what it hides, and how to make an informed choice about when to use one.
What you'll learn
- Rebuild your raw agent as a LangGraph graph with the same three tools
- Compare what was easier by hand vs what's easier in the framework
- Identify what the framework abstracts away (and why that matters for debugging)
- Make an informed decision about when to use a framework vs a raw loop
Concepts
Agent framework: a library that provides the control loop, state management, and tool execution patterns so you don't build them from scratch. Frameworks reduce boilerplate but add a layer of abstraction that can make debugging harder. The portable concept underneath: agent = model + tools + control loop. Every framework implements this differently, but the core pattern is the same.
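That portable pattern can be sketched in a few lines of plain Python. Everything here is a toy stand-in for illustration: `call_model` and the `list_files` lambda are hypothetical stubs, not a real model or your actual tools.

```python
# Minimal sketch of the portable pattern: agent = model + tools + control loop.
# `call_model` and the tool are toy stand-ins, not a real API.

def call_model(messages):
    # A real model decides whether to call a tool or answer.
    # This stub requests one tool call, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "list_files", "args": {"glob_pattern": "*.py"}}
    return {"answer": "The repo contains two Python files."}

tools = {"list_files": lambda glob_pattern: "agent.py\ntools.py"}

def run(question):
    messages = [{"role": "user", "content": question}]
    while True:  # the control loop a framework would own
        step = call_model(messages)
        if "answer" in step:  # model is done
            return step["answer"]
        result = tools[step["tool"]](**step["args"])  # dispatch the tool
        messages.append({"role": "tool", "content": result})

print(run("What files are here?"))
```

Every framework in this lesson is, at its core, a more robust version of this loop.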
Graph-based orchestration: an approach where agent behavior is defined as a graph of nodes (steps) and edges (transitions). LangGraph uses this model. Each node does one thing (call the model, execute a tool, check a condition), and edges define what happens next. This is more structured than a while loop but more flexible than a fixed pipeline.
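The node/edge idea can be illustrated with plain dictionaries. This is a conceptual sketch only, not LangGraph's actual API: nodes do one unit of work, and a conditional edge decides what runs next.

```python
# Conceptual sketch of graph-based orchestration: nodes do work,
# edges pick the next node. Not LangGraph's real API.

state = {"count": 0, "done": False}

def call_model(state):
    state["count"] += 1
    state["done"] = state["count"] >= 2
    return state

def check(state):
    # Conditional edge: loop back to the model node or stop.
    return "end" if state["done"] else "call_model"

nodes = {"call_model": call_model}
edges = {"call_model": check}

current = "call_model"
while current != "end":
    state = nodes[current](state)
    current = edges[current](state)

print(state)  # {'count': 2, 'done': True}
```

Notice how the loop itself is generic; all agent-specific behavior lives in the node and edge functions. That separation is what makes graphs easier to extend than a hand-written loop.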
State: the data that persists across steps in the agent's execution. In your raw loop, state was the messages list. In a framework, state is usually a typed object that the framework manages: passing it between nodes, persisting it across invocations, and making it available for inspection.
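The shift from a bare list to a managed state object can be sketched with stdlib typing alone. LangGraph's real state uses a similar TypedDict pattern, but the class and field names below are illustrative assumptions.

```python
from typing import TypedDict

# Raw loop: state is just a bare list you mutate by hand.
raw_messages = [{"role": "user", "content": "What modules exist?"}]

# Framework style: state is a typed object passed between nodes,
# so every node sees the same declared shape.
class AgentState(TypedDict):
    messages: list[dict]
    tool_call_count: int

typed_state: AgentState = {
    "messages": raw_messages,
    "tool_call_count": 0,
}

# A node receives the state and returns an updated copy
# instead of mutating shared data in place.
def count_node(state: AgentState) -> AgentState:
    return {**state, "tool_call_count": state["tool_call_count"] + 1}

typed_state = count_node(typed_state)
print(typed_state["tool_call_count"])  # 1
```

The typed shape is what lets a framework persist state across invocations and expose it for inspection.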
Default: LangGraph
Why this is the default: It teaches graph-based orchestration, has strong Python support, and gives you a path into multi-agent patterns later (Module 7). It's also widely adopted, so you'll encounter it in real projects.
Portable concept underneath: Graph-based orchestration separates "what the agent does" (nodes) from "what happens next" (edges). Any framework that does this gives you the same conceptual foundation.
Closest alternatives and when to switch:
- OpenAI Agents SDK: Use when you want multi-agent handoffs, built-in guardrails, and tracing in an OpenAI-centered workflow. Emphasizes orchestration across multiple agents.
- Claude Agent SDK: Use when you want a single powerful agent with built-in tool execution (file I/O, shell, web), lifecycle hooks, and first-class MCP integration. Emphasizes autonomous single-agent capability in a Claude-centered workflow.
- PydanticAI: Use when type safety and Python ergonomics matter more than graph-style orchestration.
- LlamaIndex workflows: Use when documents and data pipelines are the center of gravity, not tool calling.
- No framework: Keep the raw loop when your agent is simple enough that a framework would add complexity without benefit.
Walkthrough
Install LangGraph
```shell
cd anchor-repo
source .venv/bin/activate
pip install langchain langgraph langchain-openai
```

Rebuild the agent as a graph
Create a LangGraph version of your agent that uses the same tools:
```python
# agent/graph_agent.py
"""LangGraph version of the tool-calling agent."""
import sys

from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

from agent.tools_langchain import list_files_tool, search_text_tool, read_file_tool

model = ChatOpenAI(model="gpt-4o-mini", temperature=0)
agent = create_react_agent(
    model,
    tools=[list_files_tool, search_text_tool, read_file_tool],
)


def run_agent(question: str, verbose: bool = True) -> dict:
    """Run one question through the LangGraph-based agent.

    Args:
        question: User question the agent should answer about the repository.
        verbose: Whether to print a short execution summary after the run.

    Returns:
        dict: Final answer text, summarized tool activity, and message-count metadata.
    """
    result = agent.invoke(
        {"messages": [{"role": "user", "content": question}]},
    )
    messages = result["messages"]
    tool_calls = [
        {"tool": m.name, "content_preview": m.content[:200]}
        for m in messages
        if hasattr(m, "name") and m.name
    ]
    final = messages[-1].content if messages else "No answer produced"
    if verbose:
        print(f"  [{len(messages)} messages, {len(tool_calls)} tool calls]")
    return {
        "answer": final,
        "tool_calls": tool_calls,
        "message_count": len(messages),
    }


if __name__ == "__main__":
    question = sys.argv[1] if len(sys.argv) > 1 else "What are the main modules in this repository?"
    print(f"Question: {question}\n")
    result = run_agent(question)
    print(f"\nAnswer:\n{result['answer']}")
```

You'll also need to wrap your tool functions for LangChain compatibility:
```python
# agent/tools_langchain.py
"""LangChain-compatible wrappers for the repo tools."""
from langchain_core.tools import tool

from agent.tools import list_files, search_text, read_file


@tool
def list_files_tool(glob_pattern: str = "**/*") -> str:
    """Expose the repository file-listing helper as a LangChain tool.

    Args:
        glob_pattern: Glob pattern used to filter repository files.

    Returns:
        str: Newline-delimited file matches or a short status message.
    """
    return list_files(glob_pattern)


@tool
def search_text_tool(query: str, glob_pattern: str | None = None) -> str:
    """Expose repository text search as a LangChain tool.

    Args:
        query: Text to search for in repository files.
        glob_pattern: Optional file glob that narrows the search scope.

    Returns:
        str: Matching lines with file/line context or a short status message.
    """
    return search_text(query, glob_pattern)


@tool
def read_file_tool(path: str, start_line: int | None = None, end_line: int | None = None) -> str:
    """Expose repo file reading as a LangChain tool.

    Args:
        path: Repo-relative file path to read.
        start_line: Optional 1-based line number to start from.
        end_line: Optional inclusive line number to stop at.

    Returns:
        str: File contents or a short error/status message.
    """
    return read_file(path, start_line, end_line)
```

Run it:
```shell
python -m agent.graph_agent "What are the main modules in this repository?"
```

Compare: raw loop vs framework
Run the same 3-5 questions through both versions and compare:
| Dimension | Raw loop | LangGraph |
|---|---|---|
| Setup effort | More code, but you understand every line | Less code, but you need to learn the framework's conventions |
| State management | You manage the messages list manually | The framework manages state; you access it through the graph |
| Tool execution | You dispatch tool calls and append results yourself | The framework handles dispatching and result injection |
| Debugging | Print statements in your loop; you see everything | Framework logs and traces; you need to know where to look |
| Error handling | You write it | Framework provides some; you customize the rest |
| Extensibility | Add more code to the loop | Add more nodes to the graph |
The key question isn't "which is better?" It's "which tradeoff fits your current situation?" For a simple agent with a few tools, the raw loop is often more maintainable. For agents with branching logic, retries, human-in-the-loop steps, or multi-agent coordination, a framework starts earning its complexity.
Exercises
- Build the LangGraph agent with the same three tools. Run the same questions you tested with the raw loop and compare the answers.
- Add a fourth tool to both versions (e.g., `git_log`, which shows recent commits). Note how the effort differs between the raw loop and the framework.
- Deliberately trigger an error (e.g., read a file that doesn't exist). Compare how each version handles the error and how easy it is to debug.
- Write a brief comparison note: what was easier by hand? What got easier in the framework? What got more hidden?
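For the fourth-tool exercise, a `git_log` helper might look like the sketch below. The function name, signature, and error-handling style are assumptions made here for illustration; it simply mirrors the plain-function pattern of `list_files`, `search_text`, and `read_file` so it can be wrapped with `@tool` the same way.

```python
import subprocess

def git_log(max_count: int = 5) -> str:
    """Return the most recent commits as one-line summaries.

    Hypothetical tool for the exercise; returns a short status
    message instead of raising, matching the other repo tools.
    """
    try:
        result = subprocess.run(
            ["git", "log", f"--max-count={max_count}", "--oneline"],
            capture_output=True, text=True, timeout=10,
        )
    except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
        return f"git_log failed: {exc}"
    if result.returncode != 0:
        return f"git_log failed: {result.stderr.strip()}"
    return result.stdout.strip() or "No commits found."

print(git_log(3))
```

Adding this to the raw loop means writing a new schema entry and a new dispatch branch by hand; in the framework version, wrapping it with `@tool` and appending it to the tools list is the whole change.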
Completion checkpoint
You have:
- A working LangGraph agent with the same three tools as your raw loop
- A side-by-side comparison of the raw loop and framework on the same questions
- A written comparison noting what each approach does well and where each struggles
- An informed opinion about when you'd choose one over the other
What's next
Building an MCP Server. The agent works, but its tools are still trapped inside one runtime. The next lesson makes them portable.
References
Start here
- LangChain agents — the current starting point for agent creation with LangChain/LangGraph
Build with this
- LangGraph overview — graph-based orchestration concepts and patterns
- PydanticAI docs — alternative framework if type safety is your priority
Deep dive
- OpenAI Agents SDK — OpenAI's agent framework with multi-agent handoffs, guardrails, and tracing
- Claude Agent SDK — Anthropic's agent SDK with built-in tool execution, lifecycle hooks, and MCP integration
- Anthropic: Building effective agents — framework-agnostic agent design principles