LangChain

Migrating Classic LangChain Agents to LangGraph: A How-To

It's time to replace the deprecated “initialize_agent” API with first‑class LangGraph nodes so you can cut latency and gain graph‑level control.

Jul 9, 2025

By Austin Vance


Takeaway: You can swap a legacy `AgentExecutor` for a `LangGraph` node in a single commit. The payoff is lower overhead, deterministic routing, and native persistence.

WHY MIGRATE NOW?

LangChain announced that with LangChain 0.2 the original agent helpers (initialize_agent, AgentExecutor) are deprecated and will only receive critical fixes. LangChain recommends moving to LangGraph’s node‑based approach for better control flow, built‑in persistence, and the ability to use multi‑actor workflows.

WHAT CHANGED IN LANGCHAIN 0.2 AND LATER?

Legacy pattern (langchain < 0.2) versus current pattern (langchain 0.2 or newer):

| | Legacy | Current |
|---|---|---|
| Agent entry point | initialize_agent | graph node created with LangGraph helpers |
| Configuration | many function kwargs that are hard to extend | typed graph state and composable nodes |
| Persistence | DIY pickling or a custom DB | checkpoint helpers built into LangGraph |
| Deprecation status | helpers are deprecated | LangGraph approach is the long-term, fully supported path |

CODE DIFF: FROM initialize_agent TO A LANGGRAPH NODE

Below is a minimal ReAct agent that calls a capitalize tool—first the legacy way, then the LangGraph way.

Before (legacy agent)

    from langchain.agents import AgentType, initialize_agent
    from langchain.chat_models import ChatOpenAI
    from langchain_core.tools import tool

    @tool
    def capitalize(text: str) -> str:
        """Capitalize the text."""
        return text.upper()

    llm = ChatOpenAI(model="gpt-4o-mini")

    agent = initialize_agent(
        [capitalize],
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )

    response = agent.run("can you capitalize this text: hello world")
    print(response)

After (LangGraph)

    from typing import Annotated, TypedDict

    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI
    from langgraph.graph import END, StateGraph
    from langgraph.graph.message import add_messages
    from langgraph.prebuilt import create_react_agent

    class State(TypedDict):
        messages: Annotated[list, add_messages]

    @tool
    def capitalize(text: str) -> str:
        """Capitalize the text."""
        print(f"Capitalizing text: {text}")
        return text.upper()

    llm = ChatOpenAI(model="gpt-4o-mini")
    tools = [capitalize]

Create the ReAct agent node

    agent_node = create_react_agent(llm, tools)

Build a simple single‑node graph

    graph = StateGraph(State)
    graph.add_node("react_agent", agent_node)
    graph.set_entry_point("react_agent")
    graph.add_edge("react_agent", END)

    agent_executor = graph.compile()

    response = agent_executor.invoke(
        {"messages": [{"role": "user", "content": "can you capitalize this text: hello world"}]}
    )
    print(response["messages"][-1].content)

The functional behavior is identical, but you gain an explicit state object, the ability to add router or guardrail nodes later without refactoring the agent itself, and full compatibility with LangGraph’s checkpoint and observability APIs.  

UPDATING TESTS AND CALLBACKS

Unit tests with Pytest and LangSmith

    from graph import agent_executor

    def test_capitalize_text():
        result = agent_executor.invoke({
            "messages": [{"role": "user", "content": "can you capitalize this text: hello world"}]
        })

        assert "HELLO WORLD" in result["messages"][-1].content

For richer coverage, use LangSmith’s pytest plugin to log run trees and score outputs with metrics instead of brittle string matches.  
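One middle ground between brittle string matches and full LangSmith scoring is asserting that the agent actually invoked the expected tool. A sketch with a stand-in message class (`FakeAIMessage` and `called_tools` are illustrative; real tests would pass the `messages` list returned by `agent_executor.invoke`):

```python
from dataclasses import dataclass, field


# Minimal stand-in for LangChain's AIMessage so the helper can be
# exercised offline; real runs return message objects with the same
# content and tool_calls attributes.
@dataclass
class FakeAIMessage:
    content: str
    tool_calls: list = field(default_factory=list)


def called_tools(messages) -> set:
    """Collect the name of every tool the agent invoked during a run."""
    return {
        call["name"]
        for message in messages
        for call in getattr(message, "tool_calls", []) or []
    }


run = [
    FakeAIMessage("", tool_calls=[{"name": "capitalize", "args": {"text": "hello world"}}]),
    FakeAIMessage("HELLO WORLD"),
]
print(called_tools(run))  # {'capitalize'}
```

A test can then assert `"capitalize" in called_tools(result["messages"])`, which keeps passing even if the model rephrases its final answer.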

Callbacks

If you previously passed callbacks into initialize_agent(..., callbacks=[StdOutCallbackHandler()]), pass them through the invoke config instead (or attach them to individual nodes for fine‑grained tracing):

    from typing import Any

    from langchain_core.callbacks.base import BaseCallbackHandler

    class PrintCallbackHandler(BaseCallbackHandler):
        def on_llm_start(self, serialized: dict[str, Any], prompts: list[str], **kwargs: Any) -> None:
            print(f"LLM start: {prompts}")

    response = agent_executor.invoke(
        {"messages": [{"role": "user", "content": "can you capitalize this text: hello world"}]},
        {"callbacks": [PrintCallbackHandler()]},
    )

PRODUCTION ROLLOUT CHECKLIST

  1. Freeze versions: pin langchain>=0.2,<0.3, then move to 0.3 and the latest stable langgraph.

  2. Refactor imports: search‑and‑replace initialize_agent( with create_react_agent(.

  3. Compile once: cache the compiled graph (agent_executor) at application start to avoid cold‑start overhead.

  4. State schema: define a TypedDict or Pydantic model for your graph state to catch breaking changes early.

  5. Health probes: invoke the graph with {"messages": [{"role": "user", "content": "ping"}]} and assert on a known reply so orchestration platforms can detect failures.

  6. Checkpoint storage: configure S3, Redis, or SQLite persistence before rolling to production if your flows exceed a single request. 

  7. Observability: enable LANGCHAIN_TRACING_V2=true and send traces to LangSmith.

  8. Canary deploy: route a slice of traffic to the new executor and compare latency and error rate against the legacy path.

  9. Retire legacy code: delete deprecated agent imports when metrics hit parity.

  10. Document the graph: export a GraphViz diagram and commit it so new teammates can visualize the flow.

FINAL WORD

Upgrading to LangGraph is not a risky rewrite; it is a surgical swap that positions your agent for reliable scale, granular observability, and future multi‑actor magic.
