Stop Including JSON in Your Prompts

See how LangChain v1 handles structured outputs without requiring JSON in prompts, using schema-first, provider-native support for more reliable and typed responses.

Dec 1, 2025

By Michael Steichen

A practical guide to structured outputs in LangChain v1

If your prompts still include “Please return your answer as JSON”, it’s time to stop.

tl;dr: LangChain v1 refines and standardizes with_structured_output so it works consistently across providers that support it and falls back to tool calling elsewhere. It’s now built directly into the agent loop via the response_format parameter.

In most cases, you no longer tell the model to output JSON. Instead, show it a schema and let the runtime enforce it.

As an agent engineer, this is a relief: prompts stay focused on intent, and types handle structured output.

The Old Way: Prompt-Engineered JSON

A common approach looked like this:

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4.1-nano")

# The schema lives only in the prompt text; nothing actually enforces it.
result = llm.invoke(
    """
    Should I take my cat on a walk? Your response must be JSON in the following format:
      {
          "is_good_idea": boolean,
          "reason": string
      }
    """
)
# result is an AIMessage; result.content is a raw string you still have to parse.


This (mostly) worked. I hoped the model didn’t add text before the JSON ("Great insight!"). I hoped that json.loads() didn’t need a regex band-aid. And I hoped I’d remember to adjust the prompt so that the schema matched my code (it rarely did).
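
For context, here’s the kind of defensive parsing that approach tends to accumulate. This is a sketch, not anything LangChain-specific, and parse_decision is a hypothetical helper:

import json
import re

def parse_decision(raw: str) -> dict:
    """Best-effort extraction of a JSON object from a free-form model reply."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Regex band-aid: grab the first {...} block (code fences, chatty
        # preambles like "Great insight!", and so on) and hope it parses.
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        if match is None:
            raise ValueError(f"No JSON object found in: {raw!r}")
        return json.loads(match.group(0))

decision = parse_decision(result.content)  # result from the llm.invoke() call above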

There’s now a cleaner, more reliable way to accomplish this.

The New Way: Schema-First, Provider-Native

with_structured_output() — for direct model calls


from pydantic import BaseModel
from langchain_openai import ChatOpenAI

class Decision(BaseModel):
    is_good_idea: bool
    reason: str

llm = ChatOpenAI(model="gpt-4.1-nano")
structured = llm.with_structured_output(Decision)
result = structured.invoke("Should I take my cat on a walk?")


This returns a typed Decision object, not a string. Under the hood, LangChain automatically picks the right mechanism (OpenAI tool call, Anthropic function call, JSON mode, etc.).
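
Using it is plain attribute access, with Pydantic validation already applied (the values shown are illustrative):

print(result.is_good_idea)  # e.g. False
print(result.reason)        # e.g. "Most cats find leash walks stressful."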

Note: with_structured_output isn’t new—it existed before LangChain v1. What’s new is tighter integration and consistent behavior across providers and the agent loop, making it the paved road for structured outputs.

What follows is new in v1:

create_agent(..., response_format=...) — for agents


from langchain.agents import create_agent
from pydantic import BaseModel

class Decision(BaseModel):
    is_good_idea: bool
    reason: str

agent = create_agent(
    model="gpt-4.1-nano",
    response_format=Decision,
)
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "Should I take my cat on a walk?",
            }
        ]
    }
)


Here, you again receive a typed Decision object, generated directly inside the agent loop.
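
To read it, pull the typed object out of the returned agent state. In current v1 builds it sits under a structured_response key next to the messages; treat the exact key name as an assumption and confirm it against your installed version:

decision = result["structured_response"]  # a Decision instance
print(decision.is_good_idea, decision.reason)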

LangChain also supports two explicit strategies for agents: ToolStrategy (forces tool-calling) and ProviderStrategy (uses provider-native structured output). In most cases, simply passing the schema (response_format=Decision) lets LangChain automatically pick the best option.
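
If you do want to pin the behavior down, pass a strategy wrapper instead of the bare schema. A minimal sketch, assuming the import path from the v1 docs (adjust if your version exposes it elsewhere):

from langchain.agents.structured_output import ToolStrategy  # assumed import path

agent = create_agent(
    model="gpt-4.1-nano",
    response_format=ToolStrategy(Decision),  # force tool calling instead of auto-selection
)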

Does this mean you can’t include JSON in your prompts? Not at all—you just rarely need to unless:

  1. You’re on a model without structured-output support (some open-weights models),
  2. You need partial streaming of raw text, or
  3. You intentionally want looser, generative fields.

Enjoy cleaner prompts and type safety! More LangChain v1 updates to come!
