langchain-middleware by langchain-ai

Human-in-the-loop approval, custom middleware, and structured output patterns for LangChain agents. HumanInTheLoopMiddleware pauses execution before dangerous tool calls, allowing humans to approve, edit arguments, or reject with feedback. Per-tool interrupt policies let you configure different approval rules based on risk level; HITL requires a checkpointer and a thread_id for state persistence. The Command resume pattern continues execution after human decisions, with support for editing tool arguments...

npx skills add https://github.com/langchain-ai/langchain-skills --skill langchain-middleware
Middleware patterns for production LangChain agents:
  • HumanInTheLoopMiddleware / humanInTheLoopMiddleware: Pause before dangerous tool calls for human approval
  • Custom middleware: Intercept tool calls for error handling, logging, retry logic
  • Command resume: Continue execution after human decisions (approve, edit, reject)

Requirements: Checkpointer + thread_id config for all HITL workflows.
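As a quick reference, the three resume decisions can be sketched as plain Python dicts. This is only a sketch of the payload shapes used in the examples below, not an API definition; the field names (`edited_action`, `feedback`) follow the Python examples in this skill.

```python
# Decision payloads passed to Command(resume={"decisions": [...]})
approve = {"type": "approve"}

edit = {
    "type": "edit",
    "edited_action": {  # must include both name and args
        "name": "send_email",
        "args": {"to": "[email protected]", "subject": "Hi", "body": "..."},
    },
}

reject = {
    "type": "reject",
    "feedback": "Cannot send without manager approval",
}

decisions = [approve, edit, reject]
assert all(d["type"] in {"approve", "edit", "reject"} for d in decisions)
```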


## Human-in-the-Loop

<python>
Set up an agent with HITL middleware that pauses before sending emails for approval.
```python
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langgraph.checkpoint.memory import MemorySaver
from langchain.tools import tool

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email."""
    return f"Email sent to {to}"

agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required for HITL
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
            }
        )
    ],
)
```
</python>
<typescript>
Set up an agent with HITL that pauses before sending emails for human approval.
```typescript
import { createAgent, humanInTheLoopMiddleware } from "langchain";
import { MemorySaver } from "@langchain/langgraph";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const sendEmail = tool(
  async ({ to, subject, body }) => `Email sent to ${to}`,
  {
    name: "send_email",
    description: "Send an email",
    schema: z.object({ to: z.string(), subject: z.string(), body: z.string() }),
  }
);

const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5",
  tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [
    humanInTheLoopMiddleware({
      interruptOn: { send_email: { allowedDecisions: ["approve", "edit", "reject"] } },
    }),
  ],
});
```
</typescript>

<python>
Run the agent, detect an interrupt, then resume execution after human approval.
```python
from langgraph.types import Command

config = {"configurable": {"thread_id": "session-1"}}

# Step 1: Agent runs until it needs to call a tool
result1 = agent.invoke(
    {"messages": [{"role": "user", "content": "Send email to [email protected]"}]},
    config=config,
)

# Check for interrupt
if "__interrupt__" in result1:
    print(f"Waiting for approval: {result1['__interrupt__']}")

# Step 2: Human approves
result2 = agent.invoke(
    Command(resume={"decisions": [{"type": "approve"}]}),
    config=config,
)
```
</python>
<typescript>
Run the agent, detect an interrupt, then resume execution after human approval.
```typescript
import { Command } from "@langchain/langgraph";

const config = { configurable: { thread_id: "session-1" } };

// Step 1: Agent runs until it needs to call tool
const result1 = await agent.invoke({
  messages: [{ role: "user", content: "Send email to [email protected]" }]
}, config);

// Check for interrupt
if (result1.__interrupt__) {
  console.log(`Waiting for approval: ${result1.__interrupt__}`);
}

// Step 2: Human approves
const result2 = await agent.invoke(
  new Command({ resume: { decisions: [{ type: "approve" }] } }),
  config
);
```
</typescript>

<python>
Edit the tool arguments before approving when the original values need correction.
```python
# Human edits the arguments: edited_action must include name + args
result2 = agent.invoke(
    Command(resume={
        "decisions": [{
            "type": "edit",
            "edited_action": {
                "name": "send_email",
                "args": {
                    "to": "[email protected]",  # Fixed email
                    "subject": "Project Meeting - Updated",
                    "body": "...",
                },
            },
        }]
    }),
    config=config,
)
```
</python>
<typescript>
Edit the tool arguments before approving when the original values need correction.
```typescript
// Human edits the arguments: editedAction must include name + args
const result2 = await agent.invoke(
  new Command({
    resume: {
      decisions: [{
        type: "edit",
        editedAction: {
          name: "send_email",
          args: {
            to: "[email protected]",  // Fixed email
            subject: "Project Meeting - Updated",
            body: "...",
          },
        },
      }],
    },
  }),
  config
);
```
</typescript>

<python>
Reject a tool call and provide feedback explaining why it was rejected.
```python
# Human rejects
result2 = agent.invoke(
    Command(resume={
        "decisions": [{
            "type": "reject",
            "feedback": "Cannot delete customer data without manager approval",
        }]
    }),
    config=config,
)
```
</python>

<python>
Configure different HITL policies for each tool based on risk level.
```python
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email, read_email, delete_email],
    checkpointer=MemorySaver(),
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
                "delete_email": {"allowed_decisions": ["approve", "reject"]},  # No edit
                "read_email": False,  # No HITL for reading
            }
        )
    ],
)
```
</python>

### What You CAN Configure
  • Which tools require approval (per-tool policies)
  • Allowed decisions per tool (approve, edit, reject)
  • Custom middleware hooks: before_model, after_model, wrap_tool_call, before_agent, after_agent
  • Tool-specific middleware (apply only to certain tools)
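One way to get tool-specific behavior with only the APIs shown in this skill is to branch on the tool name inside a single wrap hook. Below is a library-free sketch of that idea; the wrapper names and the `PER_TOOL` policy table are hypothetical stand-ins, not LangChain APIs.

```python
# Toy sketch: route each tool call through a per-tool wrapper.
def log_wrapper(request, handler):
    return f"[logged] {handler(request)}"

def block_wrapper(request, handler):
    return "blocked: requires human approval"  # short-circuit, handler never runs

PER_TOOL = {
    "read_email": log_wrapper,
    "delete_email": block_wrapper,
}

def dispatch(request, handler):
    # Apply a wrapper only when one is registered for this tool.
    wrapper = PER_TOOL.get(request["name"])
    return wrapper(request, handler) if wrapper else handler(request)

def run(request):
    # Stand-in for real tool execution.
    return f"ran {request['name']}"

print(dispatch({"name": "read_email"}, run))    # [logged] ran read_email
print(dispatch({"name": "delete_email"}, run))  # blocked: requires human approval
print(dispatch({"name": "send_email"}, run))    # ran send_email
```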

## Custom Middleware Hooks

Six decorator hooks are available. Two patterns:

  • Wrap hooks (wrap_tool_call, wrap_model_call): receive (request, handler); call handler(request) to proceed, or return early to short-circuit.
  • Before/after hooks (before_model, after_model, before_agent, after_agent): receive (state, runtime); inspect or modify state, returning None or a dict of state updates.
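The two control-flow shapes can be illustrated without LangChain at all. The sketch below uses toy stand-ins (`run_tool`, plain dicts for request and state); the real hooks receive richer request/state objects.

```python
# Toy illustration of the two middleware patterns.

def run_tool(request):
    # Stand-in for real tool execution (the `handler` in wrap hooks).
    return f"ran {request['name']}"

# Wrap pattern: (request, handler). Call handler to proceed,
# or return early to short-circuit.
def guard(request, handler):
    if request["name"] == "dangerous_tool":
        return "This tool is disabled"  # handler never runs
    return handler(request)

# Before/after pattern: inspect state; return None or a dict of updates.
def trim_history(state):
    if len(state["messages"]) > 10:
        return {"messages": state["messages"][-10:]}  # state update
    return None  # no update

print(guard({"name": "dangerous_tool"}, run_tool))  # This tool is disabled
print(guard({"name": "search"}, run_tool))          # ran search
```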
<python>
`@wrap_tool_call` intercepts tool execution. **Do NOT use `yield`** inside the hook: it turns the function into a generator and raises `NotImplementedError`.
```python
from langchain.agents.middleware import wrap_tool_call

@wrap_tool_call
def retry_middleware(request, handler):
    # Retry the tool up to 3 times before re-raising
    for attempt in range(3):
        try:
            return handler(request)
        except Exception:
            if attempt == 2:
                raise

@wrap_tool_call
def guard_middleware(request, handler):
    if request.tool_call["name"] == "dangerous_tool":
        return "This tool is disabled"  # short-circuit
    return handler(request)
```
</python>
<typescript>
`createMiddleware({ wrapToolCall })` intercepts tool execution.
```typescript
import { createMiddleware } from "langchain";

const retryMiddleware = createMiddleware({
  wrapToolCall: async (request, handler) => {
    // Retry the tool up to 3 times before re-throwing
    for (let attempt = 0; attempt < 3; attempt++) {
      try { return await handler(request); }
      catch (e) { if (attempt === 2) throw e; }
    }
  },
});
```
</typescript>
<python>
`before_model` / `after_model` / `before_agent` / `after_agent` all share the `(state, runtime)` signature.
```python
from langchain.agents.middleware import before_model, after_model

@before_model
def log_calls(state, runtime):
    print(f"Calling model with {len(state['messages'])} messages")

@after_model
def check_output(state, runtime):
    print("Model responded")
```
</python>
<typescript>
All before/after hooks share the same `(state, runtime)` signature via `createMiddleware`.
```typescript
import { createMiddleware } from "langchain";

const loggingMiddleware = createMiddleware({
  beforeModel: (state, runtime) => {
    console.log(`Calling model with ${state.messages.length} messages`);
  },
  afterModel: (state, runtime) => {
    console.log("Model responded");
  },
});
```
</typescript>
### What You CANNOT Configure
  • Interrupt after tool execution (must be before)
  • Skip checkpointer requirement for HITL
<python>
HITL middleware requires a checkpointer to persist state.
```python
# WRONG: No checkpointer
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    middleware=[HumanInTheLoopMiddleware({...})],
)

# CORRECT: Add checkpointer
agent = create_agent(
    model="gpt-4.1",
    tools=[send_email],
    checkpointer=MemorySaver(),  # Required
    middleware=[HumanInTheLoopMiddleware({...})],
)
```
</python>
<typescript>
HITL requires a checkpointer to persist state.
```typescript
// WRONG: No checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5", tools: [sendEmail],
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});

// CORRECT: Add checkpointer
const agent = createAgent({
  model: "anthropic:claude-sonnet-4-5", tools: [sendEmail],
  checkpointer: new MemorySaver(),
  middleware: [humanInTheLoopMiddleware({ interruptOn: { send_email: true } })],
});
```
</typescript>

<fix-no-thread-id>
<python>
Always provide thread_id when using HITL to track conversation state.
```python
# WRONG
agent.invoke(input)  # No config!

# CORRECT
agent.invoke(input, config={"configurable": {"thread_id": "user-123"}})
```
</python>
</fix-no-thread-id>

<fix-wrong-resume-syntax>
<python>
Use Command class to resume execution after an interrupt.
```python
# WRONG
agent.invoke({"resume": {"decisions": [...]}})

# CORRECT
from langgraph.types import Command
agent.invoke(Command(resume={"decisions": [{"type": "approve"}]}), config=config)
```
</python>
<typescript>
Use Command class to resume execution after an interrupt.
```typescript
// WRONG
await agent.invoke({ resume: { decisions: [...] } });

// CORRECT
import { Command } from "@langchain/langgraph";
await agent.invoke(new Command({ resume: { decisions: [{ type: "approve" }] } }), config);
```
</typescript>
</fix-wrong-resume-syntax>

