Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
Many AI applications interact with users via natural language. However, some use cases require models to interface directly with external systems, such as APIs, databases, or file systems, using structured input. Tools are components that agents call to perform actions. They extend model capabilities by letting models interact with the world through well-defined inputs and outputs. A tool encapsulates a callable function and its input schema. Tools can be passed to compatible chat models, allowing the model to decide whether to invoke a tool and with what arguments. In these scenarios, tool calling enables the model to generate requests that conform to the tool's input schema.
The simplest way to create a tool is by importing the tool function from the langchain package. You can use zod to define the tool’s input schema:
```typescript
import { z } from "zod"
import { tool } from "langchain"

const searchDatabase = tool(
  ({ query, limit }) => {
    return `Found ${limit} results for '${query}'`
  },
  {
    name: "search_database",
    description: "Search the customer database for records matching the query.",
    schema: z.object({
      query: z.string().describe("Search terms to look for"),
      limit: z.number().describe("Maximum number of results to return"),
    }),
  }
)
```
Alternatively, you can define the schema property as a JSON schema object:
```typescript
const searchDatabase = tool(
  (input) => {
    const { query, limit } = input as { query: string; limit: number }
    return `Found ${limit} results for '${query}'`
  },
  {
    name: "search_database",
    description: "Search the customer database for records matching the query.",
    schema: {
      type: "object",
      properties: {
        query: { type: "string", description: "Search terms to look for" },
        limit: { type: "number", description: "Maximum number of results to return" },
      },
      required: ["query", "limit"],
    },
  }
)
```
ToolNode is a prebuilt LangGraph component that handles tool calls within an agent's workflow. It works seamlessly with createAgent(), offering advanced tool execution control, built-in parallelism, and error handling.
ToolNode provides built-in error handling for tool execution through its handleToolErrors property. To customize the error handling behavior, you can set handleToolErrors to either a boolean or a custom error handler function:
true (default): Catch all errors and return a ToolMessage with the default error template containing the exception details.
false: Disable error handling entirely, allowing exceptions to propagate.
((error: unknown, toolCall: ToolCall) => ToolMessage | undefined): Catch all errors and return a ToolMessage with the result of calling the function with the exception.
Examples of how to use the different error handling strategies:
```typescript
// Catch all errors and return the default error template (default behavior)
const toolNode = new ToolNode([my_tool], {
  handleToolErrors: true,
})

// Return a custom ToolMessage for any error
const toolNode = new ToolNode([my_tool], {
  handleToolErrors: (error, toolCall) => {
    return new ToolMessage({
      content: "I encountered an issue. Please try rephrasing your request.",
      tool_call_id: toolCall.id,
    })
  },
})
```
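The third option from the list above, handleToolErrors: false, disables the built-in catch entirely so tool exceptions propagate to the caller. A minimal configuration sketch (assuming `my_tool` is a tool defined elsewhere):

```typescript
// Exceptions thrown by tools are not converted into ToolMessages;
// they propagate, so wrap agent.invoke() in your own try/catch
const strictToolNode = new ToolNode([my_tool], {
  handleToolErrors: false,
})
```

This is useful when you want failures to surface loudly during development rather than being paraphrased back to the model.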
Pass a configured ToolNode directly to createAgent():
```typescript
import { z } from "zod"
import { ChatOpenAI } from "@langchain/openai"
import { ToolNode, createAgent, tool } from "langchain"
import { ToolMessage } from "@langchain/core/messages"

const searchDatabase = tool(
  ({ query }) => {
    return `Results for: ${query}`
  },
  {
    name: "search_database",
    description: "Search the database.",
    schema: z.object({
      query: z.string().describe("The query to search the database with"),
    }),
  }
)

const sendEmail = tool(
  ({ to, subject, body }) => {
    return `Email sent to ${to}`
  },
  {
    name: "send_email",
    description: "Send an email.",
    schema: z.object({
      to: z.string().describe("The email address to send the email to"),
      subject: z.string().describe("The subject of the email"),
      body: z.string().describe("The body of the email"),
    }),
  }
)

// Configure ToolNode with custom error handling
const toolNode = new ToolNode([searchDatabase, sendEmail], {
  name: "email_tools",
  handleToolErrors: (error, toolCall) => {
    return new ToolMessage({
      content: "I encountered an issue. Please try rephrasing your request.",
      tool_call_id: toolCall.id,
    })
  },
})

// Create agent with the configured ToolNode
const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-5" }),
  tools: toolNode, // Pass ToolNode instead of a tools list
  prompt: "You are a helpful email assistant.",
})

// The agent will use your custom ToolNode configuration
const result = await agent.invoke({
  messages: [{ role: "user", content: "Search for John and email him" }],
})
```
When you pass a ToolNode to createAgent(), the agent uses your exact configuration, including error handling, custom names, and tags. This is useful when you need fine-grained control over tool execution behavior.
runtime: The execution environment of your agent, containing immutable configuration and contextual data that persists throughout the agent’s execution (e.g., user IDs, session details, or application-specific configuration).
Tools can access an agent’s runtime context through the config parameter:
```typescript
import { z } from "zod"
import { ChatOpenAI } from "@langchain/openai"
import { tool, createAgent } from "langchain"

const getUserName = tool(
  (_, config) => {
    return config.context.user_name
  },
  {
    name: "get_user_name",
    description: "Get the user's name.",
    schema: z.object({}),
  }
)

const contextSchema = z.object({
  user_name: z.string(),
})

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4o" }),
  tools: [getUserName],
  contextSchema,
})

const result = await agent.invoke(
  { messages: [{ role: "user", content: "What is my name?" }] },
  { context: { user_name: "John Smith" } }
)
```
Accessing long-term memory inside a tool
store: LangChain's persistence layer; an agent's long-term memory store, holding user-specific or application-specific data that persists across conversations.
You can initialize an InMemoryStore to store long-term memory:
```typescript
import { z } from "zod";
import { createAgent, tool, InMemoryStore } from "langchain";
import { ChatOpenAI } from "@langchain/openai";

const store = new InMemoryStore();

const getUserInfo = tool(
  async ({ user_id }) => {
    return await store.get(["users"], user_id);
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({ user_id: z.string() }),
  }
);

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4o" }),
  tools: [getUserInfo],
  store,
});
```
Updating long-term memory inside a tool
To update long-term memory, you can use the .put() method of InMemoryStore. A complete example of persistent memory across sessions:
```typescript
import { z } from "zod";
import { createAgent, tool, InMemoryStore } from "langchain";
import { ChatOpenAI } from "@langchain/openai";

const store = new InMemoryStore();

const getUserInfo = tool(
  async ({ user_id }) => {
    const value = await store.get(["users"], user_id);
    console.log("get_user_info", user_id, value);
    return value;
  },
  {
    name: "get_user_info",
    description: "Look up user info.",
    schema: z.object({
      user_id: z.string(),
    }),
  }
);

const saveUserInfo = tool(
  async ({ user_id, name, age, email }) => {
    console.log("save_user_info", user_id, name, age, email);
    await store.put(["users"], user_id, { name, age, email });
    return "Successfully saved user info.";
  },
  {
    name: "save_user_info",
    description: "Save user info.",
    schema: z.object({
      user_id: z.string(),
      name: z.string(),
      age: z.number(),
      email: z.string(),
    }),
  }
);

const agent = createAgent({
  model: new ChatOpenAI({ model: "gpt-4o" }),
  tools: [getUserInfo, saveUserInfo],
  store,
});

// First session: save user info
await agent.invoke({
  messages: [
    {
      role: "user",
      content:
        "Save the following user: userid: abc123, name: Foo, age: 25, email: foo@langchain.dev",
    },
  ],
});

// Second session: get user info
const result = await agent.invoke({
  messages: [
    { role: "user", content: "Get user info for user with id 'abc123'" },
  ],
});
console.log(result);
// Here is the user info for user with ID "abc123":
// - Name: Foo
// - Age: 25
// - Email: foo@langchain.dev
```