Alpha Notice: These docs cover the v1-alpha release. Content is incomplete and subject to change. For the latest stable version, see the v0 LangChain Python or LangChain JavaScript docs.
Overview
LangChain's `create_agent` runs on LangGraph's runtime under the hood.
LangGraph exposes a Runtime object with the following information:
- Context: static information like user id, db connections, or other dependencies for an agent invocation
- Store: a BaseStore instance used for long-term memory
- Stream writer: an object used for streaming information via the "custom" stream mode
Access
When creating an agent with `create_agent`, you can specify a `context_schema` to define the structure of the context stored in the agent `Runtime`. When invoking the agent, pass the `context` argument with the relevant configuration for the run:
Inside tools
You can access the runtime information inside tools to:
- Access the context
- Read or write long-term memory
- Write to the custom stream (e.g., tool progress updates)
Use the `get_runtime` function from `langgraph.runtime` to access the `Runtime` object inside a tool.
Inside prompt
Use the `get_runtime` function from `langgraph.runtime` to access the `Runtime` object inside a prompt function.
Inside pre and post model hooks
To access the underlying graph runtime information in a pre or post model hook, you can:
- Use the `get_runtime` function from `langgraph.runtime` to access the `Runtime` object inside the hook
- Inject the `Runtime` directly via the hook signature