Summarizer node
The Summarizer node helps you keep your conversation history short and useful.
It takes the messages that have been exchanged, runs them through a language model, and produces a concise summary that can be stored or used in later steps of your workflow.
How it Works
When you add the Summarizer node to a workflow, it collects all the messages stored in memory so far.
It then sends those messages, together with a prompt you provide, to the language model you selected (the llm input).
The model returns a short summary that fits within the token limits you set.
That summary can be stored back into memory or passed to another component for further processing.
The node does not call any external APIs itself; it relies on the language model you provide.
If you use an OpenAI model, the request will go to the OpenAI API; if you use a local model, the request will stay on your machine.
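The flow described above can be sketched in a few lines of Python. This is illustrative only: the node's real internals are not part of this document, and the function names, prompt defaults, and message format here are assumptions.

```python
def summarize(messages, llm, existing_summary=None,
              initial_prompt="Create a summary of the conversation so far:",
              existing_prompt=("This is a summary of the conversation so far: "
                               "{existing_summary}. Extend this summary by "
                               "taking into account the new messages above:")):
    """Pick a prompt based on whether a summary already exists, then call the model."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    if existing_summary:
        prompt = existing_prompt.format(existing_summary=existing_summary)
    else:
        prompt = initial_prompt
    return llm(transcript + "\n\n" + prompt)

def fake_llm(prompt):
    # Stand-in for a real model so the sketch runs offline.
    return "Summary of: " + prompt.splitlines()[0]

msgs = [{"role": "user", "content": "Hi"},
        {"role": "assistant", "content": "Hello!"}]
print(summarize(msgs, fake_llm))  # Summary of: user: Hi
```

Swapping `fake_llm` for a real model client is all that changes between a local and a hosted setup, which is why the data-flow point above matters.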
Inputs
- llm: The language model that performs the summarization. Example: openai-gpt-4o-mini
- Existing Summarization Prompt: The prompt used when a summary already exists and must be extended with the new messages. Example: This is a summary of the conversation so far: {existing_summary}. Extend this summary by taking into account the new messages above:
- Default Final Prompt: The prompt used to produce the final summary. Example: Summarize the conversation in a few sentences:
- Initial Summarization Prompt: The prompt used when the memory is empty and a new summary must be created. Example: Create a summary of the conversation so far:
- Max Summary Tokens: The maximum number of tokens the summary itself may contain. Example: 200
- Max Tokens: The maximum total number of tokens that can be sent to the model, including all messages and the summary. Example: 1000
- Output Messages Key: The key under which the summary is stored in the output messages dictionary. Example: summary
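The way the two token limits interact is easiest to see with a toy check. This is an illustration only: it uses a crude word count in place of a real tokenizer, and fits_budget is not part of the node.

```python
def fits_budget(messages, summary, max_tokens=1000, max_summary_tokens=200):
    # Crude stand-in tokenizer: one word = one token.
    count = lambda text: len(text.split())
    summary_ok = count(summary) <= max_summary_tokens         # Max Summary Tokens
    total = sum(count(m) for m in messages) + count(summary)  # Max Tokens
    return summary_ok and total <= max_tokens

print(fits_budget(["hello there", "hi"], "greeting exchanged"))  # True
```

In short, Max Summary Tokens caps the output, while Max Tokens caps the whole request the model sees.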
Outputs
- SummarizeNode
The node that you can add to your workflow.
It contains the logic to gather messages, call the language model, and return the summary.
You can connect this output to other components that need the summarized text.
Usage Example
- Drag the Summarizer node onto the canvas.
- In the llm field, choose the model you want to use (e.g., openai-gpt-4o-mini).
- Leave the prompts at their default values or edit them to fit your style.
- Set Max Summary Tokens to 150 and Max Tokens to 800.
- Choose summary for Output Messages Key.
- Connect the node’s output to a Memory Store component so the summary is saved for future steps.
Now, whenever new messages arrive, the Summarizer node will automatically update the summary, keeping your memory concise and relevant.
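Conceptually, that update loop looks like the sketch below. The memory dict and the helper names are stand-ins for the real Memory Store and Summarizer components, not actual APIs.

```python
memory = {"messages": [], "summary": ""}

def on_new_message(memory, message, summarizer):
    # Append the message, then refresh the stored summary.
    memory["messages"].append(message)
    memory["summary"] = summarizer(memory["messages"], memory["summary"])
    return memory

def toy_summarizer(messages, prior_summary):
    # Stand-in for the Summarizer node itself.
    return f"conversation has {len(messages)} message(s)"

on_new_message(memory, "hi", toy_summarizer)
print(memory["summary"])  # conversation has 1 message(s)
```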
Related Components
- LangmemRetrieverNode – Retrieves past messages from memory.
- LangmemMemoryNode – Stores and manages conversation history.
- LangmemPromptNode – Creates prompts for language models.
Tips and Best Practices
- Keep prompts short and clear; long prompts can increase token usage.
- Adjust Max Summary Tokens to balance detail and brevity.
- Test the summarizer with a small set of messages before deploying it in a production workflow.
- If you need the summary in a specific format (e.g., bullet points), include that instruction in the prompt.
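For that last tip, a format instruction can be as simple as the following; the exact wording is just one possibility.

```python
# One possible Default Final Prompt requesting bullet-point output.
final_prompt = (
    "Summarize the conversation as 3-5 concise bullet points, "
    "one fact per bullet:"
)
print(final_prompt)
```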
Security Considerations
- The language model may send your conversation data to an external provider (e.g., OpenAI).
- If your data is sensitive, consider using a local model or a private deployment.
- Review the privacy policy of the model provider and ensure compliance with your organization’s data‑handling rules.