Reflection Agent
The Reflection Agent lets you build a Langgraph assistant that can review and improve its own answers. It runs the main assistant, then asks a separate “judge” agent to score and comment on the reply. The final response is the one that passes the judge’s criteria or the best one found after a set number of reflection cycles.
How it Works
- Main Assistant – The component first runs the main Langgraph graph (the Agent input). This graph produces an initial answer to the user’s question.
- Reflection Cycle – The answer is sent to the Judge Agent, which must return a JSON object containing a `score` and a `comment`. `score` tells whether the answer is good enough (e.g., 1 = acceptable, 0 = needs improvement); `comment` gives a short explanation or suggestion for improvement.
- Iteration – If the score indicates the answer needs improvement, the component repeats the cycle up to Max Iterations times. Each cycle can optionally modify the prompt or add the judge’s comment to the next attempt.
- Output – Once a satisfactory answer is found or the maximum iterations are reached, the component outputs:
  - The compiled agent graph (`Agent` output) – useful if you want to reuse the same agent elsewhere.
  - The final message (`Response` output) – the answer shown to the user.
  - A tool representation (`Tool` output) – if you want to expose the agent as a reusable tool in other workflows.
No external APIs are called directly by this component; it relies on the Langgraph framework and the language model you provide.
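The cycle described above can be sketched in plain Python. Everything here is illustrative: the `run_agent` and `run_judge` callables, the key names, and the feedback-prompt format stand in for the component's internal logic, which is handled for you by the Langgraph framework.

```python
import json

SCORE_KEY = "score"      # configurable via the Score Key input
COMMENT_KEY = "comment"  # configurable via the Comment Key input
MAX_ITERATIONS = 3       # configurable via the Max Iterations input

def reflect(run_agent, run_judge, user_input):
    """Run the main agent, then loop: judge the answer, retry with feedback."""
    prompt = user_input
    answer = run_agent(prompt)
    for _ in range(MAX_ITERATIONS):
        verdict = json.loads(run_judge(answer))
        if verdict[SCORE_KEY] >= 1:  # 1 = acceptable
            break
        # Feed the judge's comment back into the next attempt.
        prompt = f"{user_input}\n\nReviewer feedback: {verdict[COMMENT_KEY]}"
        answer = run_agent(prompt)
    return answer
```

The loop exits as soon as the judge accepts an answer, so with a well-tuned judge most runs finish in one or two cycles.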
Inputs
- Agent: The main assistant graph that will generate responses.
- Judge Agent: The reflection agent that will evaluate responses. It must return a JSON object with `score` and `comment` keys.
- Model: The language model to use for the agent if no main agent is provided.
- Comment Key: The key in the judge’s response that contains the comment.
- Agent Description: A description of the agent, used when exposing it as a tool.
- Agent Name: The name of the reflection agent to use.
- Input: The user’s message or prompt that will be sent to the main agent.
- Max Execution Time: The maximum execution time in seconds for the entire reflection process.
- Max Iterations: The maximum number of reflection cycles before accepting the response.
- Score Key: The key in the judge’s response that indicates whether the response is satisfactory.
- Show Reflections: If checked, the component will display each reflection step in the dashboard.
- Tool Schema: Metadata schema to use for the Agent as a Tool.
- Stream: If checked, the response will be streamed back to the user as it is generated.
- Verbose: If checked, the component will log detailed debugging information.
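With the default Score Key and Comment Key, a judge reply the component can parse looks like this (the comment text is just an example):

```json
{
  "score": 0,
  "comment": "The answer is missing a concrete example; add one."
}
```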
Outputs
- Agent – A compiled Langgraph graph (`CompiledGraph`) that can be reused elsewhere.
- Response – The final message (`Message`) that will be shown to the user.
- Tool – A reusable tool (`BaseTool`) that can be added to other workflows.
Usage Example
- Add the Reflection Agent to your workflow.
- Connect the main assistant graph to the Agent input.
- Connect a simple judge graph (e.g., a small Langgraph that checks for a keyword or length) to the Judge Agent input.
- Set Max Iterations to 3 and Show Reflections to true so you can see how the answer improves.
- Run the workflow. The component will keep refining the answer until the judge scores it as satisfactory or the iteration limit is reached. The final answer appears in the Response output.
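The simple judge suggested in step 3 can be as small as a length check that returns the expected JSON. This is an illustrative stand-alone function, not a real Langgraph graph; in practice you would wrap equivalent logic in the graph you connect to the Judge Agent input:

```python
import json

def simple_judge(answer: str, min_words: int = 30) -> str:
    """Return a judge verdict: score 1 if the answer has at
    least `min_words` words, else 0 with a suggestion."""
    ok = len(answer.split()) >= min_words
    return json.dumps({
        "score": 1 if ok else 0,
        "comment": "Looks complete." if ok
                   else f"Expand the answer to at least {min_words} words.",
    })
```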
Related Components
- LanggraphAgent – Builds a basic Langgraph assistant without reflection.
- LanggraphReflectionBase – The base class that provides the reflection logic.
- CompiledGraph – Represents a ready‑to‑run Langgraph graph.
- BaseTool – Allows you to expose the agent as a tool in other workflows.
Tips and Best Practices
- Keep Max Iterations low (e.g., 2–3) to avoid long runtimes.
- Choose a model that balances speed and quality; larger models often produce better reflections.
- Use Show Reflections during testing to understand how the judge influences the answer.
- If you need the agent to act as a tool, fill in Tool Schema with the required metadata.
- Set Verbose only when troubleshooting; it can generate a lot of logs.
Security Considerations
- The component relies on the language model you provide; keep your API keys secure and follow the provider’s best‑practice guidelines.
- The judge agent should not expose sensitive data; ensure its output is sanitized before being used in the final response.
- If you enable Stream, be aware that partial responses may be visible to users before the final answer is ready.