Agent
Run any LangChain agent using a simplified interface.
How it Works
The Agent component lets you pick a pre‑built LangChain agent (such as “ZeroShotAgent”) and run it with a language model, a set of tools, and a prompt. When you drop the component into a workflow, you choose the agent, supply the LLM you want to use, add any tools the agent can call, and write a prompt that tells the LLM what to do. The component builds a chat prompt that combines the system message, the user’s question, and a placeholder where the agent writes its next step. It then runs the agent, which may call tools, generate text, or ask for more information. The final answer is returned as plain text.
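The loop described above can be pictured in plain Python. This is a conceptual sketch only, not the component's actual code: the LLM is a stub that first requests a tool and then produces a final answer, and the single "search" tool is a hypothetical stand-in.

```python
# Conceptual sketch of the agent loop (stubbed LLM and tool, not real LangChain code).

TOOLS = {
    "search": lambda q: "Paris" if "France" in q else "unknown",
}

def stub_llm(transcript: str) -> str:
    """Stand-in for the language model: request a tool first, then answer."""
    if "Observation:" not in transcript:
        return "Action: search\nAction Input: capital of France"
    return "Final Answer: Paris"

def run_agent(question: str, system_message: str) -> str:
    # Build the chat prompt: system message, then the user's question.
    transcript = f"System: {system_message}\nUser: {question}\n"
    while True:
        step = stub_llm(transcript)
        if step.startswith("Final Answer:"):
            return step.split("Final Answer:", 1)[1].strip()
        # Parse the tool call, run the tool, and append the observation.
        tool_name = step.split("Action:")[1].split("\n")[0].strip()
        tool_input = step.split("Action Input:", 1)[1].strip()
        observation = TOOLS[tool_name](tool_input)
        transcript += f"{step}\nObservation: {observation}\n"

answer = run_agent("What is the capital of France?", "You are a helpful assistant.")
```

A real run replaces the stub with the connected LLM and the dictionary with the Tool components wired into the Tools input.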
The component can also pull a prompt from LangChain Hub if you provide an API key and the agent has a hub repository. This lets you use ready‑made prompts without writing them yourself.
Inputs
- Agent: The name of the LangChain agent you want to run. Visible in: All
- LLM: The language model that will generate text. Visible in: All
- Tools: A list of tools the agent can call (e.g., a calculator, a web search tool). Visible in: All
- Prompt: The user prompt that will be sent to the LLM. It can contain placeholders like {input} that will be replaced with the value you provide in the Input field. Visible in: All
- System Message: A short instruction that is sent to the LLM before the user prompt. It sets the tone or role of the assistant. Visible in: All
- Tool Template: How each tool is shown in the prompt. The default shows the tool name and description. Visible in: All
- Handle Parsing Errors: If checked, the agent will try to recover from errors when it can’t parse the LLM’s output. If unchecked, the component will raise an error. Visible in: All
- Message History: A list of previous messages that will be included in the chat history sent to the agent. Visible in: All
- Input: The text you want the agent to process (e.g., a question or a task description). Visible in: All
- LangChain Hub API Key: Optional key that lets the component fetch prompts from LangChain Hub. Visible in: All
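To see how the Prompt, System Message, Tool Template, and Input fields fit together, here is an illustrative assembly of the chat prompt. The field values and layout are examples; the component's exact internal template may differ.

```python
# Illustrative assembly of the chat prompt from the inputs above
# (the component's exact formatting may differ).

tools = [
    {"name": "calculator", "description": "Evaluates arithmetic expressions."},
    {"name": "web_search", "description": "Searches the web."},
]

tool_template = "{name}: {description}"                   # Tool Template input
system_message = "You are a helpful assistant."           # System Message input
prompt_template = "Answer the user's question: {input}"   # Prompt input
user_input = "What is 12 times 8?"                        # Input field

# Each tool is rendered through the tool template, then the pieces are joined.
tool_block = "\n".join(tool_template.format(**t) for t in tools)
chat_prompt = (
    f"{system_message}\n\n"
    f"Available tools:\n{tool_block}\n\n"
    f"{prompt_template.format(input=user_input)}"
)
print(chat_prompt)
```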
Outputs
The component returns a single text string – the answer produced by the chosen agent. You can feed this output into other components, display it in a dashboard, or use it to trigger further actions.
Usage Example
- Drag the Agent component onto the canvas.
- In the Agent field, pick “ZeroShotAgent”.
- Connect an LLM component (e.g., OpenAI GPT‑4) to the LLM input.
- Add a Tool component (e.g., a calculator) and connect it to the Tools input.
- In the Prompt field, type:
You are a helpful assistant. Answer the user’s question: {input}
- In the Input field, type: “What is 12 times 8?”
- Run the workflow. The component will call the agent, which will use the calculator tool, and the output will be “96”.
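To see why this example returns “96”, here is what the calculator tool step boils down to. This is a stand-in implementation; the actual Tool component may evaluate expressions differently.

```python
# Stand-in for the calculator tool in the example above.
def calculator(expression: str) -> str:
    # eval with no builtins restricts this to plain arithmetic.
    return str(eval(expression, {"__builtins__": {}}))

# The agent translates "What is 12 times 8?" into a tool call like:
result = calculator("12 * 8")  # "96"
```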
Related Components
- LLM – Choose the language model that the agent will use.
- Tool – Define a tool that the agent can call (e.g., calculator, database query).
- Chat Prompt – Build custom prompts that include system messages and user input.
Tips and Best Practices
- Keep the Prompt short and clear; the agent will read it as part of the conversation.
- Use the System Message to set the assistant’s role (e.g., “You are a friendly tutor”).
- If you need the agent to remember earlier messages, add them to Message History.
- Enable Handle Parsing Errors when you want the agent to try again automatically if the LLM output is malformed.
- When using LangChain Hub prompts, make sure the API key is kept secret and not exposed in shared workflows.
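The Handle Parsing Errors behavior can be pictured as a retry loop. This is a conceptual sketch under assumed names (parse_final_answer, run_with_recovery); the component's actual recovery strategy may differ.

```python
# Conceptual sketch of the Handle Parsing Errors option as a retry loop.

def parse_final_answer(reply: str) -> str:
    """Raise ValueError when the reply is not in the expected format."""
    if not reply.startswith("Final Answer:"):
        raise ValueError(f"Could not parse LLM output: {reply!r}")
    return reply.split("Final Answer:", 1)[1].strip()

def run_with_recovery(llm_replies, handle_parsing_errors=True, max_retries=3):
    replies = iter(llm_replies)
    for _ in range(max_retries):
        reply = next(replies)
        try:
            return parse_final_answer(reply)
        except ValueError:
            if not handle_parsing_errors:
                raise  # surface the error to the user immediately
            # Otherwise loop: re-prompt the LLM and try again.
    raise ValueError("Giving up after repeated parsing failures.")

# A malformed first reply is retried instead of failing:
answer = run_with_recovery(["oops, bad format", "Final Answer: 96"])
```

With handle_parsing_errors=False, the first malformed reply raises the error directly instead of retrying.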
Security Considerations
The Agent component sends the user’s input and any message history to the chosen LLM, which may be an external service. Ensure that sensitive data is not sent to untrusted models. If you need to keep data on‑premises, use a local LLM or a private deployment. Also, be careful with tools that can execute code or access external systems; only enable tools you trust.