A2A Orchestrator
The A2A Orchestrator is a helper tool that selects the most suitable agent for a given request and then runs that agent automatically. You give it a description of the task, and it decides which agent to use and forwards the request to that agent.
How it Works
- Choosing an Agent – The component sends your request text and a list of available agents to a language model (LLM). The LLM returns the URL of the agent that should handle the task.
- Running the Agent – Once the agent URL is known, the component builds a payload with your original request and makes an HTTP POST call to that URL. The response from the agent is returned as the execution result.
The component relies on two helper functions:
- generate_a2a_select_agent_prompt – creates the prompt that the LLM uses to pick an agent.
- generate_run_agent_payload – builds the JSON payload that is sent to the chosen agent.
No external APIs beyond the LLM and the chosen agent’s endpoint are used.
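The two helpers and the overall select-then-run flow can be sketched roughly as below. This is a minimal illustration, not the component's actual implementation: the card fields (`name`, `description`, `url`), the prompt wording, the `{"input": ...}` payload shape, and the `orchestrate` function are all assumptions.

```python
import json
import urllib.request


def generate_a2a_select_agent_prompt(task: str, cards: list[dict]) -> str:
    """Build the agent-selection prompt (card fields are assumed)."""
    lines = [f"- {c['name']} ({c['url']}): {c['description']}" for c in cards]
    return (
        "Pick the best agent for the task below and reply with its URL only.\n"
        f"Task: {task}\nAgents:\n" + "\n".join(lines)
    )


def generate_run_agent_payload(task: str) -> dict:
    """Wrap the original request text in a minimal JSON payload (assumed shape)."""
    return {"input": task}


def orchestrate(task: str, cards: list[dict], llm) -> tuple[str, dict]:
    """Ask the LLM for an agent URL, then POST the payload to that URL."""
    agent_url = llm(generate_a2a_select_agent_prompt(task, cards)).strip()
    body = json.dumps(generate_run_agent_payload(task)).encode()
    req = urllib.request.Request(
        agent_url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return agent_url, json.load(resp)  # (Selected Agent, Execution Result)
```

In practice `llm` would be a call to the connected language model; here it is just a callable that returns the chosen URL, which also makes the flow easy to stub out in tests.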
Inputs
- Card List: A list of available A2A Card Data that the component can choose from.
- Model: The language model that will decide which agent to use.
- Input: The text describing the task you want to perform.
Outputs
- Selected Agent: The URL of the agent that the LLM selected.
- Execution Result: The JSON response returned by the agent after it processes your request.
Usage Example
- Add the component to your workflow.
- Connect the “Card List” to a component that provides the list of available agents.
- Connect the “Model” to a language‑model component (e.g., OpenAI GPT‑4).
- Enter a task in the “Input” field, such as “Generate a monthly sales report.”
- The component will output the chosen agent’s URL and the agent’s response, which you can then use in subsequent steps of your workflow.
Related Components
- LLM Selector – Choose which language model to use.
- Agent Runner – Manually run a specific agent without automatic selection.
Tips and Best Practices
- Keep the “Card List” up‑to‑date so the LLM can pick the best agent.
- Use a secure, authenticated LLM endpoint to protect sensitive data.
- Verify the agent URLs before connecting them to avoid accidental calls to wrong services.
Security Considerations
- The component sends your input text to the LLM and to the selected agent’s endpoint. Ensure both services use HTTPS and proper authentication.
- Review the payload format to avoid leaking confidential information.
- If the agent URL is user‑supplied, validate it to prevent open‑redirect or injection attacks.
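One simple way to validate a user-supplied agent URL before calling it is to require HTTPS and check the host against an allowlist. The allowlist approach and the example host below are illustrative assumptions, not part of the component.

```python
from urllib.parse import urlparse

# Hosts the orchestrator is allowed to call (illustrative allowlist).
ALLOWED_HOSTS = {"agents.example.com"}


def is_safe_agent_url(url: str) -> bool:
    """Accept only HTTPS URLs whose host appears on the allowlist."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in ALLOWED_HOSTS
```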