Sequential Crew

The Sequential Crew component lets you set up a group of AI agents that will work together in a specific order. Each agent takes a task, finishes it, and then passes the result to the next agent. This is useful when you need a step‑by‑step workflow, like gathering data, processing it, and then sending a report.

How it Works

When you add this component to your canvas, you provide a list of tasks. Each task is linked to an agent that knows how to do that job. The component builds a “crew” of these agents and runs them in strict sequence. It keeps track of the conversation history (memory), can cache results to save time, and can call external services if the agents need them. The final output is a single message containing the combined result of all the tasks.
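The flow described above can be sketched in plain Python. This is an illustration of the run-in-order, pass-the-result-along behavior; the function and task names are hypothetical, not the component's real API:

```python
# Illustrative sketch of a sequential crew: each agent consumes the
# previous agent's result plus the shared memory, and the final
# output is a single combined message.

def run_sequential_crew(tasks, use_memory=True):
    """tasks: list of (name, agent_fn) pairs run in strict order."""
    memory = []    # conversation history shared across steps
    result = None  # output of the most recent task
    for name, agent_fn in tasks:
        result = agent_fn(result, memory if use_memory else [])
        memory.append((name, result))
    return result  # the single final output

# Example: three toy "agents" chained together.
pipeline = [
    ("gather",  lambda prev, mem: "raw-data"),
    ("process", lambda prev, mem: prev.upper()),
    ("report",  lambda prev, mem: f"report: {prev}"),
]
print(run_sequential_crew(pipeline))  # report: RAW-DATA
```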

Inputs

Input Fields

  • Function Calling LLM: The language model the agents use when calling external functions or tools, which is useful for tasks that need API calls or database queries.
  • Tasks: A list of sequential tasks that the agents will perform. Each task is defined elsewhere in your workflow.
  • Max RPM: Sets the maximum number of requests per minute that the component can make to external services, helping you stay within rate limits.
  • Memory: Allows the crew to remember earlier steps, so later agents can use information from previous tasks.
  • Share Crew: If checked, shares complete crew information and execution data with the CrewAI team to help improve the library.
  • Cache: When enabled, the component stores results of tasks so that repeated runs can skip work that hasn’t changed.
  • Verbose: Turns on detailed logging, which is helpful for debugging but can slow down the process.
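To make the Cache and Max RPM fields concrete, here is a hedged sketch of how such settings might behave. The class, field names, and rate-limiting strategy are assumptions for illustration, not the component's internals:

```python
import time
from dataclasses import dataclass, field

@dataclass
class CrewSettings:
    max_rpm: int = 60      # ceiling on requests per minute
    cache: bool = True     # skip repeated identical task runs
    verbose: bool = False  # print per-task logs
    _cache: dict = field(default_factory=dict)
    _last_call: float = 0.0

    def run_task(self, key, fn):
        if self.cache and key in self._cache:
            if self.verbose:
                print(f"cache hit: {key}")
            return self._cache[key]
        # Crude rate limiting: space calls at least 60/max_rpm
        # seconds apart so we never exceed max_rpm per minute.
        min_gap = 60.0 / self.max_rpm
        wait = min_gap - (time.monotonic() - self._last_call)
        if wait > 0:
            time.sleep(wait)
        self._last_call = time.monotonic()
        result = fn()
        if self.cache:
            self._cache[key] = result
        return result

settings = CrewSettings(max_rpm=600, cache=True)
settings.run_task("collect", lambda: "data")  # executes fn
settings.run_task("collect", lambda: "data")  # served from cache
```

The same trade-off noted above applies here: caching saves repeated work, but cached results ignore changes in the underlying call until the cache is cleared.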

Outputs

  • Output: A Message object that contains the final result of the entire sequential process. You can feed this output into other components, display it on the dashboard, or store it for later use.

Usage Example

  1. Create a Sequential Crew
    Drag the Sequential Crew component onto the canvas.

  2. Add Tasks
    Connect a “Sequential Task” component for each step (e.g., “Collect Data”, “Analyze Data”, “Generate Report”). Link them in the order you want them to run.

  3. Configure Settings

    • Turn on Function Calling LLM if your agents need to call external APIs.
    • Set Max RPM to 60 to stay within most free-tier rate limits.
    • Enable Cache if you’ll run the same workflow many times.
    • Leave Verbose off for normal operation.
  4. Run the Workflow
    Click “Run” and watch the agents work one after another. The final Output will appear in the output panel.

  5. Use the Result
    Connect the Output to a “Display” component to show the report on the dashboard, or to a “Store” component to save it in a database.
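The five steps above can be mirrored in a toy end-to-end script. Everything here is illustrative stand-in code, since the real components are wired on the canvas rather than written by hand:

```python
# Toy version of the walkthrough: three sequential tasks whose final
# Output feeds a "Display" and a "Store" sink.

def collect_data(_prev):
    return ["3", "1", "2"]          # step: Collect Data

def analyze_data(rows):
    return sorted(int(r) for r in rows)  # step: Analyze Data

def generate_report(values):
    return f"Report: {len(values)} values, max={max(values)}"

def run(tasks, seed=None):
    out = seed
    for task in tasks:              # agents run one after another
        out = task(out)
    return out                      # the single final Output

output = run([collect_data, analyze_data, generate_report])

# Step 5: route the Output to downstream sinks.
display = print                # stand-in for a Display component
stored = {"report": output}    # stand-in for a Store component
display(output)                # Report: 3 values, max=3
```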

Related Components

  • Crew – The underlying CrewAI object that manages a group of agents.
  • Agent – A single AI worker that can perform a task.
  • Task – A unit of work that an agent can execute.
  • Sequential Task – A task designed to be part of a sequential crew.

Tips and Best Practices

  • Keep the number of tasks small to avoid long run times.
  • Use Cache when you have repetitive workflows to save on API usage.
  • Enable Verbose only when troubleshooting; it can produce a lot of logs.
  • If your agents need to share data, make sure Memory is enabled.
  • Test each task individually before adding it to the crew to ensure it behaves as expected.

Security Considerations

  • Be cautious when enabling Function Calling LLM; it allows agents to execute code or call APIs, which could expose sensitive data if not properly restricted.
  • Review the permissions of any external services your agents call.
  • Use Cache sparingly if the data is confidential; cached results may persist longer than intended.