Should Run Next

Should Run Next is a small helper that asks an AI model whether the next step in your workflow should run. You give it a question and some context, and it asks the model for a “yes” or “no” answer. If the answer is “no”, the workflow stops at that point.

How it Works

When you drop the component into a flow, you give it three pieces of information:

  1. Question – the prompt you want the AI to answer.
  2. Context – any text or data that the AI can use to decide.
  3. Retries – how many times the component should try again if the AI gives an unclear answer.

The component builds a short prompt that looks like this:

Given the following question and the context below, answer with a yes or no.

{error_message}

Question: {question}

Context: {context}

Answer:

It sends that prompt to the language model you selected. If the model returns “yes”, the component lets the flow continue. If it returns “no”, the component stops the flow so nothing after it runs. The component also keeps a status message that you can see in the dashboard: “Should Run Next: True” or “Should Run Next: False”.
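
The decision logic can be pictured with the rough sketch below. It is an illustration only, not the actual Nappai implementation: the should_run_next function, the StopFlow exception, and the llm object with an invoke(prompt) method are all assumptions made for the example.

    # Minimal sketch of the decision logic. The names below (should_run_next,
    # StopFlow, llm.invoke) are assumptions for illustration, not the real API.
    PROMPT_TEMPLATE = (
        "Given the following question and the context below, "
        "answer with a yes or no.\n\n"
        "{error_message}\n\n"
        "Question: {question}\n\n"
        "Context: {context}\n\n"
        "Answer:"
    )

    class StopFlow(Exception):
        """Raised to halt the workflow when the answer is 'no' or never becomes clear."""

    def should_run_next(llm, question, context, retries=3):
        error_message = ""
        for _ in range(max(1, retries)):
            prompt = PROMPT_TEMPLATE.format(
                error_message=error_message, question=question, context=context
            )
            answer = llm.invoke(prompt).strip().lower()
            if answer.startswith("yes"):
                # Status "Should Run Next: True": let the flow continue and
                # pass the context through unchanged.
                return context
            if answer.startswith("no"):
                # Status "Should Run Next: False": stop the flow here.
                raise StopFlow("Should Run Next: False")
            # Unclear answer: feed the problem back into the prompt and retry.
            error_message = (
                f"Your last answer ({answer!r}) was not a clear yes or no. "
                "Answer with only 'yes' or 'no'."
            )
        raise StopFlow("Should Run Next: no clear answer after retries")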

Inputs

  • llm: The language model you want to use (e.g., OpenAI GPT‑4). Visible in all uses of the component.
  • question: The question you want the AI to answer. Visible in all uses of the component.
  • context: Text or data that gives the AI the information it needs to decide. Visible in all uses of the component.
  • retries: How many times to retry if the AI’s answer isn’t a clear “yes” or “no”. Visible in all uses of the component.

Outputs

The component outputs the context you supplied unchanged, so you can pass the same context straight on to the next part of your workflow. Its real effect is the side effect: stopping the flow when the answer is “no”.
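
As a rough illustration of that pass-through, reusing the hypothetical should_run_next helper and llm object from the sketch above:

    original_context = "order #123, status: paid"
    # On a "yes" answer the helper hands the context back unchanged,
    # so the next step receives exactly what you supplied.
    passed_on = should_run_next(llm, "Is the order paid?", original_context)
    assert passed_on == original_context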

Usage Example

Imagine you have a step that should only run if a customer’s email address is verified. You could set up the component like this:

  1. Question: “Is the customer’s email verified?”
  2. Context: The customer record, which includes an email_verified field.
  3. LLM: Choose your preferred model.

If the AI says “yes”, the next step (e.g., sending a welcome email) runs. If it says “no”, the flow stops and no email is sent.
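
In code, that gate might look like the sketch below. It reuses the hypothetical should_run_next helper, StopFlow exception, and llm object from the earlier sketch; the customer record and send_welcome_email function are made up for this example.

    import json

    # Made-up customer record; in Nappai this would come from an earlier step.
    customer = {"name": "Ada", "email": "ada@example.com", "email_verified": True}

    def send_welcome_email(record):
        # Hypothetical "next step" that should only run on a "yes".
        print(f"Welcome email sent to {record['email']}")

    try:
        context = should_run_next(
            llm,
            question="Is the customer's email verified?",
            context=json.dumps(customer),
        )
        send_welcome_email(customer)  # runs only if the model answered "yes"
    except StopFlow:
        pass  # the flow stops here, so no welcome email goes out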

Related Components

  • LLM – The language model component that actually runs the AI.
  • Prompt – A component that lets you build custom prompts if you want more control.
  • Stop – A component that can halt a flow manually; “Should Run Next” uses a similar stop mechanism internally.

Tips and Best Practices

  • Keep the question short and clear; long questions can confuse the model.
  • Use a reliable LLM with good accuracy for yes/no decisions.
  • If you need to debug why a step stopped, check the status message “Should Run Next: False” in the dashboard.
  • Try setting retries to 1 or 2; most models give a clear answer on the first try.

Security Considerations

  • The component sends your question and context to the chosen LLM, so keep sensitive data out of the context or use a private model.
  • The component itself does not store or log the data; it only passes it to the LLM and returns the context.

That’s all you need to know to use “Should Run Next” in your Nappai dashboards.