Basic Chain

The Basic Chain component lets you send a prompt and some data to a language model (LLM) and get back a structured answer. It’s useful when you want the LLM to process information and return a JSON‑formatted response that can be used in the rest of your workflow.

How it Works

  1. Prompt Setup – You provide a System Prompt (instructions for the model) and a Prompt (the question or task).
  2. Chain Creation – The component builds a chat prompt that combines the system and user prompts.
  3. LLM Call – It sends this prompt to the chosen LLM.
  4. JSON Parsing – The LLM’s reply is parsed as JSON, so the output is a clean, machine‑readable object.
  5. Result – The parsed JSON (or plain text if parsing fails) is returned as the component’s result.

No external APIs are called directly from this component; it relies on the LLM you connect to. The flow is sketched below.
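Put together, the whole flow fits in a few lines. This is an illustrative sketch, not the component's actual source; the llm callable and the message format are assumptions standing in for whatever model you connect:

    import json

    def basic_chain(llm, system_prompt, prompt, input_data):
        # Steps 1-2: combine the system and user prompts into one chat prompt.
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"{prompt}\n\n{input_data}"},
        ]
        # Step 3: call the connected LLM and collect the full reply.
        reply = llm(messages)
        # Steps 4-5: parse the reply as JSON, falling back to plain text.
        try:
            return json.loads(reply)
        except (json.JSONDecodeError, TypeError):
            return reply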

Inputs

Input Fields

  • Model: The language model you want to use (e.g., GPT‑4).
  • Disable Streaming: When checked, turns off streaming so the LLM returns the full answer at once instead of token by token.
  • Input Data: Any data you want the LLM to consider when answering.
  • Prompt: The main question or instruction you give to the LLM.
  • System Prompt: Background instructions that set the tone or rules for the LLM.
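For example, a filled‑in set of inputs might look like the following (shown as a Python dict purely for illustration; the component's actual storage format is an assumption):

    inputs = {
        "model": "OpenAI GPT-4",    # connected Model component
        "disable_streaming": True,  # return the full answer at once
        "input_data": "Q1: 120 units, Q2: 180 units, Q3: 210 units",
        "prompt": "Summarize the following sales data in a short paragraph.",
        "system_prompt": "You are a helpful assistant that returns answers in JSON format.",
    }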

Outputs

  • Result: The processed answer from the LLM, parsed as JSON or plain text.
  • Runnable: The internal chain object that can be reused or inspected if needed.

Usage Example

  1. Add the Basic Chain component to your workflow.
  2. Connect a Model component (e.g., “OpenAI GPT‑4”) to the Model input.
  3. Set the System Prompt to something like:
    You are a helpful assistant that returns answers in JSON format.
  4. Set the Prompt to:
    Summarize the following sales data in a short paragraph.
  5. Provide Input Data (e.g., a table of sales figures).
  6. Run the workflow.
  7. Use the Result output to display the summary or feed it into another component (a sample parsed result is sketched below).
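With the inputs above, the parsed Result might look like the following. This is a hypothetical response; the exact keys depend entirely on how the model structures its answer:

    result = {
        "summary": "Sales grew steadily, rising from 120 units in Q1 "
                   "to 210 units in Q3."
    }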

Related Components

  • LLM – Choose the language model you want to use.
  • Chat Prompt – Build custom chat prompts outside of this component.
  • JSON Output Parser – Convert raw LLM text into JSON (used internally by Basic Chain).

Tips and Best Practices

  • Keep the System Prompt short and clear; it sets the LLM’s behavior.
  • If you need the full answer in one piece, check Disable Streaming; leave it unchecked only when a token‑by‑token stream is acceptable.
  • Always validate the JSON output; if the LLM returns plain text, the component still passes that text through (see the sketch after this list).
  • Use the Runnable output to debug or reuse the chain in other parts of your workflow.
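A minimal validation pattern, assuming the Result value has landed in a variable named result (a placeholder, not a real API):

    import json

    if isinstance(result, (dict, list)):
        data = result                  # already parsed JSON
    else:
        try:
            data = json.loads(result)  # text that may still be valid JSON
        except json.JSONDecodeError:
            data = {"text": result}    # plain-text fallback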

Security Considerations

  • Data sent to the LLM may leave your local environment, so avoid including sensitive personal information unless you’re sure the LLM provider complies with your security policies.
  • Review the LLM’s privacy policy to understand how data is stored and used.