Basic Chain
The Basic Chain component lets you send a prompt and some data to a language model (LLM) and get back a structured answer. It’s useful when you want the LLM to process information and return a JSON‑formatted response that can be used in the rest of your workflow.
How it Works
- Prompt Setup: You provide a System Prompt (instructions for the model) and a Prompt (the question or task).
- Chain Creation: The component combines the system and user prompts into a single chat prompt.
- LLM Call: It sends this prompt to the chosen LLM.
- JSON Parsing: The LLM’s reply is parsed as JSON, so the output is a clean, machine‑readable object.
- Result: The parsed JSON (or the raw text, if parsing fails) is returned as the component’s result.
No external APIs are called directly from this component; it relies on the LLM you connect to.
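The flow above can be sketched in plain Python. This is a minimal illustration, not the component’s actual implementation: `fake_llm` stands in for whatever model you connect, and the function names are assumptions made for the example.

```python
import json

def fake_llm(messages):
    # Stand-in for the connected model; a real call would go to the
    # provider's API and return the assistant's reply as a string.
    return '{"summary": "Sales rose 12% quarter over quarter."}'

def basic_chain(system_prompt, prompt, input_data, llm=fake_llm):
    # Chain creation: combine the system and user prompts into one chat prompt.
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": f"{prompt}\n\n{input_data}"},
    ]
    reply = llm(messages)          # LLM call
    try:
        return json.loads(reply)   # JSON parsing
    except json.JSONDecodeError:
        return reply               # fall back to plain text

result = basic_chain(
    "You are a helpful assistant that returns answers in JSON format.",
    "Summarize the following sales data in a short paragraph.",
    "Q1: 100, Q2: 112",
)
print(result["summary"])
```

Swapping `fake_llm` for a real model call is the only change needed to turn this sketch into a live chain; everything else (prompt assembly, parsing, fallback) is the same shape the component follows internally.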
Inputs
Input Fields
- Model: The language model you want to use (e.g., GPT‑4).
- Disable Streaming: Turn off streaming so the LLM returns the full answer at once.
- Input Data: Any data you want the LLM to consider when answering.
- Prompt: The main question or instruction you give to the LLM.
- System Prompt: Background instructions that set the tone or rules for the LLM.
Outputs
- Result: The processed answer from the LLM, parsed as JSON or plain text.
- Runnable: The internal chain object that can be reused or inspected if needed.
Usage Example
- Add the Basic Chain component to your workflow.
- Connect a Model component (e.g., “OpenAI GPT‑4”) to the Model input.
- Set the System Prompt to something like:
You are a helpful assistant that returns answers in JSON format.
- Set the Prompt to:
Summarize the following sales data in a short paragraph.
- Provide Input Data (e.g., a table of sales figures).
- Run the workflow.
- Use the Result output to display the summary or feed it into another component.
Related Components
- LLM – Choose the language model you want to use.
- Chat Prompt – Build custom chat prompts outside of this component.
- JSON Output Parser – Convert raw LLM text into JSON (used internally by Basic Chain).
Tips and Best Practices
- Keep the System Prompt short and clear; it sets the LLM’s behavior.
- If you want the complete answer delivered in one response rather than streamed in chunks, check Disable Streaming.
- Always validate the JSON output; if the LLM’s reply isn’t valid JSON, the component falls back to returning the raw text instead.
- Use the Runnable output to debug or reuse the chain in other parts of your workflow.
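Because the Result can be either parsed JSON or a plain-text fallback, downstream code should check which it received before indexing into it. A minimal sketch, assuming a field named `summary` (the field name and helper are illustrative, not part of the component’s API):

```python
def use_result(result):
    # Result is a dict when the LLM's reply parsed as JSON,
    # otherwise it is the raw reply text.
    if isinstance(result, dict):
        return result.get("summary", "")  # "summary" is an assumed field
    return result  # plain-text fallback: pass the text through as-is

print(use_result({"summary": "Sales are up."}))  # parsed-JSON case
print(use_result("Sales are up."))               # plain-text case
```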
Security Considerations
- Data sent to the LLM may leave your local environment, so avoid including sensitive personal information unless you’re sure the LLM provider complies with your security policies.
- Review the LLM’s privacy policy to understand how data is stored and used.