LLMCheckerChain
⚠️ DEPRECATION WARNING
This component is deprecated and will be removed in a future version of Nappai. Please migrate to the recommended alternative components.
LLMCheckerChain lets you ask a question and receive an answer that the system checks for accuracy before you use it. It’s a handy tool when you need the AI to double‑check its own response.
How it Works
LLMCheckerChain is built on LangChain's LLMCheckerChain. When you run it, the component sends the text you provide (the Input) to the language model you selected (the Model). The model generates an answer, and the chain then verifies that answer against the original question. The verified result is returned as a simple text message. All of this happens locally in the dashboard; the only external service involved is the language model you choose.
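Under the hood, the chain follows LangChain's self-checking pattern: draft an answer, list the assumptions behind it, check each assumption, and return a revised answer. Here is a minimal sketch of the equivalent LangChain code, assuming the langchain and langchain-openai packages and an OpenAI API key (this mirrors the pattern, not the component's exact internals):

```python
from langchain.chains import LLMCheckerChain
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)  # any LangChain-compatible LLM should work here

# Build the checker chain: it drafts an answer, lists the assumptions
# behind it, checks each assumption, and returns a revised answer.
checker_chain = LLMCheckerChain.from_llm(llm, verbose=True)

result = checker_chain.invoke({"query": "What is the capital of France?"})
print(result["result"])  # the verified answer, e.g. "Paris"
```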
Inputs
- Model: The language model you want to use (e.g., OpenAI GPT‑4).
- Input: The question or prompt you want the model to answer.
Outputs
- Text: The verified answer, returned as a Message.
- Runnable: A reusable Runnable object that can be connected to other components in your workflow.
Usage Example
- Add the component to your workflow.
- Select a model in the Model field (e.g., “OpenAI GPT‑4”).
- Enter a question in the Input field, such as: What is the capital of France?
- Run the workflow.
- Read the result in the Text output; for the question above, the verified answer is Paris.
If you want to chain this component with others, drag the Runnable output into the next component’s input. This lets you build more complex pipelines that start with a verified answer.
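For reference, here is a hedged sketch of the same chaining done in LangChain code rather than in the dashboard; the uppercase post-processing step is purely illustrative:

```python
from langchain.chains import LLMCheckerChain
from langchain_core.runnables import RunnableLambda
from langchain_openai import OpenAI

checker_chain = LLMCheckerChain.from_llm(OpenAI(temperature=0))

# Chain objects implement LangChain's Runnable interface, so the checker
# chain composes with downstream runnables via the | operator.
uppercase = RunnableLambda(lambda out: out["result"].upper())
pipeline = checker_chain | uppercase

print(pipeline.invoke({"query": "What is the capital of France?"}))  # e.g. "PARIS"
```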
Related Components
- LLMChainComponent – A modern alternative for simple question‑answering without built‑in verification.
- LLMCheckerChain (legacy) – The older version of this component, now deprecated.
- LLMChain – The core LangChain chain for sending prompts to an LLM.
Tips and Best Practices
- Choose a reliable LLM: The quality of the verification depends on the model’s accuracy.
- Keep prompts concise: Shorter inputs reduce the chance of misinterpretation.
- Handle errors gracefully: If the model fails to verify, the component returns an empty string; add a fallback component to manage such cases (see the sketch after this list).
- Use the Runnable output: Reusing the chain as a Runnable saves time when you need the same verification logic in multiple places.
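For the error-handling tip above, a minimal sketch of an empty-result fallback on the LangChain side (the placeholder message and the with_default helper are illustrative assumptions):

```python
from langchain.chains import LLMCheckerChain
from langchain_core.runnables import RunnableLambda
from langchain_openai import OpenAI

checker_chain = LLMCheckerChain.from_llm(OpenAI(temperature=0))

def with_default(out: dict) -> str:
    # Substitute a placeholder when verification yields no answer.
    return out.get("result") or "No verified answer was produced."

safe_chain = checker_chain | RunnableLambda(with_default)
print(safe_chain.invoke({"query": "What is the capital of France?"}))
```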
Security Considerations
- Data privacy: The text you send to the LLM is transmitted to the provider’s servers. Ensure that the provider complies with your organization’s data‑handling policies.
- Model access control: Restrict who can configure the Model input to prevent unauthorized use of expensive or sensitive LLMs.
- Audit logs: Enable logging for the component to keep a record of inputs and outputs for compliance purposes.