Retrieval QA
⚠️ DEPRECATION WARNING
This component is deprecated and will be removed in a future version of Nappai. Please migrate to the recommended alternative components.
Retrieval QA lets you ask a question and get an answer that is generated by a language model after searching through a set of documents. It combines a language model, a retriever that pulls relevant documents, and optional memory to keep context between questions.
How it Works
When you run the component, it first uses the Retriever to find documents that match your question. Those documents are then fed into the Model (a language model) along with the question. The model produces an answer. If you enable Return Source Documents, the component also shows which documents were used to build the answer, so you can see the evidence behind the response. The optional Memory can store previous questions and answers, allowing the model to keep track of the conversation.
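This flow can be sketched in plain Python. The retriever and model below are simplified stand-ins, not Nappai components, but the three steps mirror the description above:

```python
# Simplified sketch of the Retrieval QA flow.
# The retriever and model are stand-ins, not Nappai components.

def retrieve(question, documents):
    """Keyword retriever: return documents sharing a word with the question."""
    words = set(question.lower().split())
    return [d for d in documents if words & set(d.lower().split())]

def generate(question, context):
    """Stand-in for the language model call."""
    return f"Answer to {question!r} using {len(context)} document(s)"

def retrieval_qa(question, documents, return_source_documents=False):
    sources = retrieve(question, documents)   # 1. find matching documents
    answer = generate(question, sources)      # 2. generate the answer
    if return_source_documents:               # 3. optionally expose evidence
        return {"text": answer, "source_documents": sources}
    return {"text": answer}

docs = ["the cat sat on the mat", "stock prices rose sharply"]
result = retrieval_qa("where did the cat sit?", docs,
                      return_source_documents=True)
```

With Return Source Documents enabled, the result carries both the answer and the documents it was built from, which is what makes the evidence auditable.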
Inputs
- Model: The language model that will generate the answer.
- Memory: Optional memory that stores past interactions to provide context.
- Retriever: The component that searches your data sources and returns relevant documents.
- Chain Type: The strategy used to feed the retrieved documents to the model. Options are Stuff, Map Reduce, Refine, and Map Rerank.
- Input: The question or prompt you want the model to answer.
- Return Source Documents: If checked, the component will also return the documents that were used to generate the answer.
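The Chain Type options differ mainly in how the retrieved documents reach the model. A rough sketch of the two most common strategies, where `llm` is a hypothetical stand-in for the model call:

```python
# Rough sketch of two Chain Type strategies. `llm` is a hypothetical
# stand-in for a call to the language model.

def llm(prompt):
    return f"LLM answer ({len(prompt)} prompt chars)"

def stuff(question, docs):
    # Stuff: put every retrieved document into a single prompt.
    # Simple and cheap, but breaks once the combined text no longer
    # fits in the model's context window.
    return llm("\n".join(docs) + "\n" + question)

def map_reduce(question, docs):
    # Map Reduce: ask the model about each document separately ("map"),
    # then combine the partial answers in one final call ("reduce").
    # More model calls, but scales to large collections.
    partial = [llm(doc + "\n" + question) for doc in docs]
    return llm("\n".join(partial) + "\n" + question)

# Refine walks the documents one by one, revising the answer at each step;
# Map Rerank answers per document and keeps the highest-scoring answer.
```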
Outputs
- Text: The answer produced by the language model (type: Message).
- Runnable: A ready‑to‑run chain that can be executed later (type: Runnable).
Usage Example
- Add a Retriever that pulls from your company policy PDFs.
- Add a Model such as OpenAI GPT‑4.
- Connect the Retriever and Model to the Retrieval QA component.
- In the Input field, type “What are the vacation policies for full‑time employees?”
- Check Return Source Documents if you want to see the policy sections that were used.
- Run the workflow. The component will output the answer and, if requested, the source documents.
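In code, the workflow above could be sketched roughly as follows. The document store, retriever, and memory here are hypothetical stand-ins, not Nappai APIs:

```python
# End-to-end sketch of the example workflow, including optional Memory.
# All names here are hypothetical stand-ins, not Nappai components.

documents = {
    "policy.pdf#vacation": "Full-time employees receive 20 vacation days per year.",
    "policy.pdf#expenses": "Expense receipts must be filed within 30 days.",
}
memory = []  # stores (question, answer) pairs between runs

def retrieve(question):
    words = set(question.lower().split())
    return [name for name, text in documents.items()
            if words & set(text.lower().split())]

def ask(question, return_source_documents=False):
    sources = retrieve(question)
    # A real model would receive the memory, the source texts, and the
    # question; this stand-in only reports what it was given.
    answer = f"Answer built from {len(sources)} source(s), {len(memory)} prior turn(s)"
    memory.append((question, answer))
    return (answer, sources) if return_source_documents else answer

answer, used = ask("What are the vacation policies for full-time employees?",
                   return_source_documents=True)
```

Because each call appends to `memory`, a follow-up question can see the earlier turns, which is what the optional Memory input provides in the visual workflow.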
Related Components
- LLM – Choose the language model you want to use.
- Retriever – Define how documents are fetched (e.g., vector search, keyword search).
- Memory – Store conversation history for context.
- Chain – Build custom chains for more complex logic.
Tips and Best Practices
- Pick the Chain Type that matches your data size: Stuff for small sets that fit in the model's context window, Map Reduce for large collections.
- Enable Return Source Documents when you need auditability or want to show evidence to users.
- Use Memory to keep context across multiple questions, especially in a chat scenario.
- Keep the Input concise; long, complex questions can confuse the model.
Security Considerations
- The text you send to the language model may leave your local environment, so ensure you comply with your organization’s data‑handling policies.
- If you work with sensitive documents, consider a private LLM deployment or encrypting the content before it is indexed for retrieval.
- Review the Retriever configuration to avoid exposing confidential data to external services.