Scorer
The Scorer component turns any piece of text or data into a clear numeric rating. Given a set of items and a set of scoring rules, it returns a score between 0 and 100 for each item. You can also ask the AI to explain why it gave each score.
How it Works
The Scorer uses an AI language model (the “Model” input) to read the data you provide.
- Chunking – If the data is long, it is split into smaller pieces (chunks). You can control how big each chunk is and how much they overlap.
- Scoring – The AI evaluates each chunk against the criteria you provide.
- Aggregation – Scores from all chunks are combined into a single score for the whole item.
- Justification (optional) – If you tick “Include Justification”, the AI also writes a short explanation for the score.
No external APIs are called beyond the language model you supply, so everything runs locally or in your chosen cloud environment.
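The steps above can be sketched in a few lines of Python. This is a minimal illustration, not the component's actual implementation: `score_chunk` stands in for a call to whatever Model you connect, and a simple average stands in for the aggregation step.

```python
from statistics import mean


def score_item(text, criteria, score_chunk, chunk_size=1000, chunk_overlap=200,
               max_chunks=None):
    """Chunk the text, score each chunk against the criteria, and aggregate
    the per-chunk scores into one 0-100 score for the whole item."""
    # Consecutive chunks start (chunk_size - chunk_overlap) characters apart.
    step = chunk_size - chunk_overlap
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]
    if max_chunks is not None:
        chunks = chunks[:max_chunks]
    # score_chunk(chunk, criteria) -> float is the model call (a stand-in here).
    return mean(score_chunk(chunk, criteria) for chunk in chunks)
```

With a model wired in, `score_chunk` would prompt it with the chunk plus the criteria and parse a 0–100 number from the reply; the averaging shown here is one reasonable aggregation choice, not necessarily the one the component uses.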
Inputs
- Data: The items you want to score. This can be plain text, structured data, or a message object.
- Model: The language model that will perform the scoring.
- Additional Context: Optional extra instructions or background that helps the AI understand the items better.
- Chunk Overlap: Number of characters that overlap between adjacent chunks when the text is split.
- Chunk Size: Size of each chunk in characters.
- Criteria: The rules that define what a good score looks like. For example, “Clarity of the message: clear=high score, unclear=low score”.
- Include Justification: Check this if you want the AI to explain its score.
- Max Chunks: The maximum number of chunks the component will process.
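How chunk size, chunk overlap, and max chunks interact can be seen in a small sketch (illustrative only; the component's actual splitter may differ in edge cases such as the final short chunk):

```python
def chunk_text(text, chunk_size, chunk_overlap, max_chunks=None):
    # Consecutive chunks start (chunk_size - chunk_overlap) characters apart,
    # so each chunk repeats the last chunk_overlap characters of the previous one.
    step = chunk_size - chunk_overlap
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), step)]
    return chunks[:max_chunks] if max_chunks is not None else chunks


print(chunk_text("abcdefghij", chunk_size=4, chunk_overlap=2))
# -> ['abcd', 'cdef', 'efgh', 'ghij', 'ij']
```

Note that a larger overlap means more chunks (and more model calls) for the same text, so Chunk Overlap should stay well below Chunk Size.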
Outputs
- Score: A list of Data objects, each containing the score (0‑100) and, if requested, a justification.
- Tool: A reusable tool that can be called by other components or agents to perform the same scoring operation.
Usage Example
- Goal: Rate the clarity of customer support emails.
- Setup:
  - Drag the Scorer component into your workflow.
  - Connect the output of a “Text Extractor” (which pulls email bodies) to the Data input.
  - Choose a GPT‑4 model for the Model input.
  - In Criteria, type: “Clarity of the message: clear=high score, unclear=low score”
  - Leave Additional Context empty.
  - Set chunk size to 1200 and chunk overlap to 200.
  - Check Include Justification.
- Run: The component will return a score for each email and a short explanation, which you can feed into a dashboard or a reporting tool.
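Downstream handling of the results can be sketched as below. The field names (`score`, `justification`) are assumptions about the Data objects' shape for illustration, not the component's exact schema.

```python
# Illustrative post-processing of the Scorer's output, assuming each Score
# entry behaves like a dict with a numeric "score" and a "justification" string.
results = [
    {"score": 92, "justification": "Clear subject line and explicit next steps."},
    {"score": 41, "justification": "Ambiguous request; no deadline given."},
]

# Flag low-scoring emails for manual review before they reach the dashboard.
needs_review = [r for r in results if r["score"] < 60]
for r in needs_review:
    print(f"score={r['score']}: {r['justification']}")
```

The same list can just as easily be exported to CSV or pushed to a reporting tool.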
Related Components
- Text Analyzer – Breaks text into key points before scoring.
- Data Validator – Ensures the input data meets required formats.
- Score Aggregator – Combines scores from multiple Scorer components.
Tips and Best Practices
- Keep chunk size moderate (800‑1500 characters) to balance speed and context.
- Use Max chunks to limit processing time on very large documents.
- If you only need a quick numeric rating, leave Include Justification unchecked to save compute.
- Test your Criteria with a few sample items first; small wording changes can shift scores dramatically.
Security Considerations
- The Scorer sends all input data to the language model you provide. If you use a cloud‑based model, ensure your data complies with privacy regulations.
- For sensitive data, consider running the component on a local or on‑premises model.
- The component does not store any data after execution, but logs may contain the raw input if you enable verbose logging.