Anthropic
The Anthropic component lets you send questions or prompts to an Anthropic language model and receive a text answer. It’s a simple “ask‑the‑model” block that can be dropped into any workflow to generate natural‑language responses, provide a language‑model object to other components, or process records in batch mode.
How it Works
When you add the Anthropic block, it connects to the Anthropic API using the API key you stored in Nappai’s credentials.
- The Model Name field selects which Anthropic model (e.g., Claude‑3.5‑Sonnet) you want to use.
- Max Tokens limits how long the answer can be.
- Temperature controls how creative the response is – lower values give more deterministic answers.
- Prefill lets you give the model a starting sentence or instruction.
- System Message can set a role or context for the model.
- Stream streams the answer back as it’s generated (useful for long responses).
- Thinking Budget is an advanced option that tells the model how many tokens it can “think” before producing the final answer.
The component builds a LangChain ChatAnthropic object, sends your prompt, and returns the generated text in the Text output. The Model output gives you the underlying language‑model object so you can reuse it in other parts of your workflow.
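For reference, here is a minimal sketch of the equivalent call outside Nappai, assuming the standard langchain_anthropic package (the model name, prompt, and parameter values are illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import SystemMessage, HumanMessage

# Mirrors the block's fields: Model Name, Max Tokens and Temperature.
llm = ChatAnthropic(
    model="claude-3-5-sonnet-20240620",
    max_tokens=200,
    temperature=0.2,
)

# System Message sets the context; Input is the user's prompt.
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Explain what a webhook is in one paragraph."),
]

response = llm.invoke(messages)
print(response.content)  # corresponds to the Text output
# The `llm` object itself corresponds to the Model output.
```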
Credentials
This component needs an Anthropic API credential.
- In Nappai, go to Credentials and create a new Anthropic API credential.
- Enter your Anthropic API key.
- In the Anthropic block, choose that credential from the Credential dropdown.
The key is never shown in the block’s UI – it’s securely stored in the credential.
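Outside the Nappai UI, the same principle applies: supply the key from secure storage rather than hard‑coding it. A minimal sketch, assuming the langchain_anthropic package and the standard ANTHROPIC_API_KEY environment variable:

```python
import os
from langchain_anthropic import ChatAnthropic

# ChatAnthropic reads the key from the ANTHROPIC_API_KEY environment variable;
# in Nappai, the stored credential plays the same role.
assert "ANTHROPIC_API_KEY" in os.environ, "Set the API key outside your code."

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")
```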
Inputs
Mapping Mode
This component has a special mode called Mapping Mode. When you enable this mode using the toggle switch, an additional input called Mapping Data is activated, and each input field offers you three different ways to provide data:
- Fixed: You type the value directly into the field.
- Mapped: You connect the output of another component to use its result as the value.
- Javascript: You write Javascript code to dynamically calculate the value.
This flexibility allows you to create more dynamic and connected workflows.
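Conceptually, Mapping Mode runs the same prompt logic once per record in Mapping Data. A rough sketch of the equivalent batch behaviour in code, assuming the langchain_anthropic package (the record texts are illustrative):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620", max_tokens=200)

# Each record supplies the value of the Input field; the model runs once per record.
records = [
    "Summarize ticket #101 in one sentence.",
    "Summarize ticket #102 in one sentence.",
]

answers = llm.batch(records)  # one response per record, like a Mapping Mode batch run
for record, answer in zip(records, answers):
    print(record, "->", answer.content)
```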
Input Fields
The following fields are available to configure this component. Some fields are only visible in certain operations:
- Anthropic API URL: Endpoint of the Anthropic API. Defaults to https://api.anthropic.com if not specified.
- Input: The prompt or question you want the model to answer.
- Mapping Mode: Enable mapping mode to process multiple data records in batch.
- Max Tokens: The maximum number of tokens to generate. Set to 0 for unlimited tokens.
- Model Name: The specific Anthropic model you want to use.
- Prefill: Prefill text to guide the model’s response.
- Stream: Stream the response from the model. Streaming works only in Chat (see the sketch after this list).
- System Message: System message to pass to the model.
- Temperature: Controls the randomness of the output.
- Thinking Budget: Indicates the thinking budget in tokens (0 means no thinking).
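The Stream option corresponds to receiving the answer token by token instead of as one final string. A small sketch, assuming the langchain_anthropic package (the prompt is illustrative):

```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620", max_tokens=200)

# With streaming, chunks arrive as they are generated rather than in one final message.
for chunk in llm.stream("List three benefits of workflow automation."):
    print(chunk.content, end="", flush=True)
```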
Outputs
- Text: The generated text from the Anthropic model.
- Model: The underlying language‑model object that can be reused by other components.
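To illustrate the difference between the two outputs, here is a hedged sketch of reusing the Model output in a downstream step, assuming the langchain_anthropic and langchain_core packages (the translation prompt is illustrative):

```python
from langchain_anthropic import ChatAnthropic
from langchain_core.prompts import ChatPromptTemplate

llm = ChatAnthropic(model="claude-3-5-sonnet-20240620")  # this object is the Model output

# A downstream component can reuse the same model behind a different prompt.
prompt = ChatPromptTemplate.from_template("Translate to French: {text}")
chain = prompt | llm
print(chain.invoke({"text": "Good morning"}).content)  # the chain's own text result
```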
Usage Example
- Create a credential: go to Credentials → Add → Anthropic API and enter your API key.
- Add the Anthropic block: drag the Anthropic block onto the canvas.
- Configure the block:
  - Select the credential you just created.
  - Choose a model (e.g., claude-3-5-sonnet-20240620).
  - Set Max Tokens to 200.
  - Set Temperature to 0.2 for a more focused answer.
  - Optionally add a System Message like “You are a helpful assistant.”
  - Connect the Input field to a text component or a variable that holds your question.
- Run the workflow:
  - The block will call the Anthropic API and output the answer in the Text field.
  - If you need the model object for another component, use the Model output.
Related Components
- OpenAI – Similar block for OpenAI models.
- LLM Prompt – Build prompts that feed into any language‑model component.
- Mapping – Use mapping to run the same prompt over many records.
Tips and Best Practices
- Keep Max Tokens reasonable to avoid long wait times.
- Use Temperature between 0.1 and 0.3 for reliable, factual answers.
- Enable Stream only when you need real‑time feedback for long responses.
- If you’re processing many records, turn on Mapping Mode and feed a list of prompts into the block.
- Store your API key in a credential; never type it directly into the block.
Security Considerations
- The API key is stored in a credential and never exposed in the UI.
- Ensure that only trusted users have access to the credential.
- Use the Stream option only if your network can handle real‑time data transfer.