Amazon Bedrock
Amazon Bedrock is a component that lets you generate text or embeddings using Amazon’s Bedrock language models. It connects to the Bedrock API, sends your prompt, and returns the model’s response. You can also stream the response for chat‑style interactions.
How it Works
When you add the Amazon Bedrock component to a workflow, it first looks for a credential of type Amazon Bedrock API that you have set up in Nappai’s credential manager. Once the credential is selected, the component creates a session with AWS using the provided access key and secret key, or a named profile. It then calls the Bedrock runtime service with the chosen model ID and any additional parameters you supply. The response is returned as a simple text message, and the component also exposes the underlying language model object for further use in the workflow.
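The flow described above can be sketched with boto3, the AWS SDK for Python. This is an illustration of what the component does internally, not Nappai’s actual implementation; the function name `invoke_bedrock` and its parameters are assumptions for the example.

```python
def build_messages(prompt: str) -> list:
    """Shape a single user prompt into the Bedrock Converse messages format."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def invoke_bedrock(prompt, model_id, region="us-east-1",
                   profile=None, endpoint_url=None):
    """Send one prompt to a Bedrock model and return the text reply."""
    import boto3  # AWS SDK; imported lazily so build_messages stays usable offline
    session = boto3.Session(profile_name=profile, region_name=region)
    client = session.client("bedrock-runtime", endpoint_url=endpoint_url)
    response = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
    )
    return response["output"]["message"]["content"][0]["text"]
```

Leaving `endpoint_url` as `None` falls back to the default AWS endpoint for the region, which matches the behavior of the Endpoint URL input described below.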
Inputs
Mapping Mode
This component has a special mode called Mapping Mode. When you enable this mode using the toggle switch, an additional input called Mapping Data is activated, and each input field offers you three different ways to provide data:
- Fixed: You type the value directly into the field.
- Mapped: You connect the output of another component to use its result as the value.
- Javascript: You write Javascript code to dynamically calculate the value.
This flexibility allows you to create more dynamic and connected workflows.
Input Fields
The following fields are available to configure this component. Some fields are only visible in certain operations or modes:
- Endpoint URL: The full URL of the Bedrock endpoint you want to use. Leave blank to use the default AWS endpoint for the selected region.
- Input: The text prompt or data you want the model to process. This is the main content that will be sent to the model.
- Mapping Mode: Toggle to enable batch processing of multiple records. When on, you can map inputs from a list of data.
- Model ID: The identifier of the Bedrock model you want to use (e.g., `anthropic.claude-3-haiku-20240307-v1:0`). Choose the model that best fits your task.
- Model Kwargs: Optional dictionary of additional keyword arguments to pass to the model (e.g., temperature, max tokens). Use this to fine‑tune the model’s behavior.
- Stream: If checked, the component streams the model’s response back to the workflow. Streaming is only supported for chat‑style models.
- System Message: A system‑level instruction that is sent to the model to guide its behavior (e.g., “You are a helpful assistant”).
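As a rough illustration of how Model Kwargs could map onto a Bedrock request, the sketch below translates common snake_case options into the camelCase keys the Bedrock Converse API expects in its `inferenceConfig`. Nappai’s internal mapping may differ; the helper name and the set of supported keys are assumptions.

```python
def build_inference_config(model_kwargs: dict) -> dict:
    """Translate common Model Kwargs into the Bedrock Converse
    inferenceConfig shape (Bedrock uses camelCase: maxTokens, topP)."""
    key_map = {
        "temperature": "temperature",
        "max_tokens": "maxTokens",
        "top_p": "topP",
        "stop": "stopSequences",
    }
    # Unknown keys are dropped here; a real component might pass them through
    # or raise a validation error instead.
    return {key_map[k]: v for k, v in model_kwargs.items() if k in key_map}
```

For example, `{"temperature": 0.2, "max_tokens": 512}` becomes `{"temperature": 0.2, "maxTokens": 512}`.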
Credential
This component requires a credential of type Amazon Bedrock API.
- First, configure the credential in Nappai’s Credentials section.
- Then, select that credential in the component’s Credential field.
The credential must contain the AWS Access Key ID, AWS Secret Access Key, Credentials Profile Name, and AWS Region. These fields are not shown in the component’s input list.
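A small sketch of how those credential fields could be turned into a boto3 session, assuming the common AWS convention that a named profile takes precedence over explicit keys. The helper name `session_kwargs` is illustrative.

```python
def session_kwargs(access_key=None, secret_key=None,
                   profile=None, region="us-east-1"):
    """Choose the keyword arguments for boto3.Session from the credential
    fields: a named profile wins over explicit keys (an assumed precedence)."""
    if profile:
        return {"profile_name": profile, "region_name": region}
    return {
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
        "region_name": region,
    }

# Usage (requires boto3 and valid credentials):
#   import boto3
#   session = boto3.Session(**session_kwargs(profile="bedrock-dev"))
```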
Outputs
- Text: The text response from the Bedrock model. This can be used directly in downstream components or displayed in the dashboard.
- Model: The underlying language model object. This can be passed to other components that accept a language model, such as prompt generators or chain builders.
Usage Example
- Add the Amazon Bedrock component to your workflow.
- Select the credential you created earlier.
- Choose a model (e.g., `anthropic.claude-3-haiku-20240307-v1:0`).
- Enter a prompt in the Input field, such as “Explain quantum computing in simple terms.”
- (Optional) Enable streaming if you want the response to appear incrementally.
- Run the workflow. The component will return the generated text in the Text output, which you can then feed into a display component or another LLM.
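If you enable streaming, the response arrives as a sequence of events rather than one message. A minimal sketch of consuming a Bedrock Converse stream, assuming a boto3 `bedrock-runtime` client; the function name `stream_reply` is illustrative.

```python
def stream_reply(client, model_id, prompt):
    """Yield text chunks from a Bedrock Converse streaming response.
    `client` is a boto3 'bedrock-runtime' client."""
    stream = client.converse_stream(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    for event in stream["stream"]:
        # Text arrives inside contentBlockDelta events; other event types
        # (messageStart, messageStop, metadata) are skipped here.
        delta = event.get("contentBlockDelta", {}).get("delta", {})
        if "text" in delta:
            yield delta["text"]
```

Downstream components that display the output can consume the chunks as they arrive, which is what makes streaming feel responsive in chat‑style workflows.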
Related Components
- OpenAI Model – Generate text using OpenAI’s GPT models.
- Azure OpenAI Model – Access Azure’s OpenAI service.
- Google Gemini Model – Use Google’s Gemini language models.
- Claude Model – Direct integration with Anthropic’s Claude models.
Tips and Best Practices
- Choose the right model: Smaller models are cheaper and faster, while larger models offer higher quality.
- Use streaming for chat: Enable the Stream option when building conversational agents to improve user experience.
- Set sensible temperature: Lower temperatures (e.g., 0.2) produce more deterministic outputs; higher temperatures (e.g., 0.8) add creativity.
- Validate credentials: Double‑check that your AWS keys and region are correct to avoid authentication errors.
- Leverage mapping mode: When processing lists of prompts, enable Mapping Mode to handle each item automatically.
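Conceptually, Mapping Mode applies the component once per record in a list. A toy sketch of that pattern, assuming a single‑prompt `invoke` function like the ones above; this is not Nappai’s actual batch machinery.

```python
def run_batch(invoke, prompts):
    """Apply a single-prompt invoke function to each item in a list,
    collecting (prompt, reply) pairs - roughly what Mapping Mode
    does for each mapped record."""
    return [(prompt, invoke(prompt)) for prompt in prompts]
```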
Security Considerations
- Keep your AWS credentials secure and never expose them in public repositories or logs.
- Use Nappai’s credential manager to store secrets; the component will automatically retrieve them at runtime.
- If you enable streaming, be aware that the response is sent in real time; ensure your downstream components can handle partial data.