NVIDIA
The NVIDIA component lets you ask questions or give prompts to an NVIDIA language model and get a text answer back. It works inside Nappai’s visual workflow builder, so you can drop it into a flow, connect it to other components, and let the model do the heavy lifting.
How it Works
When you add the NVIDIA component, Nappai creates a connection to NVIDIA’s AI endpoint.
- The component sends your prompt (the Input field) to the chosen model (e.g., mistralai/mixtral-8x7b-instruct-v0.1).
- It uses the NVIDIA API Key you stored in the credentials section to authenticate.
- The model returns a text response, which the component outputs as a Message.
- If you need the underlying model object for other parts of the workflow, the component also exposes a LanguageModel output.
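Under the hood this is essentially a chat-completions request against NVIDIA's OpenAI-compatible endpoint. The sketch below is only an illustration of that request/response cycle, using the official `openai` Python client; the placeholder API key and prompt are examples, not Nappai's actual code.

```python
# Minimal sketch of the cycle described above, assuming NVIDIA's
# OpenAI-compatible endpoint and the official "openai" Python client.
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # NVIDIA Base URL field
    api_key="YOUR_NVIDIA_API_KEY",                   # normally stored in the credential, never typed into the component
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct-v0.1",    # Model Name field
    messages=[{"role": "user", "content": "Explain quantum computing in simple terms."}],
)

print(response.choices[0].message.content)           # this text becomes the Text (Message) output
```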
Inputs
Mapping Mode
This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:
- Fixed: You type the value directly into the field.
- Mapped: You connect the output of another component to use its result as the value.
- Javascript: You write Javascript code to dynamically calculate the value.
This flexibility allows you to create more dynamic and connected workflows.
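Conceptually, enabling Mapping Mode makes the component run once per record of the mapped data. The loop below is only an illustration of that behaviour: the record list and field names are made up, and `client` is the one created in the sketch above.

```python
# Conceptual illustration of Mapping Mode / batch processing: one request
# per mapped record. The records and field names here are hypothetical.
records = [
    {"question": "What is an LLM?"},
    {"question": "What is a token?"},
]

answers = []
for record in records:
    resp = client.chat.completions.create(
        model="mistralai/mixtral-8x7b-instruct-v0.1",
        messages=[{"role": "user", "content": record["question"]}],  # Mapped value per record
    )
    answers.append(resp.choices[0].message.content)

print(answers)  # one response per input record
```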
Input Fields
- NVIDIA Base URL: The base URL of the NVIDIA API. Defaults to https://integrate.api.nvidia.com/v1.
- Input: The text prompt you want the model to respond to.
- Mapping Mode: Enable mapping mode to process multiple data records in batch.
- Max Tokens: The maximum number of tokens to generate. Set to 0 for unlimited tokens.
- Model Name: Choose which NVIDIA model to use.
- Seed: A fixed random seed that makes results reproducible across runs with the same inputs.
- Stream: Stream the response from the model. Streaming works only in Chat.
- System Message: System message to pass to the model.
- Temperature: Controls how creative the model’s responses are.
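As a rough guide, these fields correspond to parameters of the chat-completions request. The mapping below is an assumption based on the OpenAI-compatible API NVIDIA exposes, not a copy of Nappai's internals; the variable names (`system_message`, `input_text`, and so on) are placeholders for the field values, and `client` comes from the earlier sketch.

```python
# Hypothetical mapping from the component's input fields to request parameters.
system_message = "You are a concise assistant."      # System Message
input_text = "Explain quantum computing simply."     # Input
model_name = "mistralai/mixtral-8x7b-instruct-v0.1"  # Model Name
temperature = 0.2                                    # Temperature
seed = 42                                            # Seed
stream = False                                       # Stream (chat only)
max_tokens = 512                                     # Max Tokens (0 = no explicit limit)

messages = []
if system_message:
    messages.append({"role": "system", "content": system_message})
messages.append({"role": "user", "content": input_text})

params = {
    "model": model_name,
    "messages": messages,
    "temperature": temperature,
    "seed": seed,
    "stream": stream,
}
if max_tokens > 0:                                   # 0 means "don't cap the output"
    params["max_tokens"] = max_tokens

response = client.chat.completions.create(**params)  # "client" from the earlier sketch
```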
Credential
This component requires a credential of type NVIDIA API.
- First, configure the NVIDIA API credential in the Credentials section of Nappai.
- Then, select that credential in the Credential field of the component.
The credential stores the NVIDIA API Key securely; you do not need to enter it here.
Outputs
- Text: A Message containing the model's text response (method: text_response).
- Model: A LanguageModel object that can be reused by other components (method: build_model).
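To give a feel for the difference between the two outputs, here is a hedged sketch using the langchain-nvidia-ai-endpoints package; Nappai's actual LanguageModel wrapper may differ, and the function names below simply mirror the documented method names.

```python
# Sketch of the two outputs, assuming a LangChain-style NVIDIA chat model.
# Requires the langchain-nvidia-ai-endpoints package and an NVIDIA_API_KEY
# environment variable; Nappai's internal wrapper may look different.
from langchain_nvidia_ai_endpoints import ChatNVIDIA

def build_model() -> ChatNVIDIA:
    # "Model" output: a reusable LanguageModel object for other components
    return ChatNVIDIA(
        model="mistralai/mixtral-8x7b-instruct-v0.1",
        base_url="https://integrate.api.nvidia.com/v1",
        temperature=0.2,
    )

def text_response(prompt: str) -> str:
    # "Text" output: only the model's reply as plain text
    llm = build_model()
    return llm.invoke(prompt).content

print(text_response("Explain quantum computing in simple terms."))
```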
Usage Example
- Add the NVIDIA component to your flow.
- Set the Credential to the NVIDIA API key you created earlier.
- Choose a Model (e.g., mistralai/mixtral-8x7b-instruct-v0.1).
- Enter a Prompt in the Input field, such as “Explain quantum computing in simple terms.”
- Run the flow.
- The Text output will contain the model’s explanation, which you can feed into a Display component or another LLM for further processing.
Related Components
- OpenAI Model – Connects to OpenAI’s GPT models.
- Anthropic Model – Uses Anthropic’s Claude models.
- Chat – Handles conversational flows with multiple turns.
- Text Splitter – Splits large documents into smaller chunks before sending to an LLM.
Tips and Best Practices
- Choose the right model: Larger models (e.g., meta/llama3-70b-instruct) give more detailed answers but cost more.
- Adjust Temperature: Lower values (e.g., 0.1) make responses more deterministic; higher values (e.g., 0.8) add creativity.
- Use Mapping Mode for batch processing of many prompts at once.
- Set Max Tokens to control response length and cost.
- Seed is useful when you need reproducible results for testing.
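The Temperature, Seed, and Max Tokens tips combine naturally: a low temperature plus a fixed seed is the usual recipe for repeatable test runs. The check below reuses the `client` from the earlier sketch and is best-effort only, since hosted endpoints don't always guarantee byte-identical outputs.

```python
# Illustration of the reproducibility tip: fixed seed + low temperature.
def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="mistralai/mixtral-8x7b-instruct-v0.1",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.1,   # near-deterministic sampling
        seed=42,           # fixed seed for repeatable results
        max_tokens=200,    # cap length (and cost) of the answer
    )
    return resp.choices[0].message.content

first = ask("Explain quantum computing in simple terms.")
second = ask("Explain quantum computing in simple terms.")
print(first == second)  # usually True, but treat this as best-effort
```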
Security Considerations
- Keep your NVIDIA API Key confidential; store it only in the credentials section.
- Avoid exposing the key in logs or shared workflows.
- Use the Credential field to reference the key instead of hard‑coding it in the component.