Groq

Groq is a component that lets you generate text with Groq’s AI models directly from your Nappai dashboard. It connects to the Groq API, sends your prompt, and returns the model’s response. You can also use it to build a reusable language model that other components can call.

How it Works

When you add the Groq component, Nappai first asks you to select a Groq API credential. That credential stores the API key and base URL you need to talk to Groq’s servers.
After the credential is chosen, the component builds a connection to the Groq API using the settings you provide (model name, temperature, max tokens, etc.). When you run the workflow, the component sends your prompt to the API, receives the generated text, and outputs it as a message. If you enable streaming, the text arrives piece‑by‑piece so you can see the answer as it is being produced.
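Concretely, the text the component outputs corresponds to the `choices[0].message.content` field of a chat-completion response. A minimal sketch of that extraction (the response shape follows Groq's OpenAI-compatible API; `extract_text` is an illustrative helper, not a Nappai function):

```python
def extract_text(response_json):
    """Pull the generated message out of a chat-completion response
    (shape follows Groq's OpenAI-compatible API)."""
    return response_json["choices"][0]["message"]["content"]

# A trimmed-down example of what the API returns:
sample = {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
print(extract_text(sample))  # → Hello!
```

With streaming disabled, the whole response arrives in one JSON object like `sample` above; with streaming enabled, the text arrives in many small chunks instead (see the streaming sketch further below in Tips).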

Inputs

Mapping Mode

This component has a special mode called Mapping Mode. When you enable this mode using the toggle switch, an additional input called Mapping Data is activated, and each input field offers you three different ways to provide data:

  • Fixed: You type the value directly into the field.
  • Mapped: You connect the output of another component to use its result as the value.
  • JavaScript: You write JavaScript code to dynamically calculate the value.

This flexibility allows you to create more dynamic and connected workflows.

Input Fields

The following fields are available to configure this component; some fields only appear in certain operations:

  • Groq API Base: The base URL for the Groq API. Leave it blank if you are not using a proxy or emulator.
  • Groq API Key: The secret key that authenticates your requests to Groq.
  • Input: The text prompt you want the model to respond to.
  • Mapping Mode: Toggle to enable batch processing of multiple records.
  • Max Output Tokens: The maximum number of tokens the model can generate in one response.
  • Model: The specific Groq model you want to use (e.g., llama3-70b-8192).
  • N: How many different completions the model should produce for each prompt.
  • Stream: If checked, the model’s response will be streamed back in real time.
  • System Message: A message that sets the behavior or context for the model.
  • Temperature: Controls randomness; 0.0 is deterministic, 1.0 is more creative.
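
To make these settings concrete, here is a hypothetical sketch of how the fields above could map onto the JSON body of a Groq chat-completion request (the API field names `model`, `temperature`, `max_tokens`, `n`, and `stream` are real; the mapping helper itself is illustrative, not Nappai internals):

```python
# How the dashboard field names could translate into API field names.
FIELD_MAP = {
    "Model": "model",
    "Temperature": "temperature",
    "Max Output Tokens": "max_tokens",
    "N": "n",
    "Stream": "stream",
}

def to_request_body(settings, prompt, system_message=None):
    """Build a chat-completion request body from dashboard-style settings."""
    body = {FIELD_MAP[k]: v for k, v in settings.items() if k in FIELD_MAP}
    body["messages"] = ([{"role": "system", "content": system_message}]
                        if system_message else [])
    body["messages"].append({"role": "user", "content": prompt})
    return body

body = to_request_body(
    {"Model": "llama3-70b-8192", "Temperature": 0.1, "Max Output Tokens": 200},
    "Hello",
)
```

The Input field becomes the user message, and the System Message (when set) is sent as a separate system-role message ahead of it.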

Important: Before using this component, configure a Groq API credential in the Nappai credentials section and select that credential in the component’s Credential field. The credential stores the Groq API Base and Groq API Key, so you do not need to enter them again here.

Outputs

  • Text: The generated text from the Groq model, returned as a message that can be used by downstream components.
  • Model: A reusable language model object that other components can call to generate text without re‑configuring the connection.

Usage Example

  1. Create a Groq API credential

    • Go to Credentials → Add Credential → Groq API.
    • Enter your Groq API Base (e.g., https://api.groq.com) and Groq API Key.
  2. Add the Groq component to your workflow

    • Drag the component onto the canvas.
    • In the Credential field, select the Groq credential you just created.
    • Choose a model (e.g., llama3-70b-8192).
    • Set Temperature to 0.1 for a more factual response.
    • Set Max Output Tokens to 200.
    • Connect the Input field to a text component that supplies a prompt, such as “Summarize the following paragraph: …”.
  3. Run the workflow

    • The component sends the prompt to Groq, receives the summary, and outputs it in the Text field.
    • You can then feed that text into a Text to PDF component or display it on a dashboard.
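
For comparison, the same three steps can be sketched as a raw HTTP call against Groq's OpenAI-compatible endpoint (the endpoint path and field names follow the public API; `summarize_request` is an illustrative helper, and a real API key is needed to actually send the request):

```python
import json
import urllib.request

API_BASE = "https://api.groq.com"  # the Groq API Base from the credential

def summarize_request(paragraph, api_key, model="llama3-70b-8192"):
    """Build the HTTP request matching the example settings above."""
    request_body = {
        "model": model,
        "messages": [{"role": "user",
                      "content": f"Summarize the following paragraph: {paragraph}"}],
        "temperature": 0.1,   # more factual, as in step 2
        "max_tokens": 200,    # Max Output Tokens
    }
    return urllib.request.Request(
        f"{API_BASE}/openai/v1/chat/completions",
        data=json.dumps(request_body).encode(),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Sending it (requires a valid key):
# with urllib.request.urlopen(summarize_request(text, key)) as resp:
#     summary = json.load(resp)["choices"][0]["message"]["content"]
```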
Related Components

  • OpenAIModel – Connects to OpenAI’s GPT models.
  • AnthropicModel – Uses Anthropic’s Claude models.
  • AzureOpenAIModel – Calls Azure‑hosted OpenAI services.
  • TextSplitter – Splits long documents into chunks before sending them to a language model.

These components share a similar interface, so you can swap them out if you prefer a different provider.

Tips and Best Practices

  • Keep your API key secret – Never expose it in public workflows or share screenshots that include the key.
  • Use streaming for long responses – Enabling Stream lets users see the answer as it arrives, improving perceived speed.
  • Limit token usage – Set Max Output Tokens to a reasonable number to control cost and response length.
  • Test with small prompts first – Verify the model’s behavior before scaling up to large documents.
  • Leverage Mapping Mode – When processing many records, enable Mapping Mode to batch prompts and reduce API calls.
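
The streaming tip works because Groq streams responses as server-sent events, one `data: {...}` line per chunk, ending with a `data: [DONE]` sentinel (the format used by OpenAI-compatible APIs). A sketch of how a client reassembles the text:

```python
import json

def collect_stream(lines):
    """Concatenate the content deltas from the SSE lines of a streamed
    chat-completion response."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        data = line[len("data: "):]
        if data == "[DONE]":       # end-of-stream sentinel
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:     # first chunk may carry only the role
            parts.append(delta["content"])
    return "".join(parts)

sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # → Hello
```

In the dashboard you never see these raw chunks; the component stitches them together for you, which is why a streamed answer appears to "type itself out".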

Security Considerations

  • Store the Groq API key in a credential, not in the workflow itself.
  • Restrict access to the credential to only the users who need it.
  • Monitor API usage to detect any unexpected spikes that could indicate misuse.