Grok

Grok is a component that lets you generate text with Grok language models directly from your Nappai dashboard. It connects to xAI's Grok API, sends your prompt, and returns the model's response. You can also ask the model to output structured JSON data or stream the reply as it is produced.

How it Works

When you add the Grok component to a workflow, it first checks that you have a valid Grok API key stored in a credential. The component then builds a connection to the Grok API using the base URL you provide (https://api.x.ai/v1 by default).
When the workflow runs, the component sends the prompt you supply (the Input field) to the selected Grok model. You can choose which model to use from the Model Name dropdown; the component automatically pulls the list of available Grok models from the API if you have a key.
The response can be returned as plain text, or you can ask the model to produce JSON. If you enable JSON Mode or provide a Schema, the component will enforce that the output follows the specified structure. The Stream option lets you receive the reply in real time, which is useful for long responses.
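Conceptually, the request the component sends resembles an OpenAI-style chat completions call, which the `https://api.x.ai/v1` base URL suggests. The sketch below builds such a payload from the component's input fields; the function name and payload field names are assumptions based on that convention, not the component's verified internals.

```python
import json

def build_grok_request(prompt, model="grok-3-mini-beta", system_message=None,
                       temperature=0.1, max_tokens=0, stream=False):
    """Build an OpenAI-style chat completions payload for the Grok API.

    This is an illustrative sketch: field names follow the
    OpenAI-compatible convention, not Nappai's actual implementation.
    """
    messages = []
    if system_message:
        # The System Message input becomes the first message in the chat.
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})

    payload = {"model": model, "messages": messages,
               "temperature": temperature, "stream": stream}
    if max_tokens > 0:  # 0 means "unlimited" in the component UI
        payload["max_tokens"] = max_tokens
    return payload

req = build_grok_request("Explain quantum computing in simple terms.",
                         system_message="You are a concise tutor.")
print(json.dumps(req, indent=2))
```

The resulting payload would be POSTed to the chat completions endpoint under the configured base URL, with the credential's API key sent as a bearer token.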

Inputs

Mapping Mode

This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:

  • Fixed: You type the value directly into the field.
  • Mapped: You connect the output of another component to use its result as the value.
  • Javascript: You write Javascript code to dynamically calculate the value.

This flexibility allows you to create more dynamic and connected workflows.

Input Fields

  • Grok API Base: The base URL of the Grok API. Defaults to https://api.x.ai/v1. You can point this at other OpenAI-compatible endpoints such as JinaChat, LocalAI, or Prem.
  • Input: The text prompt you want the model to respond to.
  • JSON Mode: If True, the model will output JSON regardless of whether you provide a schema.
  • Mapping Mode: Enable mapping mode to process multiple data records in batch.
  • Max Tokens: The maximum number of tokens to generate. Set to 0 for unlimited tokens.
  • Model Kwargs: Additional keyword arguments to pass to the model (e.g., custom parameters).
  • Model Name: The name of the Grok model you want to use. This field is required.
  • Schema: The schema the model's output must follow. When using a schema, include the word JSON in the prompt. If no schema is provided and JSON Mode is off, the response is returned as plain text.
  • Seed: The seed controls the reproducibility of the job.
  • Stream: Stream the response from the model. Streaming works only in chat mode.
  • System Message: A system message to pass to the model to guide its behavior.
  • Temperature: Controls the randomness of the output. Lower values make the output more deterministic.

Outputs

  • Text: The raw text response from the model (method: text_response).
  • Model: The configured language model object that can be reused in other components (method: build_model).

Usage Example

  1. Create a Grok credential
    In the Nappai credentials section, add a new credential of type Grok API and paste your Grok API key.

  2. Add the Grok component
    Drag the Grok component onto the canvas.

    • Select the credential you created in the Credential field.
    • Choose a model from the Model Name dropdown (e.g., grok-3-mini-beta).
    • Enter a prompt in the Input field, such as “Explain quantum computing in simple terms.”
    • If you want structured data, enable JSON Mode and provide a simple schema like { "explanation": "string" }.
  3. Run the workflow
    Click “Run” and the component will call the Grok API, then output the text or JSON in the Text output. You can then feed that output into other components, such as a text summarizer or a database writer.
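Before feeding a JSON-mode reply into a downstream component such as a database writer, it can be worth checking that it actually matches the schema from step 2. The helper below is a hypothetical sketch of such a check for flat, simple schemas like { "explanation": "string" }; Nappai's own validation may differ.

```python
import json

def matches_schema(reply_text, schema):
    """Check a JSON reply against a flat {field: type-name} schema.

    Illustrative only: supports the simple "string"/"number"/"boolean"
    type names used in the example schema, not full JSON Schema.
    """
    try:
        data = json.loads(reply_text)
    except json.JSONDecodeError:
        return False  # model returned something that is not valid JSON
    type_map = {"string": str, "number": (int, float), "boolean": bool}
    return all(
        key in data and isinstance(data[key], type_map[expected])
        for key, expected in schema.items()
    )

schema = {"explanation": "string"}
reply = '{"explanation": "Qubits can hold superpositions of 0 and 1."}'
print(matches_schema(reply, schema))
```

If the check fails, a common pattern is to retry the call or fall back to treating the reply as plain text.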

Related Components

  • OpenAIModel – Generates text using OpenAI’s GPT models.
  • AnthropicModel – Uses Anthropic’s Claude models for text generation.
  • PromptTemplate – Helps build dynamic prompts that can be fed into any LLM component.
  • JSONParser – Parses JSON output from LLMs into structured data for downstream use.

Tips and Best Practices

  • Choose the right model – Smaller models like grok-3-mini-beta are cheaper and faster, while larger models provide more detailed responses.
  • Use JSON Mode for structured data – If you need consistent output (e.g., for a database), enable JSON Mode and supply a schema.
  • Set a reasonable temperature – Lower temperatures (e.g., 0.1) give more deterministic answers; higher temperatures (e.g., 0.8) add creativity.
  • Limit max tokens – To control cost and response length, set a maximum token count that fits your use case.
  • Enable streaming for long responses – If you expect a lengthy reply, turn on Stream to see the output as it arrives.
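With Stream enabled, downstream code consumes the reply chunk by chunk instead of waiting for the full text. The loop below illustrates the pattern with a stand-in generator; a real stream would yield chunks from the Grok API as they arrive.

```python
# Simulated stream: a generator stands in for the API's chunked reply.
def fake_stream():
    for chunk in ["Quantum ", "computers ", "use ", "qubits."]:
        yield chunk

pieces = []
for chunk in fake_stream():
    pieces.append(chunk)        # e.g., append each chunk to the UI as it arrives

full_reply = "".join(pieces)    # the complete text once the stream ends
print(full_reply)
```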

Security Considerations

  • Keep your API key secret – Store the Grok API key in the Nappai credentials store, not in the component’s input fields.
  • Use least‑privilege credentials – If possible, create a credential with only the permissions needed for the models you use.
  • Monitor usage – Regularly check your Grok API usage to detect any unexpected activity.
  • Avoid exposing sensitive data – Do not include personal or confidential information in prompts unless you are sure the model will not store or leak it.