Perplexity

The Perplexity component lets you ask questions or give prompts to the Perplexity language model and get back natural language answers. It’s useful for creating chatbots, summarizing documents, or generating creative content directly from your dashboard.

How it Works

When you use this component, Nappai sends your prompt to the Perplexity API. You choose which model to use (for example, a small or large version of the Llama‑3.1 series) and set options such as how much text to generate (measured in tokens), how creative the responses should be, and whether you want multiple replies. The API returns the generated text, which Nappai then passes to the next component in your workflow.
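Outside the dashboard, the exchange described above can be sketched as a direct HTTP call (a minimal illustration, assuming Perplexity's OpenAI-compatible chat completions endpoint; `YOUR_API_KEY` is a placeholder):

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"

def build_request(prompt: str, model: str, api_key: str) -> urllib.request.Request:
    """Assemble the kind of HTTP request Nappai sends on your behalf."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_request(
    "Summarize this document.",
    "llama-3.1-sonar-small-128k-online",
    "YOUR_API_KEY",
)
# urllib.request.urlopen(req) would return the generated text as JSON;
# the call itself is omitted here because it requires a valid key.
```

In Nappai you never write this request yourself; the component builds it from the input fields described below.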

Inputs

Mapping Mode

This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:

  • Fixed: You type the value directly into the field.
  • Mapped: You connect the output of another component to use its result as the value.
  • JavaScript: You write JavaScript code to calculate the value dynamically.

This flexibility allows you to create more dynamic and connected workflows.

Input Fields

The following fields are available to configure this component. Which fields are visible can depend on the selected operation:

  • Perplexity API Key: The Perplexity API Key to use for the Perplexity model.
  • Input: The text or prompt you want the model to respond to.
  • Mapping Mode: Enable mapping mode to process multiple data records in batch.
  • Max Output Tokens: The maximum number of tokens to generate.
  • Model Name: Choose which Perplexity model to use (e.g., small, large, or chat versions).
  • N: Number of chat completions to generate for each prompt. Note that the API may not return the full n completions if duplicates are generated.
  • Stream: Stream the response from the model. Streaming works only in Chat.
  • System Message: System message to pass to the model.
  • Temperature: Controls how random the output is; lower values make the model more deterministic.
  • Top K: Decode using top‑k sampling: only the top_k most probable tokens are considered at each step. Must be a positive integer.
  • Top P: Decode using nucleus sampling: only tokens whose cumulative probability stays within top_p are considered when sampling.
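Taken together, the fields above map onto a request body roughly like this (a hedged sketch; the parameter names follow the common chat-completions convention and are an assumption about what Nappai sends, not its internals):

```python
# Illustrative mapping of the input fields to an API request body.
# Comments name the corresponding Nappai field.
def build_body(prompt, system_message=None, max_output_tokens=256,
               temperature=0.7, top_k=None, top_p=None, n=1, stream=False):
    messages = []
    if system_message:                        # System Message
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})  # Input
    body = {
        "model": "llama-3.1-sonar-small-128k-online",     # Model Name
        "messages": messages,
        "max_tokens": max_output_tokens,      # Max Output Tokens
        "temperature": temperature,           # Temperature
        "n": n,                               # N (may return fewer if duplicates)
        "stream": stream,                     # Stream (Chat only)
    }
    if top_k is not None:
        body["top_k"] = top_k                 # Top K: positive integer
    if top_p is not None:
        body["top_p"] = top_p                 # Top P: cumulative probability cap
    return body

body = build_body("Hello", system_message="Answer briefly.", temperature=0.2)
```

Fields you leave at their defaults in the dashboard are simply sent with those defaults.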

Outputs

  • Text: The generated text from the model. This is a Message type that can be used in later components.
  • Model: The configured language model instance, which can be passed to other components that accept a LanguageModel.

Usage Example

  1. Drag the Perplexity component onto your canvas.
  2. Connect the output of a “Read File” component (or any text source) to the Input field.
  3. Set Model Name to llama-3.1-sonar-small-128k-online.
  4. Set Max Output Tokens to 200 and Temperature to 0.5 for a concise, factual summary.
  5. Run the workflow.
  6. The Text output will contain the summary, which you can then feed into a “Write File” or “Send Email” component.
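For reference, steps 3 and 4 above correspond to a request configured like this (a sketch only; the user message standing in for the "Read File" output is a placeholder, and the system message is an illustrative choice, not something Nappai sets for you):

```python
# Settings from the walkthrough: small online model, concise factual summary.
request_body = {
    "model": "llama-3.1-sonar-small-128k-online",  # step 3: Model Name
    "max_tokens": 200,                             # step 4: Max Output Tokens
    "temperature": 0.5,                            # step 4: lower = more factual
    "messages": [
        {"role": "system", "content": "Summarize the user's text concisely."},
        {"role": "user", "content": "<contents of the file read in step 2>"},
    ],
}
```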

Related Components

  • OpenAIModel – Uses OpenAI’s GPT models for text generation.
  • GeminiModel – Connects to Google Gemini for conversational AI.
  • ClaudeModel – Uses Anthropic’s Claude models for safe, reasoned responses.
  • ChatGPT – A quick way to add ChatGPT to your workflow.

Tips and Best Practices

  • Keep your Perplexity API Key secret; store it in Nappai’s secure key vault.
  • Choose a smaller model for quick, cost‑effective responses; use larger models only when you need more nuance.
  • Lower the Temperature for factual, deterministic answers; raise it for creative or brainstorming tasks.
  • Use Mapping Mode to process many prompts at once, saving time on repetitive tasks.
  • If you need multiple variations, set N to a higher number but remember the API may return fewer unique replies.

Security Considerations

  • Never expose your API key in public workflows or share the workflow file without removing the key.
  • Use environment variables or Nappai’s secret management to store the key securely.
  • Monitor API usage to detect any unexpected activity that could indicate a key compromise.