OpenRouter

OpenRouter is a component that gives you access to a wide range of AI models from OpenAI, Anthropic, and other providers through a single interface. Once you set it up, you can choose the provider, pick a model, and adjust how creative the responses should be.

How it Works

When you add the OpenRouter component to your workflow, it first talks to the OpenRouter API to pull a list of available models. The provider (e.g., OpenAI, Anthropic) and the specific model name are shown in two dropdown menus that update automatically.
After you pick a model, the component builds a ChatOpenAI instance that points to https://openrouter.ai/api/v1. It passes your API key (stored in a credential) and any optional settings such as temperature, maximum tokens, or custom headers for site URL and app name.
When the workflow runs, the component sends your prompt to the chosen model and returns the generated text. If you enable Mapping Mode, you can feed many prompts at once and get a batch of responses.
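The model-discovery step described above can be sketched as follows. OpenRouter's model-listing endpoint returns entries whose "id" combines provider and model (for example "openai/gpt-4o-mini"), and the component splits those ids to populate the two dropdowns. The function name and sample data below are illustrative, not the component's actual code.

```python
# Sketch: group OpenRouter model ids by provider to fill the
# Provider and Model dropdowns. Assumes the /models response shape
# {"data": [{"id": "<provider>/<model>"}, ...]}.
from collections import defaultdict

def group_models_by_provider(models_response: dict) -> dict:
    """Group model ids like 'openai/gpt-4o-mini' by provider prefix."""
    grouped = defaultdict(list)
    for entry in models_response.get("data", []):
        model_id = entry.get("id", "")
        if "/" in model_id:
            provider, model_name = model_id.split("/", 1)
            grouped[provider].append(model_name)
    return dict(grouped)

# Example response, trimmed to the fields used here
sample = {"data": [{"id": "openai/gpt-4o-mini"},
                   {"id": "anthropic/claude-3-haiku"}]}
```

With the sample above, the result maps "openai" to its model names and "anthropic" to its own, which is exactly the provider-then-model structure the dropdowns present.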

Inputs

Mapping Mode

This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:

  • Fixed: You type the value directly into the field.
  • Mapped: You connect the output of another component to use its result as the value.
  • JavaScript: You write JavaScript code to dynamically calculate the value.

This flexibility allows you to create more dynamic and connected workflows.

Input Fields

The following fields are available to configure this component. Some fields are only visible in certain operations:

  • App Name: Your app name for OpenRouter rankings.
  • Input: The text prompt you want the model to respond to.
  • Mapping Mode: Enable mapping mode to process multiple data records in batch.
  • Max Tokens: Maximum number of tokens to generate.
  • Customized Model Name: Override the model name shown in the dropdown.
  • Model: The model to use for chat completion.
  • Provider: The AI model provider (required).
  • Site URL: Your site URL for OpenRouter rankings.
  • Stream: Stream the response from the model. Streaming works only in Chat.
  • System Message: System message to pass to the model.
  • Temperature: Controls randomness. Lower values are more deterministic, higher values are more creative.
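To make the fields concrete, here is a hedged sketch of how they could map onto an OpenRouter chat-completion request. The HTTP-Referer and X-Title headers are the ones OpenRouter uses for site/app attribution; the function itself is hypothetical and not the component's internal code.

```python
# Illustrative mapping of the input fields above onto a request to
# https://openrouter.ai/api/v1/chat/completions (not sent here).
def build_request(api_key, provider, model, prompt,
                  system_message=None, temperature=0.7,
                  max_tokens=None, site_url=None, app_name=None):
    headers = {"Authorization": f"Bearer {api_key}"}
    if site_url:
        headers["HTTP-Referer"] = site_url   # Site URL field
    if app_name:
        headers["X-Title"] = app_name        # App Name field

    messages = []
    if system_message:                       # System Message field
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})  # Input field

    body = {"model": f"{provider}/{model}",  # e.g. "openai/gpt-4o-mini"
            "messages": messages,
            "temperature": temperature}      # Temperature field
    if max_tokens is not None:
        body["max_tokens"] = max_tokens      # Max Tokens field
    return headers, body
```

For example, selecting provider "openai", model "gpt-4o-mini", and a Max Tokens of 64 yields a body with model "openai/gpt-4o-mini" and max_tokens 64, plus an Authorization header built from your credential.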

Credential
This component requires an OpenRouter API credential.

  1. In the Nappai dashboard, go to Credentials and create a new OpenRouter API credential.
  2. Enter your OpenRouter API Key.
  3. Back in the component, select this credential in the Credential field.
    The API key is stored securely and never exposed in the workflow.

Outputs

  • Text: The generated message from the AI model.
  • Model: The configured language model instance (useful if you need to pass it to another component).

Usage Example

  1. Add the OpenRouter component to your workflow.
  2. Select a credential that contains your OpenRouter API Key.
  3. Choose a provider (e.g., OpenAI) from the dropdown.
  4. Pick a model (e.g., gpt-4o-mini).
  5. Set Temperature to 0.5 for more focused answers.
  6. Enter your prompt in the Input field.
  7. Run the workflow.
  8. The Text output will contain the AI’s reply, and the Model output can be reused elsewhere.

If you want to process many prompts at once, toggle Mapping Mode, connect a data source to Mapping Data, and the component will return a list of responses.
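Conceptually, Mapping Mode applies the same model call to each record and collects the responses in order. The sketch below uses a stub in place of a real API call; run_mapping_mode and call_model are illustrative names, not part of the component.

```python
# Minimal sketch of Mapping Mode's batch behavior: one response per
# prompt, in input order. `call_model` stands in for the real request.
def run_mapping_mode(prompts, call_model):
    return [call_model(p) for p in prompts]

# Usage with a stub instead of a live API call:
responses = run_mapping_mode(
    ["Summarize Q1", "Summarize Q2"],
    call_model=lambda p: f"(reply to: {p})",
)
```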

Tips and Best Practices

  • Start with a low temperature (e.g., 0.3) for business or technical queries; increase it if you need more creative output.
  • Use Max Tokens to keep responses concise and avoid unnecessary costs.
  • Enable Mapping Mode when you have a spreadsheet or database of prompts; it saves time and keeps your workflow tidy.
  • Set Site URL and App Name if you want your usage to appear in OpenRouter’s usage statistics.
  • Test with a small prompt first to confirm the provider and model are working before scaling up.

Security Considerations

  • Store your OpenRouter API Key in a credential; never hard‑code it in the workflow.
  • The component uses HTTPS to communicate with the OpenRouter API, ensuring data is encrypted in transit.
  • If you enable Stream, the response is sent in real time, but the data is still protected by the same secure channel.