Azure OpenAI

Azure OpenAI is a component that lets you send text to Azure’s OpenAI service and receive a generated response. It works like a chat with the model, so you can ask questions, create summaries, or generate new content directly from your dashboard.

How it Works

When you use Azure OpenAI, the component talks to Microsoft’s Azure OpenAI API.

  1. Endpoint – You provide the URL that points to your Azure resource (e.g., https://example-resource.openai.azure.com/).
  2. Deployment – You choose the specific model deployment you want to use (the name you gave it when you created it in Azure).
  3. Credential – The component uses an Azure OpenAI API key that you store securely in Nappai’s credential manager.
  4. Parameters – You can tweak how the model behaves with settings such as temperature (creativity), max tokens (length), and streaming (real‑time output).
  5. Message – You send a prompt or question, and the component returns the model’s reply as a text message.

The component handles all the HTTP calls and authentication for you, so you only need to fill in the fields in the dashboard.
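
Under the hood, each call is a plain HTTPS request to Azure's chat-completions endpoint. The sketch below is illustrative Python, not Nappai's internal code: the URL path, `api-key` header, and body fields follow Azure's REST API for chat completions, while the helper function name is our own.

```python
import json

def build_azure_openai_request(endpoint, deployment, api_version, api_key,
                               prompt, temperature=0.7, max_tokens=256):
    """Assemble the URL, headers, and JSON body for one chat-completions call."""
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = {
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return url, headers, json.dumps(body)

url, headers, body = build_azure_openai_request(
    "https://example-resource.openai.azure.com/",
    "gpt-35-turbo", "2024-02-01", "<your-key>", "Hello!")
print(url)
# → https://example-resource.openai.azure.com/openai/deployments/gpt-35-turbo/chat/completions?api-version=2024-02-01
```

Nappai fills in the endpoint, deployment, and key from your configuration and sends this request for you.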

Credential

This component requires an Azure OpenAI API credential.

  1. Go to the Credentials section of Nappai and create a new credential of type Azure OpenAI API.
  2. Enter your Azure OpenAI API key (a password‑type field).
  3. In the component, select the credential you just created from the Credential dropdown.

The credential fields (API key, etc.) are not shown in the component’s input list; they are managed separately.
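
The same principle applies if you ever call the API outside Nappai: keep the key out of source code and load it at run time. A minimal sketch, assuming the conventional AZURE_OPENAI_API_KEY environment variable (the variable name is our choice, not something Nappai requires):

```python
import os

def load_api_key(env=None):
    """Read the Azure OpenAI key from the environment, never from source code."""
    env = os.environ if env is None else env
    key = env.get("AZURE_OPENAI_API_KEY")  # assumed variable name
    if not key:
        raise RuntimeError("AZURE_OPENAI_API_KEY is not set")
    return key
```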

Inputs

Mapping Mode

This component has a special mode called Mapping Mode. When you enable this mode using the toggle switch, an additional input called Mapping Data is activated, and each input field offers you three different ways to provide data:

  • Fixed: You type the value directly into the field.
  • Mapped: You connect the output of another component to use its result as the value.
  • Javascript: You write Javascript code to dynamically calculate the value.

This flexibility allows you to create more dynamic and connected workflows.
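
In code terms, each field behaves like a small dispatch on its mode. The sketch below is purely illustrative: the dictionary keys, and the Python callable standing in for the Javascript snippet, are our assumptions rather than Nappai's internal schema.

```python
def resolve_input(mode, field):
    """Resolve one input field according to its Mapping Mode setting."""
    if mode == "fixed":
        return field["value"]                # typed directly into the field
    if mode == "mapped":
        return field["upstream_output"]      # wired from another component
    if mode == "javascript":
        # Nappai evaluates Javascript per record; a callable plays that role here.
        return field["compute"](field.get("record", {}))
    raise ValueError(f"unknown mode: {mode}")

resolve_input("fixed", {"value": "hello"})                        # → 'hello'
resolve_input("javascript", {"compute": lambda r: r["name"].upper(),
                             "record": {"name": "ada"}})          # → 'ADA'
```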

Input Fields

  • API Version: Choose the Azure OpenAI API version you want to use.
  • Deployment Name: The name of the model deployment you created in Azure.
  • Azure Endpoint: Your Azure endpoint URL, including the resource name. Example: https://example-resource.openai.azure.com/.
  • Input: The text prompt or question you want the model to respond to.
  • Mapping Mode: Toggle that activates the Mapping Data input so you can process multiple records in a single run.
  • Max Tokens: The maximum number of tokens the model can generate. Set to 0 for unlimited.
  • Stream: If checked, the model’s response will be streamed back in real time (useful for long replies).
  • System Message: A system‑level instruction you can send to guide the model’s behavior.
  • Temperature: Controls creativity; higher values (e.g., 0.9) produce more varied output, lower values (e.g., 0.2) produce more deterministic output.
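
Taken together, these fields map onto the parameters of a chat-completions request roughly as follows. This is an illustrative sketch, not Nappai's code; note how "Max Tokens = 0 means unlimited" is implemented by simply omitting the limit from the request.

```python
def to_request_params(temperature, max_tokens, stream, system_message, prompt):
    """Translate the dashboard fields into chat-completions parameters."""
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})
    params = {"messages": messages, "temperature": temperature, "stream": stream}
    if max_tokens > 0:            # 0 = unlimited → no max_tokens field at all
        params["max_tokens"] = max_tokens
    return params

to_request_params(0.5, 0, False, "Be brief.", "Hi")
# no "max_tokens" key; messages = [system, user]
```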

Outputs

  • Text: The generated response from the model, returned as a message.
  • Model: The underlying language model object that can be reused by other components.

Usage Example

  1. Set up the credential: Create an Azure OpenAI API credential with your API key.
  2. Add the component to your workflow.
  3. Configure the fields:
    • Azure Endpoint: https://myresource.openai.azure.com/
    • Deployment Name: gpt-35-turbo
    • API Version: 2024-02-01
    • Input: Summarize the following paragraph: "Nappai is a powerful automation platform..."
    • Temperature: 0.5
    • Max Tokens: 150
  4. Run the workflow. The component will return a concise summary in the Text output.

Related Components

  • OpenAI GPT-3.5 – Uses OpenAI’s public API instead of Azure.
  • OpenAI GPT-4 – Accesses the GPT‑4 model via OpenAI’s API.
  • Azure OpenAI Chat – Similar to this component but tailored for chat‑style interactions.

Tips and Best Practices

  • Keep the Temperature low (e.g., 0.2–0.4) for factual or structured responses.
  • Use Max Tokens to control cost and response length.
  • Enable Stream when you need real‑time feedback for long outputs.
  • Take advantage of Mapping Mode to process many prompts in a single run.
  • Store your API key in a credential, never hard‑code it in the workflow.
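
When Stream is enabled, the response arrives as OpenAI-style server-sent events rather than a single JSON body. A minimal sketch of reassembling the streamed text, assuming the standard "data: …" / "data: [DONE]" framing used by the OpenAI-compatible API:

```python
import json

def collect_stream(sse_lines):
    """Join the text deltas from a streamed chat-completions response."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(payload)
        choices = chunk.get("choices") or []
        if choices:
            delta = choices[0].get("delta", {}).get("content")
            if delta:
                parts.append(delta)
    return "".join(parts)

events = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
print(collect_stream(events))  # → Hello
```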

Security Considerations

  • Store the Azure OpenAI API key in Nappai’s credential manager and never expose it in the workflow.
  • Use the least‑privilege principle: only grant the credential the permissions it needs.
  • Monitor usage and set quotas in Azure to prevent accidental over‑use.