AIML

AIML is a component that lets you generate natural‑language text by calling an AIML (Artificial Intelligence / Machine Learning) language model. It works inside Nappai’s visual workflow editor, so you can plug it into any automation or data‑processing pipeline without writing code.

How it Works

When you add the AIML component to a workflow, Nappai creates a connection to the AIML API, an OpenAI-compatible chat API, so OpenAI-style parameters and responses apply.
The component builds a ChatOpenAI client with the settings you provide—model name, temperature, maximum tokens, etc.—and sends the user’s input to the API.
The API returns a text response, which the component outputs as a Message.
Because the component uses Nappai’s credential system, you never type your API key directly into the workflow; you simply select a pre‑configured AI/ML API credential.
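Conceptually, the flow above boils down to assembling an OpenAI-style chat-completions payload from the component's settings. The sketch below is illustrative only — the helper name is hypothetical, not Nappai's actual code — but the field names follow the OpenAI-compatible format the AIML API accepts:

```python
from typing import Optional


def build_chat_payload(user_input: str,
                       model_name: str = "gpt-4o-mini",
                       system_message: Optional[str] = None,
                       temperature: float = 0.2,
                       max_tokens: int = 200) -> dict:
    """Assemble an OpenAI-compatible chat-completions payload.

    Mirrors the component's inputs: System Message becomes the first
    message, the Input becomes the user message.
    """
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": user_input})
    return {
        "model": model_name,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
```

The API's text reply to this payload is what the component surfaces as its Message output.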

Inputs

Mapping Mode

This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:

  • Fixed: You type the value directly into the field.
  • Mapped: You connect the output of another component to use its result as the value.
  • JavaScript: You write JavaScript code to dynamically calculate the value.

This flexibility allows you to create more dynamic and connected workflows.
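To make the three modes concrete, here is a hypothetical sketch of how a field's value could be resolved for each incoming record. This is not Nappai's implementation — the function and mode names are illustrative:

```python
def resolve_field(mode: str, spec, record: dict):
    """Illustrative resolution of one input field in Mapping Mode.

    - fixed: spec is the literal value typed into the field
    - mapped: spec names a key in the connected record
    - javascript: spec is code evaluated by Nappai's JS engine at runtime
    """
    if mode == "fixed":
        return spec
    if mode == "mapped":
        return record[spec]
    if mode == "javascript":
        raise NotImplementedError("evaluated by Nappai's JS engine at runtime")
    raise ValueError(f"unknown mode: {mode}")
```

For example, a Fixed temperature of 0.2 is used as-is, while a Mapped Input pulls its text from whichever component is wired into it.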

Credential
Select the AI/ML API credential you have configured in Nappai’s credentials section. The credential stores your API key securely, so you don’t need to enter it here.

Input Fields

  • AIML API Base: The base URL of the AI/ML API (an OpenAI-compatible endpoint). Defaults to https://api.aimlapi.com. You can point it at other OpenAI-compatible services such as JinaChat, LocalAI, or Prem.
  • Input: The data you want the model to process.
  • Mapping Mode: Enable mapping mode to process multiple data records in batch.
  • Max Tokens: The maximum number of tokens to generate. Set to 0 for unlimited tokens.
  • Model Kwargs: Additional keyword arguments for the model (e.g., stop, presence_penalty).
  • Model Name: The name of the model to use (e.g., gpt-4o-mini).
  • Seed: The seed controls the reproducibility of the job.
  • Stream: Stream the response from the model. Streaming works only in Chat.
  • System Message: System message to pass to the model.
  • Temperature: Controls the randomness of the output. Lower values make the output more deterministic.
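Two of these fields interact: Max Tokens set to 0 means "unlimited", and Model Kwargs are passed through alongside the named fields. A minimal sketch of how such settings might be combined (the helper is illustrative, not Nappai's code):

```python
from typing import Optional


def model_settings(max_tokens: int,
                   model_kwargs: Optional[dict] = None) -> dict:
    """Combine input fields into one settings dict for the model call.

    A Max Tokens of 0 means 'unlimited', so the parameter is simply
    omitted; extra Model Kwargs (e.g. stop, presence_penalty) are
    passed through unchanged.
    """
    settings: dict = dict(model_kwargs or {})
    if max_tokens > 0:
        settings["max_tokens"] = max_tokens
    return settings
```

So setting Max Tokens to 200 with Model Kwargs `{"stop": ["\n\n"]}` sends both parameters, while Max Tokens 0 sends only the kwargs.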

Outputs

  • Text: The generated text message (method: text_response).
  • Model: The configured language model instance (method: build_model).

Usage Example

  1. Add the AIML component to your workflow.
  2. Select the AI/ML API credential you created earlier.
  3. Set the following fields (example values):
    • Model Name: gpt-4o-mini
    • Temperature: 0.2
    • Max Tokens: 200
    • Input: "Write a short introduction about Nappai."
  4. Run the workflow.
  5. The component outputs the generated text in the Text field, which you can then feed to another component (e.g., a text‑to‑speech converter or a database writer).
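For comparison, the same call the workflow step performs can be made outside Nappai with plain HTTP. The sketch below only builds the request and does not send it; the `/v1/chat/completions` path is assumed to follow the OpenAI convention, and the API key shown is a placeholder:

```python
import json
import urllib.request


def prepare_request(api_key: str, payload: dict,
                    base_url: str = "https://api.aimlapi.com") -> urllib.request.Request:
    """Build (but do not send) an HTTP request to the AIML API."""
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",  # assumed OpenAI-style path
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending it with `urllib.request.urlopen(req)` would return the same kind of JSON response the component turns into its Text output.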

Related Components

If you prefer a different provider, Nappai offers similar model components:

  • OpenAIModel – Uses OpenAI’s official API.
  • AnthropicModel – Uses Anthropic’s Claude models.
  • AzureOpenAIModel – Uses Azure’s OpenAI service.

Tips and Best Practices

  • Use Mapping Mode when you need to process a list of inputs in a single run.
  • Keep the Temperature low (e.g., 0.1–0.3) for consistent, factual responses.
  • Set Max Tokens to a reasonable limit to avoid runaway costs.
  • Store your API key in a credential rather than hard‑coding it in the workflow.
  • If you need to pass additional parameters (like stop sequences), use the Model Kwargs field.

Security Considerations

  • The API key is stored in a credential, so it is never exposed in the workflow UI.
  • Ensure that only trusted users have access to the credential in Nappai’s credentials section.
  • When using Stream, be aware that the response is sent in real time; avoid exposing sensitive data in streamed content.