OpenAI
The OpenAI component lets you send a prompt to an OpenAI language model and receive a text response. It can also return the configured model object so you can reuse it elsewhere in your workflow. The component automatically pulls the list of available OpenAI models so you can pick the one that best fits your needs.
How it Works
When you add the component to your dashboard, it first checks that you have an OpenAI API credential selected. It then calls the OpenAI API to retrieve all available models and fills the Model Name dropdown with those that can generate text (e.g., `gpt-4`, `gpt-3.5-turbo`).
When you run the workflow, the component builds a `ChatOpenAI` instance with the chosen settings: model name, temperature, maximum tokens, and any extra keyword arguments. If you enable JSON Mode or provide a Schema, the component forces the model to return JSON. If you enable streaming, the response is streamed back to the dashboard as it is generated.
Inputs
Mapping Mode
This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:
- Fixed: You type the value directly into the field.
- Mapped: You connect the output of another component to use its result as the value.
- JavaScript: You write JavaScript code to dynamically calculate the value.
This flexibility allows you to create more dynamic and connected workflows.
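Conceptually, each input field is resolved by dispatching on its mode. The sketch below is a simplified illustration (not the component's implementation), and a Python callable stands in for the JavaScript option:

```python
def resolve_field(mode, value, record=None):
    """Resolve one input field under Mapping Mode (illustrative sketch).

    mode: "fixed", "mapped", or "javascript".
    record: the current Mapping Data record, used by the non-fixed modes.
    """
    if mode == "fixed":
        # The value typed directly into the field is used as-is.
        return value
    if mode == "mapped":
        # The value names a field of the connected component's output record.
        return (record or {})[value]
    if mode == "javascript":
        # Stand-in: the real component evaluates JavaScript; here a
        # Python callable plays that role for demonstration.
        return value(record)
    raise ValueError(f"unknown mode: {mode}")
```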
Input Fields
The following fields are available to configure this component. Some fields are only visible in certain operations:
- Custom Model Name: A custom model name to use. If not empty, it overrides the model selected in the Model Name dropdown.
- Input: The main prompt or text you want the model to process.
- JSON Mode: If enabled, the model outputs JSON even when no Schema is provided.
- Mapping Mode: Enable mapping mode to process multiple data records in batch.
- Max Tokens: The maximum number of tokens to generate. Set to 0 for unlimited tokens.
- Model Kwargs: Extra keyword arguments to pass to the model (e.g., `stop`, `presence_penalty`).
- Model Name: The name of the OpenAI model to use. The dropdown is populated automatically from your OpenAI account.
- OpenAI API Base: The base URL of the OpenAI API. Defaults to `https://api.openai.com/v1`. You can change this to use other APIs such as JinaChat, LocalAI, or Prem.
- Schema: The schema for the output of the model. You must include the word JSON in the prompt. If left blank, JSON mode will be disabled.
- Seed: Controls reproducibility; runs with the same seed and settings are more likely to return the same output.
- Stream: Stream the response from the model. Streaming works only in Chat.
- System Message: System message to pass to the model.
- Temperature: Controls randomness. Lower values make the output more deterministic.
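The interaction between the JSON Mode and Schema fields above can be summarized as a single predicate (an illustrative sketch, not the component's source):

```python
def json_output_enabled(json_mode: bool, schema: str) -> bool:
    """JSON output is on when JSON Mode is enabled or a Schema is provided.

    A blank Schema with JSON Mode off leaves JSON output disabled.
    """
    return json_mode or bool(schema.strip())
```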
Credential
This component requires an OpenAI API credential.
- Configure the credential in the Credentials section of Nappai.
- Select that credential in the component’s Credential field.
The credential stores the OpenAI API Key; it is not shown in the input list.
Outputs
- Text: The raw text response from the model (Message type).
- Model: The configured `ChatOpenAI` instance (LanguageModel type) that can be reused in other components.
Usage Example
- Add the component to your workflow.
- Select a credential that contains your OpenAI API Key.
- Choose a model from the dropdown (e.g., `gpt-4`).
- Enter a prompt in the Input field, such as “Summarize the following paragraph: …”.
- (Optional) Set Max Tokens to limit the length of the answer.
- Run the workflow. The Text output will contain the model’s reply, and the Model output can be passed to another component that needs the same model configuration.
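For reference, the steps above roughly correspond to a chat-completions request like the one sketched below. The field values mirror the usage example and are illustrative; the system message and numeric settings are assumptions, not required values:

```python
import json

# Illustrative payload mirroring the usage example above.
payload = {
    "model": "gpt-4",
    "messages": [
        # The System Message field, if set, comes first.
        {"role": "system", "content": "You are a concise assistant."},
        # The Input field becomes the user message.
        {"role": "user", "content": "Summarize the following paragraph: ..."},
    ],
    "temperature": 0.2,    # low temperature for a factual summary
    "max_tokens": 256,     # the optional Max Tokens limit
}
print(json.dumps(payload, indent=2))
```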
Related Components
- Chat – A higher‑level component that handles conversation context.
- LLM – A generic language‑model wrapper that can use any supported LLM provider.
- JSON Parser – Use this after the OpenAI component to extract fields from a JSON response.
Tips and Best Practices
- Keep Temperature low (e.g., 0.1–0.3) for factual or deterministic tasks.
- Use JSON Mode with a Schema when you need structured data back from the model.
- Enable Stream for long responses to see the output in real time.
- If you need to run the same prompt on many records, turn on Mapping Mode and connect a data source to the Mapping Data input.
- Store the Model output in a variable and reuse it in other components to avoid re‑initializing the model.
Security Considerations
- The OpenAI API Key is stored securely in the credential system and never exposed in the component’s UI.
- Ensure that only trusted users have access to the credential.
- When using JSON Mode, validate the output against the schema to guard against malformed data.
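A minimal way to guard against malformed output is to parse the response and check it before use. The sketch below uses only the standard library, and a required-keys check stands in for full schema validation:

```python
import json

def validate_json_output(text, required_keys):
    """Parse model output and check that the expected top-level keys exist.

    Raises ValueError on malformed JSON or missing keys; a stand-in for
    validating against the component's full Schema.
    """
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model returned malformed JSON: {exc}") from exc
    missing = [k for k in required_keys if k not in data]
    if missing:
        raise ValueError(f"missing required keys: {missing}")
    return data
```

For example, `validate_json_output('{"summary": "ok"}', ["summary"])` returns the parsed dict, while non-JSON text or a missing key raises `ValueError` instead of silently passing bad data downstream.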