MistralAI

MistralAI is a component that generates text with a Mistral AI language model.
You provide a prompt, set a few options, and the component returns the model’s reply.

How it Works

When you use MistralAI, the component sends your prompt to the MistralAI API.
It builds a request that includes the model you chose (for example, codestral-latest), the maximum number of tokens you want, and other settings such as temperature or top-p.
The API responds with the generated text, which the component passes back to the rest of your workflow.
All communication happens over HTTPS, so your data stays encrypted while it travels between Nappai and the MistralAI servers.
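
Conceptually, the exchange looks like the Python sketch below: a POST to the chat-completions endpoint carrying your prompt and settings, with the generated text read from the response. This is an illustration based on Mistral’s public API, not the component’s internal code, and the MISTRAL_API_KEY environment variable simply stands in for the credential that Nappai manages for you.

```python
import os
import requests

# Sketch of the kind of request the component sends on your behalf.
# The endpoint and payload follow Mistral's public chat-completions API;
# the component's actual request may include more settings.
API_BASE = "https://api.mistral.ai/v1"              # Mistral API Base
API_KEY = os.environ["MISTRAL_API_KEY"]             # held by a Nappai credential in practice

payload = {
    "model": "codestral-latest",                    # Model Name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant"},      # System Message
        {"role": "user", "content": "Write a one-line summary of HTTPS."},  # Input
    ],
    "temperature": 0.3,                             # Temperature
    "max_tokens": 256,                              # Max Tokens
}

response = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])  # the Text output
```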

Inputs

Mapping Mode

This component has a special mode called “Mapping Mode”.
When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:

  • Fixed – You type the value directly into the field.
  • Mapped – You connect the output of another component to use its result as the value.
  • Javascript – You write Javascript code to dynamically calculate the value.

This flexibility allows you to create more dynamic and connected workflows.
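
As a rough illustration of the idea, the sketch below resolves each field once per incoming record: a Fixed value stays constant, a Mapped value is read from the record, and a Javascript value is computed from the record (a Python function stands in for the Javascript expression here). The field and column names are invented for the example; in Mapping Mode the component then sends one request per record.

```python
# Conceptual sketch of how the three input modes resolve per record.
# Field names, column names, and the resolve() helper are illustrative only.
records = [
    {"language": "French", "text": "The cat sat on the mat."},
    {"language": "Spanish", "text": "It is raining heavily today."},
]

fields = {
    "model_name": ("fixed", "codestral-latest"),          # Fixed: typed directly into the field
    "input": ("mapped", "text"),                          # Mapped: taken from the record's "text" column
    "system_message": ("javascript",                      # Javascript: computed per record
                       lambda rec: f"Translate the text to {rec['language']}."),
}

def resolve(field, rec):
    mode, value = field
    if mode == "fixed":
        return value
    if mode == "mapped":
        return rec[value]
    return value(rec)  # "javascript" mode: evaluate the expression for this record

for rec in records:
    resolved = {name: resolve(field, rec) for name, field in fields.items()}
    print(resolved)  # one MistralAI request would be made with these values
```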

Credential

This component requires a Mistral AI API credential.

  1. First, add the credential in Nappai’s Credentials section.
  2. Then, select that credential in the Credential field of the component.
    The credential stores your Mistral API Key, which is kept secure by Nappai.

Input Fields

  • Input – The text prompt you want the model to respond to.
  • Mapping Mode – Toggle to enable batch processing of multiple records.
  • Max Concurrent Requests – How many requests can run at the same time.
  • Max Retries – How many times the component will retry if a request fails.
  • Max Tokens – The maximum number of tokens the model can generate. Set to 0 for unlimited.
  • Mistral API Base – The base URL of the Mistral API. Defaults to https://api.mistral.ai/v1.
  • Model Name – Choose which Mistral model to use (e.g., codestral-latest).
  • Random Seed – A number that seeds the model’s randomness for reproducible results.
  • Safe Mode – Enables the model’s safety filters.
  • Stream – If checked, the model streams its response back as it generates it.
  • System Message – A message that sets the model’s behavior (e.g., “You are a helpful assistant”).
  • Temperature – Controls how creative the output is (0 = deterministic, higher = more random).
  • Timeout – Maximum time (in seconds) to wait for a response before giving up.
  • Top P – Limits the model to the most probable tokens (nucleus sampling).
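
Most of these fields correspond to parameters of Mistral’s public chat-completions API. The sketch below shows one plausible way they could translate into a request, including a simple retry loop for Max Retries and the Timeout setting. The field-to-parameter mapping (for example, Safe Mode to safe_prompt) is an assumption based on the public API rather than a description of the component’s internals, and Max Concurrent Requests, which only matters when Mapping Mode sends many requests at once, is not shown.

```python
import os
import time
import requests

# Assumed mapping from the component's input fields to Mistral request
# parameters; the component may name or combine these differently.
settings = {
    "mistral_api_base": "https://api.mistral.ai/v1",
    "model_name": "codestral-latest",
    "max_tokens": 256,
    "temperature": 0.2,
    "top_p": 0.9,
    "random_seed": 42,
    "safe_mode": True,
    "stream": False,
    "timeout": 30,        # seconds
    "max_retries": 3,
}

payload = {
    "model": settings["model_name"],
    "messages": [{"role": "user", "content": "Write a haiku about encryption."}],
    "max_tokens": settings["max_tokens"],
    "temperature": settings["temperature"],
    "top_p": settings["top_p"],
    "random_seed": settings["random_seed"],
    "safe_prompt": settings["safe_mode"],    # assumed: Safe Mode maps to safe_prompt
    "stream": settings["stream"],
}
headers = {"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"}

for attempt in range(settings["max_retries"] + 1):
    try:
        resp = requests.post(
            f"{settings['mistral_api_base']}/chat/completions",
            headers=headers,
            json=payload,
            timeout=settings["timeout"],
        )
        resp.raise_for_status()
        break
    except requests.RequestException:
        if attempt == settings["max_retries"]:
            raise                        # out of retries: give up
        time.sleep(2 ** attempt)         # brief backoff before the next attempt

print(resp.json()["choices"][0]["message"]["content"])
```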

Outputs

  • Text – The generated text from the model.
  • Model – The configured language model object, which can be reused by other components.

Usage Example

  1. Add the MistralAI component to your workflow.
  2. Configure the credential: In the Credentials section, add a new Mistral AI API credential and paste your API key.
  3. Select the credential in the component’s Credential field.
  4. Set the prompt in the Input field, e.g., “Summarize the following paragraph: …”.
  5. Choose a model (e.g., codestral-latest) and adjust Temperature if you want a more creative summary.
  6. Run the workflow. The component will return the summary in the Text output, which you can feed into a display component or store in a database.

Related Components

  • OpenAIModel – Generates text using OpenAI’s GPT models.
  • ChatGPTModel – A specialized wrapper for ChatGPT.
  • LLM – A generic language-model component that can be configured with any provider.

Tips and Best Practices

  • Keep Temperature low (e.g., 0.2–0.4) for factual or concise responses.
  • Use Top P around 0.9 to balance creativity and relevance.
  • If you need the same prompt for many records, enable Mapping Mode and connect a data source to the Mapping Data input.
  • Turn on Safe Mode if you’re handling sensitive or regulated content.
  • For long responses, set Max Tokens to a higher value or leave it at 0 for unlimited.

Security Considerations

  • Store your Mistral API key only in the credential system; never hard‑code it in the workflow.
  • The component transmits data over HTTPS, but be mindful of the privacy of the prompts you send.
  • If you’re using the component in a public or shared environment, consider enabling Safe Mode to reduce the risk of inappropriate outputs.
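
If you ever call the API outside Nappai, the same rule applies: read the key from the environment or a secrets manager instead of embedding it in your code. A minimal Python sketch, with MISTRAL_API_KEY as an assumed variable name:

```python
import os

# Read the key from the environment; inside Nappai, the credential system fills this role.
api_key = os.environ.get("MISTRAL_API_KEY")
if not api_key:
    raise RuntimeError("MISTRAL_API_KEY is not set; configure a credential instead of hard-coding the key.")

headers = {"Authorization": f"Bearer {api_key}"}  # only ever sent over HTTPS
```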