Deepseek

Deepseek is a component that lets you ask questions or give instructions to the Deepseek language model and get back natural‑language answers. It’s a simple way to add AI text generation to your Nappai workflows.

How it Works

When you use Deepseek, the component sends your prompt (the Input field) to the Deepseek API.

  • The API key you selected in the Credential field authenticates the request.
  • You can tweak how the model behaves with options such as Temperature (how creative the answer is), Max Tokens (how long the answer can be), and Seed (to get the same answer every time).
  • If you turn on Stream, the model will send the answer piece by piece, which is useful for long responses or real‑time displays.
  • The component returns two outputs:
    • Text – the plain text answer from the model.
    • Model – the configured language‑model object that can be reused in other parts of your workflow.
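Under the hood, the request sent to the Deepseek API roughly corresponds to an OpenAI-style chat-completions payload built from the component's fields. The sketch below is illustrative only: `buildRequestBody` is a hypothetical name, not a Nappai internal, and the field names simply mirror the inputs described above.

```javascript
// Illustrative sketch of the payload assembled from the component's fields.
// buildRequestBody is a hypothetical name, not part of Nappai's API.
function buildRequestBody({ input, systemMessage, modelName, temperature, maxTokens, seed, stream }) {
  const messages = [];
  if (systemMessage) {
    messages.push({ role: "system", content: systemMessage }); // System Message field
  }
  messages.push({ role: "user", content: input });             // Input field

  const body = { model: modelName, messages, temperature, stream };
  if (maxTokens > 0) body.max_tokens = maxTokens; // 0 means "unlimited", so the cap is omitted
  if (seed !== undefined) body.seed = seed;       // a fixed seed makes runs reproducible
  return body;
}
```

Note how Max Tokens set to 0 results in no `max_tokens` cap at all, rather than a zero-length answer.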

Inputs

Mapping Mode

This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:

  • Fixed: You type the value directly into the field.
  • Mapped: You connect the output of another component to use its result as the value.
  • Javascript: You write Javascript code to dynamically calculate the value.

This flexibility allows you to create more dynamic and connected workflows.
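As an example of the Javascript option, a field expression might derive the prompt from an incoming record. The `record` variable name below is an assumption for illustration; check your Nappai version for the exact name exposed to mapping expressions.

```javascript
// Hypothetical Javascript-mode expression for the Input field.
// Assumes the current record is available as `record` (the name is an assumption).
const record = { subject: "Login fails", body: "User cannot sign in since Monday." };
const prompt = `Summarize this support ticket in one sentence: ${record.subject}. ${record.body}`;
```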

Input Fields

  • Input: The text prompt you want the model to respond to.
  • Mapping Mode: Enable this switch to process multiple records in a batch.
  • Max Tokens: Maximum number of tokens to generate. Set to 0 for unlimited.
  • Model Kwargs: Extra keyword arguments you want to pass to the model.
  • Model Name: Choose which Deepseek model to use.
  • Seed: Sets the random seed so that repeated runs with the same inputs produce the same answer.
  • Stream: Stream the response from the model. Streaming works only in Chat.
  • System Message: System message to pass to the model.
  • Temperature: Controls how creative the model’s responses are.
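To make the fields concrete, a conservative setup for factual answers might look like the object below. The model name and extra kwargs are examples, not defaults; pick the actual model from the Model Name dropdown.

```javascript
// Example configuration for factual, reproducible answers (values are illustrative).
const config = {
  modelName: "deepseek-chat",  // example model name; choose from the component's dropdown
  temperature: 0.2,            // low temperature => less creative, more factual output
  maxTokens: 200,              // cap the answer length; 0 would mean unlimited
  seed: 42,                    // fixed seed for consistent results across runs
  stream: false,               // batch mode; enable only for real-time display
  modelKwargs: { top_p: 0.9 }, // extra arguments passed straight through to the model
};
```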

Credential
This component requires a Deepseek API credential.

  1. First, create the credential in the Nappai credentials section and enter your Deepseek API Key.
  2. Then, select that credential in the component’s Credential field.
    The API key is stored securely and is not shown in the input list.

Outputs

  • Text – The generated text from the Deepseek model.
  • Model – The configured language‑model object that can be passed to other components.

Usage Example

  1. Create a Deepseek component in your workflow.
  2. Set the Credential to the Deepseek API key you created earlier.
  3. Enter a prompt in the Input field, e.g., “Explain the benefits of using AI in customer support.”
  4. Optionally adjust Temperature (e.g., 0.2 for more factual answers) and Max Tokens (e.g., 200).
  5. Connect the Text output to a Display component or a Save to File component to use the answer in your workflow.

If you enable Mapping Mode, you can feed a list of prompts and get a list of responses, all in one run.
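Conceptually, the batch behaves like the loop below: one request per prompt, with responses collected in the same order. `callModel` stands in for the component's actual API call (the name is an assumption).

```javascript
// Conceptual sketch of Mapping Mode: one model call per record, order preserved.
// callModel is a placeholder for the component's real API call.
async function runBatch(prompts, callModel) {
  const results = [];
  for (const prompt of prompts) {
    results.push(await callModel(prompt)); // one response per prompt
  }
  return results;
}
```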

Related Components

  • OpenAI – Generate text using OpenAI’s GPT models.
  • Anthropic – Use Claude models for text generation.
  • Google Gemini – Access Google’s Gemini LLMs.
  • Azure OpenAI – Connect to Azure‑hosted OpenAI services.

These components share a similar interface, so you can swap them out easily if you need a different provider.

Tips and Best Practices

  • Keep Max Tokens reasonable to avoid long wait times and high costs.
  • Use a low Temperature (e.g., 0.1–0.3) for factual or business‑critical responses.
  • Set a Seed if you need consistent results across runs.
  • Turn on Stream only when you need real‑time feedback; otherwise, the default batch mode is simpler.
  • When working with large batches, enable Mapping Mode to process many prompts efficiently.

Security Considerations

  • Store your Deepseek API key in the Nappai credentials store; never hard‑code it in the workflow.
  • The component automatically encrypts the key and never exposes it in logs or outputs.
  • If you share the workflow with others, they will only see the credential reference, not the actual key.