LLMMathChain

The LLMMathChain component lets you solve math problems by writing a short natural-language prompt. It sends your prompt to a language model, runs the Python code the model suggests, and returns the result as a simple text message that you can use elsewhere in your workflow.

⚠️ DEPRECATION WARNING

This component is deprecated and will be removed in a future version of Nappai. Please migrate to the recommended alternative components.

How it Works

When you provide a prompt in the Input field, the component forwards that prompt to the selected language model (the Model input). The model generates a short Python snippet that performs the requested calculation. The component then runs this snippet locally, captures the output, and returns it as a text message. All of this happens automatically, so you don’t need to write any code yourself.
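Conceptually, that flow can be sketched in a few lines of Python. The function and the stand-in model below are illustrative only, not the actual Nappai API:

```python
def llm_math_chain(prompt: str, generate) -> str:
    """Illustrative sketch of the LLMMathChain flow (not the real Nappai API)."""
    # 1. The language model turns the natural-language prompt into Python code.
    code = generate(prompt)
    # 2. The component evaluates that code locally.
    result = eval(code)
    # 3. The result is returned as a plain-text message.
    return str(result)

# Stand-in for a language model that maps a prompt to a Python expression.
fake_model = lambda prompt: "12 * 7"
print(llm_math_chain("What is 12 x 7?", fake_model))  # prints 84
```

The real component also captures printed output and errors from the generated snippet, but the shape of the pipeline is the same: prompt in, code generated, code executed, text out.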

Inputs

  • Model: The language model that will interpret your prompt and generate the Python code.
  • Input: The text prompt you want the model to solve. For example, “What is 12 × 7?” or “Integrate x² from 0 to 3.”

Outputs

  • Text: A Message containing the result of the calculation. This can be used as input to other components in your workflow.

Usage Example

  1. Drag the LLMMathChain component onto your canvas.
  2. Connect a LanguageModel component (e.g., OpenAI GPT‑4) to the Model input.
  3. In the Input field, type: What is 12 × 7?
  4. Run the workflow.
  5. The Text output will contain 84, which you can then feed into a notification component or store in a database.

Related Components

  • PythonComponent – Execute arbitrary Python code directly.
  • LLMChain – Send a prompt to a language model and receive a text response.
  • MathChain – Perform basic arithmetic operations without involving a language model.

Tips and Best Practices

  • Keep prompts short and clear to reduce the chance of the model generating incorrect code.
  • Use a reliable language model that supports code generation for best results.
  • If you need to perform many calculations, consider batching them into a single prompt to reduce latency.
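As a sketch of the batching tip, you can ask the model for a single Python list that answers several questions in one call. The helper function and the stand-in model here are hypothetical:

```python
def batch_math(questions, generate):
    """Hypothetical batching sketch: one prompt, one model call, several answers."""
    # Combine all questions into a single prompt instead of one model call each.
    prompt = "Return a Python list of answers to:\n" + "\n".join(questions)
    code = generate(prompt)            # e.g. "[12 * 7, 2 ** 10]"
    return [str(r) for r in eval(code)]

# Stand-in for a model that answers both questions with one list expression.
fake_model = lambda prompt: "[12 * 7, 2 ** 10]"
print(batch_math(["What is 12 x 7?", "What is 2^10?"], fake_model))  # prints ['84', '1024']
```

One model call instead of two keeps latency roughly constant as the number of questions grows, at the cost of a slightly more complex prompt.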

Security Considerations

Executing Python code generated by a language model can be risky. Make sure you trust the model and the environment in which the code runs. Avoid using this component with untrusted input or in a production environment that handles sensitive data unless you have proper safeguards in place.
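One possible safeguard is to evaluate only plain arithmetic rather than running arbitrary model-generated code. The sketch below walks the expression's AST and rejects anything that is not a number or a whitelisted operator; the whitelist is illustrative and is not a complete sandbox:

```python
import ast
import operator

# Whitelist of arithmetic operators; anything else is rejected.
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate a plain arithmetic expression, refusing function calls,
    attribute access, names, and everything else eval() would allow."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError("disallowed expression")
    return walk(ast.parse(expr, mode="eval"))

print(safe_eval("12 * 7"))  # prints 84
try:
    safe_eval("__import__('os').system('rm -rf /')")
except ValueError:
    print("blocked")  # prints blocked
```

An evaluator like this handles the arithmetic that LLMMathChain typically produces while refusing imports, file access, and other side effects outright.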