Ollama
Ollama is a component in Nappai that lets you use advanced AI to generate text. Think of it as a super-powered text generator that you can customize to fit your needs. It uses local AI models, meaning the processing happens on your own system, not on a remote server.
Relationship with Ollama API
This component connects directly to the Ollama API, which provides access to various AI language models. You choose the model you want to use, and Ollama handles the communication with the API to generate the text.
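Under the hood, a text-generation request to the Ollama API is an HTTP POST to the server's /api/generate endpoint. The sketch below assembles such a request in Python; the field names follow the public Ollama REST API, but the helper function itself is illustrative and not part of Nappai.

```python
import json

# Default address of a local Ollama server; matches the component's Base URL input.
OLLAMA_BASE_URL = "http://localhost:11434"

def build_generate_request(model: str, prompt: str, temperature: float = 0.2) -> dict:
    """Illustrative helper: assemble the JSON body for POST /api/generate."""
    return {
        "model": model,                          # e.g. "llama3.1"
        "prompt": prompt,
        "stream": False,                         # ask for one complete response
        "options": {"temperature": temperature}, # generation settings go under "options"
    }

payload = build_generate_request("llama3.1", "Say hello in one word.")
body = json.dumps(payload)  # this JSON string is what gets sent to OLLAMA_BASE_URL + "/api/generate"
```

With a running Ollama server, any HTTP client could send this body, for example `requests.post(f"{OLLAMA_BASE_URL}/api/generate", json=payload)`.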
Inputs
- Base URL: The web address where Ollama is running (typically http://localhost:11434). Usually you don’t need to change this; the default setting works in most cases.
- Model Name: Select the specific AI model you want to use from a list. You can find more models at https://ollama.com/library. This list updates automatically.
- Temperature: Controls how creative the AI is. A lower number (like 0.2) makes the text more focused and factual. A higher number makes it more creative and unpredictable.
- Format: (Advanced) Specifies the format of the output (e.g., JSON). Leave this as the default unless you have a specific need for a different format.
- Metadata: (Advanced) Allows you to add extra information to track how the AI generated the text. This is useful for advanced users.
- Mirostat: (Advanced) A setting to help control the consistency and randomness of the text. Leave this as “Disabled” unless you’re familiar with this technique.
- Mirostat Eta: (Advanced) A technical setting related to Mirostat. Leave this at the default unless you need to fine-tune the Mirostat algorithm.
- Mirostat Tau: (Advanced) Another technical setting related to Mirostat. Leave this at the default unless you need to fine-tune the Mirostat algorithm.
- Context Window Size: (Advanced) Controls how much of the previous text the AI considers when generating new text. Leave this at the default unless you have a specific reason to change it.
- Number of GPUs: (Advanced) Specifies the number of graphics processing units (GPUs) to use. Leave this at the default unless you have multiple GPUs and want to speed up processing.
- Number of Threads: (Advanced) Specifies the number of processing threads to use. Leave this at the default unless you want to adjust the processing speed.
- Repeat Last N: (Advanced) Helps prevent the AI from repeating the same phrases over and over. Leave this at the default.
- Repeat Penalty: (Advanced) A numerical value that penalizes repeated phrases. Leave this at the default.
- TFS Z: (Advanced) A technical setting related to text generation sampling. Leave this at the default.
- Timeout: (Advanced) Sets a time limit for the AI to generate text. Leave this at the default.
- Top K: (Advanced) A technical setting related to selecting the best words to use. Leave this at the default.
- Top P: (Advanced) Works together with Top K to control word selection. Leave this at the default.
- Verbose: Indicates whether the model’s response text should be printed as it is generated.
- Tags: (Advanced) Allows you to add comma-separated tags to the execution log.
- Stop Tokens: (Advanced) Specify comma-separated words that tell the AI to stop generating text.
- System: (Advanced) Specifies the system message (the instructions that set the model’s overall behavior) used during text generation.
- Template: (Advanced) Specifies the prompt template that wraps your input before it is sent to the model.
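Most of the advanced inputs above correspond to named options in the Ollama API. As a hedged reference (the names below follow the documented Ollama option names, not Nappai internals, and the values shown are common defaults rather than authoritative settings), they could be collected into a single options dictionary like this:

```python
# Illustrative mapping from the component's advanced inputs to Ollama API option names.
options = {
    "temperature": 0.8,       # Temperature
    "mirostat": 0,            # Mirostat (0 = disabled; 1 or 2 enable the algorithm)
    "mirostat_eta": 0.1,      # Mirostat Eta
    "mirostat_tau": 5.0,      # Mirostat Tau
    "num_ctx": 2048,          # Context Window Size
    "num_gpu": 1,             # Number of GPUs
    "num_thread": 8,          # Number of Threads
    "repeat_last_n": 64,      # Repeat Last N
    "repeat_penalty": 1.1,    # Repeat Penalty
    "tfs_z": 1.0,             # TFS Z
    "top_k": 40,              # Top K
    "top_p": 0.9,             # Top P
}

# Stop Tokens arrive as a comma-separated string in the UI; the API expects a list:
raw_stop = "###,END"
options["stop"] = [token.strip() for token in raw_stop.split(",")]
```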
Outputs
The Ollama component doesn’t list named outputs the way some other components do. Instead, it generates text based on your settings, and that text can be consumed by other components in your Nappai workflow. For example, you could use Ollama to generate a summary that is then sent as an email.
Usage Example
Let’s say you want to summarize a news article. You would connect the “URL” component (which fetches the article text) to the Ollama component. In Ollama, you’d select a suitable model (like “llama3.1”), set a low temperature (e.g., 0.2 for a factual summary), and then connect the output to a “Text Output” component to display the summary.
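To make the data flow concrete, here is a minimal sketch of what the summarization step amounts to at the API level: parsing the newline-delimited JSON that Ollama's /api/generate endpoint streams back. The sample stream below is hand-written for illustration; the "response"/"done" field names follow the documented Ollama streaming format.

```python
import json

# A hand-written sample of the NDJSON stream Ollama returns when "stream" is true:
# each line is a JSON object with a partial "response"; the last has "done": true.
sample_stream = "\n".join([
    json.dumps({"response": "The article ", "done": False}),
    json.dumps({"response": "covers three key points.", "done": True}),
])

def collect_stream(ndjson_text: str) -> str:
    """Concatenate the partial responses from a streamed Ollama reply."""
    parts = []
    for line in ndjson_text.splitlines():
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

summary = collect_stream(sample_stream)
print(summary)  # prints: The article covers three key points.
```

In the workflow, this assembled text is what flows onward, for example into a “Text Output” component.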
Templates
[List of templates where the Ollama component is used will be added here once the templates are available.]
Related Components
- Explain SQL results: Use Ollama to generate a human-readable explanation of complex SQL query results.
- Summarizer: Use Ollama to create summaries of large amounts of text.
- Many other components in Nappai can benefit from Ollama’s text generation capabilities.
Tips and Best Practices
- Experiment with different models and temperatures to find the best settings for your needs.
- Start with the default settings and only adjust advanced options if you understand their impact.
- For factual summaries, use a low temperature. For creative writing, use a higher temperature.
Security Considerations
Because Ollama runs models locally, your data never leaves your system, which minimizes the security risks compared to cloud-based AI services. However, ensure your Ollama installation is secure and protected from unauthorized access. Do not share sensitive data directly with the Ollama component without proper security measures in place.