NVIDIA Embeddings
This component helps Nappai understand the meaning of text by converting it into numerical representations called “embeddings.” An embedding is a list of numbers that captures the meaning of a piece of text, so the computer can compare texts by comparing their numbers. This allows Nappai to perform tasks like finding similar documents or categorizing information more effectively.
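To make the idea concrete, here is a tiny, illustrative Python sketch. The numbers are invented stand-ins for real embeddings, which typically contain hundreds or thousands of values; the point is simply that texts with similar meanings end up with similar number lists.

```python
import math

# Made-up example vectors standing in for real embeddings.
# A real embedding has hundreds or thousands of numbers.
embedding_cat = [0.9, 0.1, 0.3]     # "The cat sat on the mat"
embedding_kitten = [0.8, 0.2, 0.4]  # "A kitten rested on the rug"
embedding_stock = [0.1, 0.9, 0.2]   # "Stock prices fell sharply today"

def cosine_similarity(a, b):
    """Return how similar two vectors are (closer to 1.0 = more similar)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embedding_cat, embedding_kitten))  # high: similar meaning
print(cosine_similarity(embedding_cat, embedding_stock))   # low: different meaning
```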
Relationship with NVIDIA AI
This component uses NVIDIA’s powerful AI models and APIs to generate these embeddings. It connects directly to NVIDIA’s servers to perform the complex calculations needed.
Inputs
- Model: Choose the NVIDIA AI model you want to use. The default option is usually a good starting point. You can select from a list of available models.
- NVIDIA Base URL: This is the web address where Nappai connects to NVIDIA’s services. You usually don’t need to change this, but a refresh button is available if needed.
- Credential: This is your access key to use NVIDIA’s services. Nappai will guide you on how to securely manage this.
- Model Temperature (Advanced): This setting controls how creative the model is. A lower value makes the embeddings more consistent, while a higher value introduces more randomness. Leave this at the default unless you have a specific reason to change it. (A short configuration sketch showing how these inputs fit together appears right after this list.)
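Below is a minimal sketch of how these inputs could map to an embeddings client. It assumes the `langchain_nvidia_ai_endpoints` Python package and uses illustrative values for the model name and base URL; the exact parameter names Nappai uses internally may differ.

```python
# A minimal sketch of how the dashboard inputs could map to an embeddings client.
# Assumes the langchain_nvidia_ai_endpoints package; values are illustrative and
# may differ from Nappai's internal implementation.
import os
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embedder = NVIDIAEmbeddings(
    model="NV-Embed-QA",                             # "Model" input
    base_url="https://integrate.api.nvidia.com/v1",  # "NVIDIA Base URL" input (illustrative)
    nvidia_api_key=os.environ["NVIDIA_API_KEY"],     # "Credential" input (never hard-code keys)
)

# Embed a single query just to show the shape of the result.
vector = embedder.embed_query("What is retrieval-augmented generation?")
print(len(vector))  # length of the embedding vector
```

The Model Temperature setting is handled by the component itself and is not shown in this sketch.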
Outputs
This component doesn’t display an output directly in the dashboard. Instead, it generates embeddings that are then used by other Nappai components, such as those that search for similar text or organize information. You’ll see the results in the actions performed by those connected components.
Usage Example
Imagine you want to find documents similar to a specific query. You would use this component to create embeddings of your query and the documents. Then, a vector database component (like Pinecone or Weaviate) would use these embeddings to find the closest matches.
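Here is a hedged sketch of that flow in Python, again assuming the `langchain_nvidia_ai_endpoints` package and an API key in the `NVIDIA_API_KEY` environment variable. In a real Nappai flow, the vector database component performs the search step at scale; the manual similarity ranking below is only for illustration.

```python
# Illustrative sketch of the "find similar documents" flow. A vector database
# component (e.g. Pinecone or Weaviate) would normally perform the search step.
import math
import os
from langchain_nvidia_ai_endpoints import NVIDIAEmbeddings

embedder = NVIDIAEmbeddings(model="NV-Embed-QA",
                            nvidia_api_key=os.environ["NVIDIA_API_KEY"])

documents = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Shipping usually takes three to five business days.",
]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

query_vector = embedder.embed_query("How long do deliveries take?")
doc_vectors = embedder.embed_documents(documents)

# Rank documents by similarity to the query; the shipping document should rank first.
ranked = sorted(zip(documents, doc_vectors),
                key=lambda pair: cosine_similarity(query_vector, pair[1]),
                reverse=True)
print(ranked[0][0])
```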
Templates
[List of templates where the component can be seen and its configuration - This section needs to be populated with actual template names from the Nappai system.]
Related Components
- Semantic Text Splitter: This component breaks down large texts into smaller, more manageable chunks before creating embeddings.
- Couchbase, Upstash, Chroma DB, Weaviate, Vectara, Redis, PGVector, FAISS, Astra DB, Qdrant, Pinecone, MongoDB Atlas, Milvus, Supabase, Cassandra: These are vector store components that store and search embeddings. The NVIDIA Embeddings component works with many of them to help Nappai find information efficiently.
- Text Embedder: A related component that also works with text embeddings.
Tips and Best Practices
- Start with the default model settings. Changing them might require advanced knowledge of AI models.
- Ensure your NVIDIA credentials are securely stored and managed.
- If you encounter errors, check the NVIDIA Base URL to make sure it’s correct.
Security Considerations
Always protect your NVIDIA API credentials. Do not share them with others. Nappai employs security measures to protect your credentials, but it’s crucial to follow best practices for secure access management.