LM Studio Embeddings
LM Studio Embeddings lets you turn text into numerical vectors (embeddings) that can be used for searching, clustering, or feeding into other AI models. It connects to a local or remote LM Studio server, pulls the list of available models, and creates embeddings with a chosen model and temperature setting.
How it Works
When you add this component to your workflow, it first asks the LM Studio server for a list of available models. You pick one from the dropdown. The component then sends your text to the server’s embeddings endpoint, receives a vector representation, and outputs that vector. The whole process happens automatically, so you don’t need to write any code.
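Under the hood, this amounts to two requests against LM Studio's OpenAI-compatible REST API: one to list models and one to create an embedding. The sketch below is illustrative only (the endpoint paths follow the OpenAI convention, and the helper names are our own, not taken from the component's source):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # the "LM Studio Base URL" input


def list_models(base_url: str = BASE_URL) -> list[str]:
    """Fetch the available model IDs from the server (GET /models)."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload["data"]]


def embed(text: str, model: str, base_url: str = BASE_URL) -> list[float]:
    """Request an embedding vector for `text` (POST /embeddings)."""
    body = json.dumps({"model": model, "input": text}).encode()
    req = urllib.request.Request(
        f"{base_url}/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return payload["data"][0]["embedding"]


# Example usage (requires a running LM Studio server):
#   models = list_models()
#   vector = embed("hello world", models[0])
```

The component performs these steps for you; the sketch only shows what travels over the wire.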
Inputs
Input Fields
- LM Studio Base URL: The web address where your LM Studio server is running (e.g., http://localhost:1234/v1). This tells the component where to send requests.
- Model: The specific LM Studio model you want to use for embeddings. The list is refreshed automatically from the server.
- Model Temperature: A number that controls how random the model’s output can be. Lower values (e.g., 0.1) make the output more deterministic.
Outputs
- Embeddings: A vector representation of the input text. You can feed this output into other components such as similarity search, clustering, or downstream AI models.
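To illustrate how a downstream component might consume this output, similarity search usually reduces to comparing vectors with cosine similarity. This snippet is a self-contained example, not part of the component itself; the sample vectors are made up, since real ones come from the Embeddings output:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


# Two hypothetical embedding vectors standing in for embedded texts.
v1 = [0.1, 0.9, 0.3]
v2 = [0.1, 0.8, 0.4]
print(cosine_similarity(v1, v2))  # close to 1.0: the texts are similar
```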
Usage Example
- Add the component to your dashboard and connect the text you want to embed to its input.
- Set the LM Studio Base URL to the address of your server (default is http://localhost:1234/v1).
- Choose a model from the dropdown. If you’re unsure, pick the first one listed.
- Leave the temperature at 0.1 for stable results.
- Run the workflow. The component will output an embeddings vector that you can use in the next step of your automation.
Related Components
- OpenAIEmbeddings – Uses OpenAI’s API to generate embeddings.
- HuggingFaceEmbeddings – Generates embeddings from Hugging Face models.
- VectorStore – Stores and searches embeddings in a database.
Tips and Best Practices
- Confirm that the LM Studio server is running before you start the workflow; otherwise the component will fail to retrieve the model list.
- Use a specific model that matches your text domain for better quality embeddings.
- Keep the temperature low (e.g., 0.1) unless you need more varied outputs.
- Install langchain-nvidia-ai-endpoints if you haven’t already; the component won’t work without it.
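A quick way to confirm the dependency is present before launching the workflow (this check is a generic Python pattern, not something the component provides):

```python
import importlib.util

# Note: the import name uses underscores even though the pip package name uses hyphens.
if importlib.util.find_spec("langchain_nvidia_ai_endpoints") is None:
    print("Missing dependency: pip install langchain-nvidia-ai-endpoints")
else:
    print("langchain-nvidia-ai-endpoints is installed")
```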
Security Considerations
- The LM Studio Base URL may expose sensitive endpoints. Use a secure network or VPN when connecting to a remote server.
- The component currently uses a hard-coded API key (1234). In production, replace this with a secure key or environment variable to protect your credentials.
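One common fix is to read the key from an environment variable at startup and fall back to the placeholder only for local development. The variable name LM_STUDIO_API_KEY below is an assumption for illustration, not something the component defines:

```python
import os

# Read the key from the environment; "1234" is only a local-dev fallback.
api_key = os.environ.get("LM_STUDIO_API_KEY", "1234")
headers = {"Authorization": f"Bearer {api_key}"}
```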