LM Studio
The LM Studio component generates text by connecting to a local LM Studio server. It pulls the list of available models, lets you pick one, and sends your prompt to the chosen model to get a response.
How it Works
When you add the component, it first contacts the LM Studio API at the base URL you provide (default `http://localhost:1234/v1`). It retrieves the list of models so you can choose which one to use. When the workflow runs, the component builds a `ChatOpenAI` instance that points to that URL and sends your prompt. The model's reply is returned as a text message that can be used by other components in the dashboard.
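The flow above can be sketched with the Python standard library alone. This is an illustration of the OpenAI-compatible endpoints LM Studio exposes (`GET /models`, `POST /chat/completions`), not the component's actual source; the function names are made up for this sketch.

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt, system_message=None, temperature=0.7):
    """Build the OpenAI-compatible chat request the component sends."""
    messages = []
    if system_message:
        messages.append({"role": "system", "content": system_message})
    messages.append({"role": "user", "content": prompt})
    body = {"model": model, "messages": messages, "temperature": temperature}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def list_models(base_url):
    """GET /models returns the models the server currently has loaded."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return [m["id"] for m in json.load(resp)["data"]]
```

Sending the request built by `build_chat_request` with `urllib.request.urlopen` and reading `choices[0].message.content` from the JSON reply yields the same text the component returns.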
Inputs
Mapping Mode
This component has a special mode called “Mapping Mode”. When you enable this mode using the toggle switch, an additional input called “Mapping Data” is activated, and each input field offers you three different ways to provide data:
- Fixed: You type the value directly into the field.
- Mapped: You connect the output of another component to use its result as the value.
- JavaScript: You write JavaScript code to dynamically compute the value.
This flexibility allows you to create more dynamic and connected workflows.
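The three modes can be pictured as a small resolver. This is a conceptual sketch, not the platform's implementation: the field shape and function name are invented, and the JavaScript mode (which runs user code inside the app) is stood in for by a Python callable.

```python
def resolve_input(field, record):
    """Resolve one input field according to its mapping mode.

    field  -- {"mode": "fixed" | "mapped" | "javascript", "value": ...}
    record -- the upstream data record being processed
    """
    mode = field["mode"]
    if mode == "fixed":
        return field["value"]          # literal value typed into the field
    if mode == "mapped":
        return record[field["value"]]  # value names a key in the upstream record
    if mode == "javascript":
        return field["value"](record)  # stand-in for user-written JS on the record
    raise ValueError(f"unknown mode: {mode}")
```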
Input Fields
The following fields are available to configure this component. Some fields are only visible in certain operations or modes:
- Base URL: Endpoint of the LM Studio API. Defaults to `http://localhost:1234/v1` if not specified.
- Input: The text prompt you want the model to respond to.
- Mapping Mode: Enable mapping mode to process multiple data records in batch.
- Max Tokens: The maximum number of tokens to generate. Set to 0 for unlimited tokens.
- Model Kwargs: Additional keyword arguments for the model.
- Model Name: Select the model to use from the list of available models.
- Seed: The seed controls the reproducibility of the job.
- Stream: Stream the response from the model. Streaming works only in Chat.
- System Message: System message to pass to the model.
- Temperature: Controls the randomness of the output.
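How these fields translate into model settings can be sketched as follows. The function and key names are illustrative assumptions, not the component's real code; the one documented behavior it encodes is that Max Tokens set to 0 means "unlimited" (the cap is simply omitted).

```python
def build_model_config(fields):
    """Translate the component's input fields into constructor kwargs
    for a ChatOpenAI-style client (names are illustrative)."""
    config = {
        "base_url": fields.get("base_url", "http://localhost:1234/v1"),
        "model": fields["model_name"],
        "temperature": fields.get("temperature", 0.1),  # default is an assumption
        "streaming": fields.get("stream", False),
    }
    max_tokens = fields.get("max_tokens", 0)
    if max_tokens:                       # 0 means "unlimited": omit the cap
        config["max_tokens"] = max_tokens
    if fields.get("seed") is not None:   # seed makes runs reproducible
        config["seed"] = fields["seed"]
    config.update(fields.get("model_kwargs", {}))  # extra keyword arguments
    return config
```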
Outputs
- Text: The generated text response from the model (Message).
- Model: The configured language model instance (LanguageModel).
Usage Example
- Drag the LM Studio component onto the canvas.
- In the Base URL field, leave the default `http://localhost:1234/v1`, or change it if your LM Studio server runs elsewhere.
- Click the Refresh button next to Model Name to load the list of models from the server.
- Select a model (e.g., `gpt-4o-mini`).
- Set Temperature to 0.1 for more deterministic responses.
- In the Input field, type a prompt such as "Explain the concept of quantum computing in simple terms."
- Connect the Text output to a Display component or another LLM component.
- Run the workflow. The component will send the prompt to the chosen LM Studio model and display the generated text.
Related Components
- OpenAI Model – Connect to OpenAI’s hosted models.
- ChatGPT Model – A specialized wrapper for ChatGPT.
- LLM Callback Handler – Handles streaming and logging of LLM responses.
- Mapping Mode Example – Demonstrates how to use mapping mode for batch processing.
Tips and Best Practices
- Keep the Base URL consistent with your LM Studio installation to avoid connection errors.
- Use Max Tokens wisely; setting it too high can increase response time (cost applies only if you later swap in a hosted model).
- Enable Mapping Mode when you need to process many prompts at once.
- Set a Seed if you need reproducible results across runs.
- Use System Message to guide the model’s behavior (e.g., “You are a helpful assistant.”).
Security Considerations
The component uses a hard-coded API key (`1234`) for demonstration purposes. In a production environment, store the key securely (e.g., in environment variables or a secrets manager) and avoid committing it to source control. Ensure that the LM Studio server is accessible only from trusted networks.