Install lms

lms ships with LM Studio, so no additional installation steps are needed if you already have LM Studio installed.
Just open a terminal window and run lms:
lms --help
Open source

lms is MIT Licensed and is developed in this repository on GitHub: https://github.com/lmstudio-ai/lms
Command quick links

| Command | Syntax | Docs |
|---|---|---|
| Chat in the terminal | lms chat | Guide |
| Download models | lms get | Guide |
| List your models | lms ls | Guide |
| See models loaded into memory | lms ps | Guide |
| Control the server | lms server start | Guide |
| Manage the inference runtime | lms runtime | Guide |
Verify the installation

👉 You need to run LM Studio at least once before you can use lms.
To verify the installation, open a terminal window and run lms.
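As a quick sanity check, the snippet below first confirms the lms binary is on your PATH before invoking it (a minimal POSIX-shell sketch; the fallback message is illustrative, not lms output):

```shell
# Check that the lms CLI is reachable, then print its help text.
if command -v lms >/dev/null 2>&1; then
  lms --help
else
  # lms is bootstrapped by LM Studio; launch the app once if this branch fires.
  echo "lms not found on PATH"
fi
```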
Use lms to automate and debug your workflows

Start and stop the local server

lms server start
lms server stop
Learn more about lms server.
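For scripting, the two commands compose into a start/use/stop cycle. The sketch below assumes the server listens on LM Studio's default port, 1234, and skips itself on machines where lms is not installed:

```shell
# Start the local server, query its OpenAI-compatible REST endpoint, stop it.
if ! command -v lms >/dev/null 2>&1; then
  echo "lms not installed; skipping" >&2
  exit 0
fi
lms server start
# Port 1234 is LM Studio's default; adjust if you changed it in the app.
curl -s http://localhost:1234/v1/models
lms server stop
```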
List the local models on the machine

lms ls
Learn more about lms ls.
This will reflect the current LM Studio models directory, which you set in 📂 My Models tab in the app.
List the currently loaded models

lms ps
Learn more about lms ps.
Load a model (with options)

lms load [--gpu=max|auto|0.0-1.0] [--context-length=1-N]
--gpu=1.0 means 'attempt to offload 100% of the computation to the GPU'.
lms load openai/gpt-oss-20b --identifier="my-model-name"
This is useful when you want a stable, predictable identifier to refer to the model by in later commands.
Unload a model

lms unload [--all]
Learn more about lms load and unload.
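Put together, load, inspect, and unload can be scripted end to end. This is a hedged sketch: the model name is the example from above, the identifier and context length are arbitrary, and the script is a no-op where lms is absent:

```shell
# Scripted load/inspect/unload cycle.
if ! command -v lms >/dev/null 2>&1; then
  echo "lms not installed; skipping" >&2
  exit 0
fi
# Load with a stable identifier and explicit resource settings.
lms load openai/gpt-oss-20b --identifier="my-model-name" --gpu=auto --context-length=4096
# Confirm the model is resident in memory.
lms ps
# Unload every model currently loaded.
lms unload --all
```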