Introducing `lms`: LM Studio's CLI
2024-05-02
Today, alongside LM Studio 0.2.22, we're releasing the first version of `lms`, LM Studio's companion CLI tool.

With `lms` you can load/unload models, start/stop the API server, and inspect raw LLM input (not just output). It's developed on GitHub, and we welcome issues and PRs from the community.
`lms` ships with LM Studio and lives in LM Studio's working directory, under `~/.lmstudio/bin/`. When you update LM Studio, your `lms` version is updated along with it. If you're a developer, you can also build `lms` from source.
Bootstrap `lms` on your system

You need to run LM Studio at least once before you can use `lms`.

Afterwards, open your terminal and run one of these commands, depending on your operating system:
```shell
# Mac / Linux:
~/.lmstudio/bin/lms bootstrap

# Windows:
cmd /c %USERPROFILE%/.lmstudio/bin/lms.exe bootstrap
```
Afterwards, open a new terminal window and run `lms`.

This is the current output you will get:

```
$ lms
lms - LM Studio CLI - v0.2.22
GitHub: https://github.com/lmstudio-ai/lmstudio-cli

Usage
lms <subcommand>

where <subcommand> can be one of:

- status - Prints the status of LM Studio
- server - Commands for managing the local server
- ls - List all downloaded models
- ps - List all loaded models
- load - Load a model
- unload - Unload a model
- create - Create a new project with scaffolding
- log - Log operations. Currently only supports streaming logs from LM Studio via `lms log stream`
- version - Prints the version of the CLI
- bootstrap - Bootstrap the CLI

For more help, try running `lms <subcommand> --help`
```
`lms` is MIT licensed and developed in this repository on GitHub:
https://github.com/lmstudio-ai/lms
Use `lms` to automate and debug your workflows

Start and stop the local server:

```shell
lms server start
lms server stop
```
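These two commands are easy to script around. Below is a minimal sketch (assuming `lms` is on your `PATH`) of a context manager that starts the server before a job and stops it afterwards; the command runner is injectable, so the wrapper can be exercised without LM Studio installed.

```python
import subprocess
from contextlib import contextmanager

@contextmanager
def lms_server(runner=subprocess.run):
    """Run a block of work with the LM Studio local server up.

    `runner` defaults to subprocess.run, but a fake can be passed in
    for testing on machines without LM Studio.
    """
    runner(["lms", "server", "start"], check=True)
    try:
        yield
    finally:
        runner(["lms", "server", "stop"], check=True)

if __name__ == "__main__":
    # Record the commands instead of executing them, for demonstration.
    calls = []
    fake = lambda argv, check=True: calls.append(argv)
    with lms_server(runner=fake):
        pass
    print(calls)  # [['lms', 'server', 'start'], ['lms', 'server', 'stop']]
```

Because the stop lives in a `finally` block, the server is shut down even if the job inside the `with` block raises.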
List the locally downloaded models:

```shell
lms ls
```

This reflects the current LM Studio models directory, which you can set in the 📂 My Models tab in the app.
List the currently loaded models:

```shell
lms ps
```
Load a model, with options:

```shell
lms load [--gpu=max|auto|0.0-1.0] [--context-length=1-N]
```

`--gpu=1.0` means 'attempt to offload 100% of the computation to the GPU'.
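To illustrate how these flags compose, here is a small helper (hypothetical, not part of `lms` itself) that builds an `lms load` command line from a GPU offload setting and a context length, validating the ranges described above:

```python
def build_load_argv(model, gpu=None, context_length=None):
    """Build an `lms load` argv from the documented options.

    gpu may be "max", "auto", or a float between 0.0 and 1.0
    (the fraction of computation to offload to the GPU).
    """
    argv = ["lms", "load", model]
    if gpu is not None:
        if isinstance(gpu, str):
            if gpu not in ("max", "auto"):
                raise ValueError("gpu must be 'max', 'auto', or a fraction")
        elif not 0.0 <= gpu <= 1.0:
            raise ValueError("gpu fraction must be between 0.0 and 1.0")
        argv.append(f"--gpu={gpu}")
    if context_length is not None:
        if context_length < 1:
            raise ValueError("context length must be at least 1")
        argv.append(f"--context-length={context_length}")
    return argv

if __name__ == "__main__":
    print(build_load_argv("TheBloke/phi-2-GGUF", gpu=1.0, context_length=2048))
    # ['lms', 'load', 'TheBloke/phi-2-GGUF', '--gpu=1.0', '--context-length=2048']
```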
Load a model under a custom identifier:

```shell
lms load TheBloke/phi-2-GGUF --identifier="gpt-4-turbo"
```

This is useful if you want to keep the model identifier consistent.
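One reason to pin the identifier: client code that addresses the local server by model name keeps working no matter which model file is actually loaded. A sketch, assuming the server is running on LM Studio's default port 1234 and exposes its OpenAI-compatible `/v1/chat/completions` endpoint:

```python
import json
from urllib import request

def chat_payload(prompt, model="gpt-4-turbo"):
    """Build an OpenAI-style chat request addressed by model identifier."""
    return {
        "model": model,  # the identifier set via `lms load --identifier=...`
        "messages": [{"role": "user", "content": prompt}],
    }

def send(payload, url="http://localhost:1234/v1/chat/completions"):
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    payload = chat_payload("Hello!")
    print(payload["model"])  # gpt-4-turbo
    # send(payload)  # requires the local server to be running
```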
Unload a model (or all of them):

```shell
lms unload [--all]
```
Debug your prompts with `lms log stream`

```shell
lms log stream
```

`lms log stream` allows you to inspect the exact input string that goes to the model. This is particularly useful for debugging prompt template issues and other unexpected LLM behaviors.
```
$ lms log stream
I Streaming logs from LM Studio

timestamp: 5/2/2024, 9:49:47 PM
type: llm.prediction.input
modelIdentifier: TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF/tinyllama-1.1b-1t-openorca.Q2_K.gguf
modelPath: TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF/tinyllama-1.1b-1t-openorca.Q2_K.gguf
input: "Below is an instruction that describes a task. Write a response that appropriately completes the request. #### Instruction: Hello, what's your name? #### Response: "
```
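Because the stream is made of line-oriented `key: value` pairs, it is straightforward to post-process. Here is a sketch of a parser for an event like the one above (the field names are taken from the sample output and should be treated as illustrative):

```python
def parse_log_event(text):
    """Parse one `lms log stream` event into a dict.

    Expects the line-oriented `key: value` format shown in the
    sample output above; lines without a `: ` separator are skipped.
    """
    event = {}
    for line in text.splitlines():
        key, sep, value = line.partition(": ")
        if sep:
            event[key.strip()] = value.strip()
    return event

if __name__ == "__main__":
    sample = (
        "timestamp: 5/2/2024, 9:49:47 PM\n"
        "type: llm.prediction.input\n"
        "modelIdentifier: TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF/tinyllama-1.1b-1t-openorca.Q2_K.gguf"
    )
    event = parse_log_event(sample)
    print(event["type"])  # llm.prediction.input
```

In practice you would feed it chunks read from a `subprocess.Popen(["lms", "log", "stream"], ...)` pipe, splitting on blank lines between events.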
`lms` uses lmstudio.js to interact with LM Studio. You can build your own programs that do what `lms` does, and much more.
`lmstudio.js` is in pre-release public alpha. Follow along on GitHub: https://github.com/lmstudio-ai/lmstudio.js.
Discuss all things `lms` and `lmstudio.js` in the new #dev-chat channel on the LM Studio Discord server.
Download LM Studio for Mac / Windows / Linux from https://lmstudio.ai.
LM Studio 0.2.22 AMD ROCm - Technology Preview is available at https://lmstudio.ai/rocm
LM Studio on Twitter: https://twitter.com/lmstudio