Introducing `lms` - LM Studio's companion CLI tool

By LM Studio Team

Today, alongside LM Studio 0.2.22, we're releasing the first version of lms, LM Studio's companion CLI tool.

With lms you can load and unload models, start and stop the API server, and inspect raw LLM input (not just output). It's developed on GitHub, and we welcome issues and PRs from the community.

lms ships with LM Studio and lives in LM Studio's working directory, under ~/.cache/lm-studio/bin/. When you update LM Studio, your lms version is updated as well. If you're a developer, you can also build lms from source, as sketched below.
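
A build from source would look roughly like the following. This is a sketch that assumes a standard Node.js/npm toolchain; the repository's README has the authoritative steps.

# Clone the lms repository and build it locally.
# Assumes Node.js and npm are installed; the build script name is an assumption.
git clone https://github.com/lmstudio-ai/lms.git
cd lms
npm install
npm run build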

Bootstrap lms on your system

You need to run LM Studio at least once before you can use lms.

Afterwards, open your terminal and run one of these commands, depending on your operating system:

# Mac / Linux:
~/.cache/lm-studio/bin/lms bootstrap

# Windows:
cmd /c %USERPROFILE%/.cache/lm-studio/bin/lms.exe bootstrap

Once that's done, open a new terminal window and run lms.

You should see output like this:

$ lms
   [ASCII art banner reading "LM Studio CLI"]

lms - LM Studio CLI - v0.2.22
GitHub: https://github.com/lmstudio-ai/lmstudio-cli

Usage
lms <subcommand>

where <subcommand> can be one of:

- status - Prints the status of LM Studio
- server - Commands for managing the local server
- ls - List all downloaded models
- ps - List all loaded models
- load - Load a model
- unload - Unload a model
- create - Create a new project with scaffolding
- log - Log operations. Currently only supports streaming logs from LM Studio via `lms log stream`
- version - Prints the version of the CLI
- bootstrap - Bootstrap the CLI

For more help, try running `lms <subcommand> --help`

lms is MIT licensed and developed in this repository on GitHub:

https://github.com/lmstudio-ai/lms

Use lms to automate and debug your workflows

  • Start and stop the local server

lms server start
lms server stop

  • List the local models on the machine

lms ls

This will reflect the current LM Studio models directory, which you can set in the 📂 My Models tab in the app.

  • List the currently loaded models

lms ps

  • Load a model (with options)

lms load [--gpu=max|auto|0.0-1.0] [--context-length=1-N]

--gpu=1.0 means 'attempt to offload 100% of the computation to the GPU'.

  • Optionally, assign an identifier to your local LLM:

lms load TheBloke/phi-2-GGUF --identifier="gpt-4-turbo"

This is useful if you want to keep the model identifier your code uses consistent, regardless of which local model is actually loaded (see the end-to-end sketch after this list).

  • Unload models

lms unload [--all]
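
Putting these together, an end-to-end workflow might look like the sketch below. It assumes the local server is listening on LM Studio's default port 1234 with its OpenAI-compatible /v1/chat/completions endpoint; the model path is just an example.

# Start the server and load a model under a stable identifier.
lms server start
lms load TheBloke/phi-2-GGUF --identifier="gpt-4-turbo"

# Send one request to the local server (default port assumed).
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4-turbo",
    "messages": [{ "role": "user", "content": "Hello!" }]
  }'

# Clean up.
lms unload --all
lms server stop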

Debug your prompting with lms log stream

lms log stream allows you to inspect the exact input string that goes to the model.

This is particularly useful for debugging prompt template issues and other unexpected LLM behaviors.

$ lms log stream
I Streaming logs from LM Studio

timestamp: 5/2/2024, 9:49:47 PM
type: llm.prediction.input
modelIdentifier: TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF/tinyllama-1.1b-1t-openorca.Q2_K.gguf
modelPath: TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF/tinyllama-1.1b-1t-openorca.Q2_K.gguf
input: "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Hello, what's your name?
### Response:
"

lmstudio.js

lms uses lmstudio.js to interact with LM Studio.

You can build your own programs that can do what lms does and much more.

lmstudio.js is in pre-release public alpha. Follow along on GitHub: https://github.com/lmstudio-ai/lmstudio.js.
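
As a taste, a minimal lmstudio.js program might look like this TypeScript sketch. It follows the shape of the alpha-stage README, so treat the package name (@lmstudio/sdk) and method names as subject to change while the library is in alpha:

import { LMStudioClient } from "@lmstudio/sdk";

// Connect to the LM Studio instance running on this machine.
const client = new LMStudioClient();

// Load a model, just like `lms load` (the model path is an example).
const model = await client.llm.load("TheBloke/phi-2-GGUF");

// Stream a chat response token by token.
const prediction = model.respond([
  { role: "user", content: "What is the meaning of life?" },
]);
for await (const text of prediction) {
  process.stdout.write(text);
}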


Discuss all things lms and lmstudio.js in the new #dev-chat channel on the LM Studio Discord Server.

Download LM Studio for Mac / Windows / Linux from https://lmstudio.ai.

LM Studio 0.2.22 AMD ROCm Technology Preview is available at https://lmstudio.ai/rocm

LM Studio on Twitter: https://twitter.com/LMStudioAI