We've added new features to lms log stream. Previously, lms log stream output only the formatted user messages.
Starting in LM Studio 0.3.26, lms log stream gains a few new options:
--source: choose the log source (e.g. server, model)
--filter: filter logs by type (e.g. input, output, or input,output)
--json: output logs in JSON format
--stats: output tok/sec and other stats (works with --source model)
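These flags can also be combined. Here's a rough sketch (beyond the note that --stats works with --source model, exactly how the flags interact is our assumption):
Terminal
# Stream model input and output in JSON format
$ lms log stream --source model --filter input,output --json
# Stream model logs along with tok/sec and other stats
$ lms log stream --source model --stats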
Server logs
Use lms log stream --source server to stream logs from the HTTP API server.
Terminal
$ lms log stream --source server
Streaming logs from LM Studio
[2025-09-15 15:07:55][INFO][LM STUDIO SERVER] Success! HTTP server listening on port 1234
[2025-09-15 15:07:55][INFO]
[2025-09-15 15:07:55][INFO][LM STUDIO SERVER] Supported endpoints:
[2025-09-15 15:07:55][INFO][LM STUDIO SERVER] → GET http://localhost:1234/v1/models
[2025-09-15 15:07:55][INFO][LM STUDIO SERVER] → POST http://localhost:1234/v1/chat/completions
[2025-09-15 15:07:55][INFO][LM STUDIO SERVER] → POST http://localhost:1234/v1/completions
[2025-09-15 15:07:55][INFO][LM STUDIO SERVER] → POST http://localhost:1234/v1/embeddings
[2025-09-15 15:07:55][INFO]
[2025-09-15 15:07:55][INFO][LM STUDIO SERVER] Logs are saved into /Users/yb/.lmstudio/server-logs
[2025-09-15 15:07:55][INFO] Server started.
[2025-09-15 15:07:55][INFO] Just-in-time model loading active.
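Because the stream goes to stdout, it composes with ordinary shell tools. As a hedged example, you could keep a copy on disk while watching for errors (the filename here is arbitrary):
Terminal
# Save the stream to a file while surfacing only error lines
$ lms log stream --source server | tee server-stream.log | grep -i --line-buffered error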
Model log streaming
You can now stream model output, as well as user input.
Log formatted user message
lms log stream --source model --filter input
Log model output
Note that the model's message is buffered until it is complete, and only then printed.
lms log stream --source model --filter output
Log both input and output
lms log stream --source model --filter input,output
Example output
Terminal
$ lms log stream --source model --filter input,output
Streaming logs from LM Studio
timestamp: 9/15/2025, 3:16:39 PM
type: llm.prediction.input
modelIdentifier: gpt-oss-20b-mlx
modelPath: lmstudio-community/gpt-oss-20b-mlx-8bit
input:
<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-09-15
Reasoning: medium
# Valid channels: analysis, commentary, final. Channel must be included for every message.<|end|><|start|>user<|message|>hello<|end|><|start|>assistant
timestamp: 9/15/2025, 3:16:40 PM
type: llm.prediction.output
modelIdentifier: gpt-oss-20b-mlx
output:
<|channel|>analysis<|message|>User says "hello". We should respond politely. Provide greeting. Possibly ask how can help. That is straightforward.<|end|><|start|>assistant<|channel|>final<|message|>Hello! 👋 How can I assist you today?
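If you add --json to the command above, each event should arrive as a JSON object. Assuming those objects carry the same fields shown here (timestamp, type, modelIdentifier, and so on; the exact JSON shape is our assumption), you can slice the stream with jq:
Terminal
# Print just the event type and model identifier for each log event
$ lms log stream --source model --filter input,output --json | jq -r '"\(.type) \(.modelIdentifier)"'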
Desktop app improvements
Use native context menus across the app for a consistent feel
Add an "Enclose in Folder" bulk action when selecting multiple chats
Extra mechanisms to ensure child processes are cleaned up when LM Studio receives SIGKILL
Linux fixes
Fix rag-v1 on Linux. In 0.3.25, the built-in embedding model was not included, causing it to fail.
Full Changelog
Build 6
The LM Studio CLI (lms) now supports streaming server logs, as well as model input and output.
Use lms log stream --source server for server logs
Use lms log stream --source model --filter input,output for both model input and output logs