lms log stream

lms log stream allows you to inspect the exact input string that goes to the model.

This is particularly useful for debugging prompt template issues and other unexpected LLM behaviors.


Pro Tip

If you haven't already, bootstrap lms on your machine by following the setup instructions in the LM Studio documentation.
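As a rough sketch, bootstrapping amounts to running the lms binary that ships with LM Studio once. The path below assumes a typical macOS/Linux install and may differ by LM Studio version (newer builds place the binary under ~/.lmstudio/bin instead):

# Run the bundled lms binary once to add it to your PATH
# (install location is an assumption; check where your LM Studio put it)
~/.cache/lm-studio/bin/lms bootstrap

After restarting your terminal, lms should be available on your PATH.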

Debug your prompts with lms log stream

Open a terminal and run the following command:

lms log stream

This will start streaming logs from LM Studio. Send a message in the Chat UI, or a request to the local server, to see log entries appear.
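For example, a minimal request to the local server like the one below will show up in the stream. This sketch assumes the server is running on its default port 1234 and that a model is already loaded; depending on your LM Studio version, you may also need to pass a "model" field in the request body:

# Send a chat completion request to the local LM Studio server
# (port 1234 is the default; adjust if you changed it)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      { "role": "user", "content": "Hello, what is your name?" }
    ]
  }'

Each request or Chat UI message produces a log entry like the one shown below.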

Example output

$ lms log stream
I Streaming logs from LM Studio

timestamp: 5/2/2024, 9:49:47 PM
type: llm.prediction.input
modelIdentifier: TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF/tinyllama-1.1b-1t-openorca.Q2_K.gguf
modelPath: TheBloke/TinyLlama-1.1B-1T-OpenOrca-GGUF/tinyllama-1.1b-1t-openorca.Q2_K.gguf
input: "Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Hello, what's your name?
### Response:
"