Build 2
- Fixed a bug where input_tokens and cached_tokens were sometimes reported incorrectly in the /v1/responses API
- The /v1/responses API now returns better formatted errors

Build 1
- Added support for image_url input in the OpenAI-compatible /v1/chat/completions REST endpoint
- Added support for top_logprobs in the OpenAI-compatible /v1/responses REST endpoint
- Added cached_tokens statistics to the OpenAI-compatible /v1/responses REST endpoint
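The new image_url input follows the standard OpenAI chat-completions content-part format. A minimal sketch of a request body (the model name and base64 data below are placeholders, and the local server address is an assumption):

```python
import json

# Sketch of a request body for the OpenAI-compatible /v1/chat/completions
# endpoint using an image_url content part. Model name and image data are
# placeholders, not real values.
body = {
    "model": "your-vision-model",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    # image_url accepts an http(s) URL or a base64 data URI
                    "type": "image_url",
                    "image_url": {"url": "data:image/png;base64,iVBORw0KGgo..."},
                },
            ],
        }
    ],
}

# POST this as JSON to e.g. http://localhost:1234/v1/chat/completions
payload = json.dumps(body)
print("content parts:", [p["type"] for p in body["messages"][0]["content"]])
# → content parts: ['text', 'image_url']
```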
- Fixed a "previous_response_not_found" bug in the /v1/responses endpoint caused by an internal file read error

New in LM Studio 0.3.31:
- lms runtime get
- lms runtime update

Build 7
Build 6
Build 5
- Added v1/chat/completions prompt processing progress to the Developer Logs

Build 4
- Fixed a "vision.imageResizeSettings key does not exist" error

Build 3
- New lms runtime get and lms runtime update CLI commands to manage runtime extensions from the terminal. Run lms runtime -h for more info.
- Fixed an issue involving the <think> token

Build 2
Build 1
- Fixed "ValueError: Image features and image tokens do not match" for Qwen3 VL Thinking models

Build 2
- Fixed "ValueError: Image features and image tokens do not match" for Qwen3 VL Thinking models

Build 1
- Added support for the developer role in the /v1/responses API endpoint. For now, developer messages will be processed as system messages internally.
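Taken together, the /v1/responses changes above can be sketched as one request/response round trip. This is an illustrative sketch, not output from a real server: the field names follow the OpenAI Responses API conventions the endpoint mirrors (a developer role input message, top_logprobs, previous_response_id, and usage.input_tokens_details.cached_tokens), and the model name, IDs, and all numbers are made up.

```python
# Illustrative request body for the OpenAI-compatible /v1/responses
# endpoint. Model name and response IDs are placeholders.
request_body = {
    "model": "your-model",
    "input": [
        # developer messages are accepted; per the note above, LM Studio
        # currently processes them as system messages internally
        {"role": "developer", "content": "Answer in one sentence."},
        {"role": "user", "content": "What is LM Studio?"},
    ],
    "top_logprobs": 3,                   # request per-token logprobs
    "previous_response_id": "resp_123",  # chain onto an earlier response
}

# Illustrative shape of the usage block in a response (values made up),
# including the cached_tokens statistic mentioned in the fixes above.
response = {
    "id": "resp_124",
    "usage": {
        "input_tokens": 42,
        "input_tokens_details": {"cached_tokens": 30},
        "output_tokens": 12,
    },
}

cached = response["usage"]["input_tokens_details"]["cached_tokens"]
print(f"cached prompt tokens: {cached}")
# → cached prompt tokens: 30
```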