Changelog 👾

Jan 16, 2026

LM Studio 0.3.39

Build 2

  • Fixed a bug where the parameters reported back by the /v1/responses API were sometimes incorrect
  • Fixed a bug where input_tokens and cached_tokens were sometimes reported incorrectly by the /v1/responses API
  • The /v1/responses API now returns better-formatted errors

Build 1

  • Support for image_url input in OpenAI-compatible /v1/chat/completions REST endpoint
  • Support for top_logprobs in OpenAI-compatible /v1/responses REST endpoint
  • Output cached_tokens statistics in OpenAI-compatible /v1/responses REST endpoint
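
These endpoint additions follow the OpenAI request shapes. A minimal sketch in Python of the two request bodies, assuming LM Studio's default local server at http://localhost:1234 and placeholder model names and image URL (nothing is sent here; this only shows the payload structure):

```python
import json

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server

# /v1/chat/completions body with an image_url content part (placeholder URL)
chat_payload = {
    "model": "some-vision-model",  # placeholder model identifier
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.png"}},
            ],
        }
    ],
}

# /v1/responses body requesting the top 2 alternative tokens per position
responses_payload = {
    "model": "some-model",  # placeholder model identifier
    "input": "Say hello",
    "top_logprobs": 2,
}

print(json.dumps(chat_payload, indent=2))
```

POST either body as JSON to the matching endpoint with any HTTP client; usage statistics such as cached_tokens appear in the response's usage block.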
Jan 13, 2026

LM Studio 0.3.38

Build 1

  • [Mac][M5] Enable auto-upgrade to MLX NAX engine to fix MLX model crashes and improve performance
Jan 6, 2026

LM Studio 0.3.37

Build 1

  • Support for the LFM2 tool call format
  • Fix "Cannot read properties of null (reading 'architecture')" when using a generator
Jan 2, 2026

LM Studio 0.3.36

Build 1

  • FunctionGemma Support
Dec 13, 2025

LM Studio 0.3.35

Build 1

  • [MLX] Support for Devstral-2 and GLM-4.6V
  • Fixed a bug where the default system prompt was still sent to the model even after the system prompt field was cleared.
  • Fixed a bug where exported chats did not include the correct system prompt.
  • Fixed a bug where the token count was incorrect when a default system prompt existed but the system prompt field was cleared.
  • Fixed a bug where sometimes the tool call results were not being added to the context correctly
Dec 9, 2025

LM Studio 0.3.34

Build 1

  • Support for EssentialAI's rnj-1 model
  • Fix a Jinja prompt formatting bug for some models where EOS tokens were not being included properly
Dec 2, 2025

LM Studio 0.3.33

Build 1

  • Support for MistralAI's Ministral models (3B, 8B, 13B)
  • Support for Olmo-3 tool calling
Nov 19, 2025

LM Studio 0.3.32

Build 2

  • Support for GLM 4.5 tool calling
  • [MLX] Fixed prompt template bug that caused GLM-4.1V to not recognize images
  • Support for olmOCR-2
  • Fix a bug where the Download button would sometimes continue showing for an already downloaded model

Build 1

  • Support for passing base64 images into OpenAI-compatible /v1/responses endpoint
    • See https://platform.openai.com/docs/guides/images-vision?api-mode=responses&format=base64-encoded#giving-a-model-images-as-input for details
  • Flash Attention is now enabled by default for Vulkan and Metal llama.cpp engines
  • Fix an OpenAI-compatible /v1/responses endpoint "previous_response_not_found" bug caused by an internal file read error
  • Fix a bug where the update toast would sometimes retrigger and close
  • Fix the "No model selected for this chat and no lastUsedModel recorded. Please select a model" error
  • Fixed cases where downloading additional variants of the same model sometimes wouldn't get nested correctly
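
The base64 image input added in this build uses the Responses API's input_image content part with a data URL, per the OpenAI guide linked above. A minimal sketch in Python, with a placeholder model name and stand-in image bytes:

```python
import base64
import json

# Stand-in bytes; in practice, read your actual image file
image_bytes = b"\x89PNG\r\n\x1a\n"
b64 = base64.b64encode(image_bytes).decode("ascii")

# /v1/responses body embedding the image as a base64 data URL
payload = {
    "model": "some-vision-model",  # placeholder model identifier
    "input": [
        {
            "role": "user",
            "content": [
                {"type": "input_text", "text": "Describe this image."},
                {"type": "input_image",
                 "image_url": f"data:image/png;base64,{b64}"},
            ],
        }
    ],
}

print(json.dumps(payload)[:100])
```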
Nov 4, 2025

LM Studio 0.3.31

New in LM Studio 0.3.31:

  • Improve image understanding output quality, especially for OCR tasks
  • Default image attachment size is now 2048px on the longest side for better quality with vision models
    • Can be changed in Settings > Chat > Image Inputs
  • Support for MiniMax M2 tool calling
  • Flash Attention is now enabled by default for CUDA engines
  • New CLI commands to manage engines:
    • lms runtime get
    • lms runtime update
  • Improved macOS 26 compatibility and support

Build 7

  • Fixed a bug where sometimes the Load Model button was greyed out for large MLX models
  • Set Flash Attention on by default for CUDA engines

Build 6

  • Added tool use support for MiniMax M2
  • Added better controls for image input size for vision models

Build 5

  • Update dependencies to support macOS 26
  • Server UX improvements
    • Log v1/chat/completions prompt processing progress to Developer Logs
    • Model-specific v1/chat/completions Developer Logs
    • Log an error when a JIT model load fails due to guardrails settings

Build 4

  • Fixed "vision.imageResizeSettings key does not exist" error
  • Added settings for configuring maximum image input dimensions for vision models.
    • Accessible via Settings > Chat > Image Inputs > "Image resize bounds"
    • These settings control the maximum dimensions to which input images are resized before being sent to vision-capable models.

Build 3

  • Added lms runtime get and lms runtime update CLI commands to manage runtime extensions from the terminal.
    • Run lms runtime -h for more info.
  • Fix issue where sometimes reasoning models' output would start with a plaintext <think> token

Build 2

  • Increase image resize limits to 1024x1024 for improved image processing performance (superseded: now 2048px)
    • In a future update this will be user configurable (fixed in build 4)

Build 1

  • [MLX] Fix ValueError: Image features and image tokens do not match for Qwen3 VL Thinking models
  • Fix occasional UI crash when searching models
Oct 21, 2025

LM Studio 0.3.30

Build 2

  • [MLX] Fix ValueError: Image features and image tokens do not match for Qwen3 VL Thinking models
  • Fix occasional UI crash when searching models

Build 1

  • Fixed a streaming mode bug that was impacting tool calling functionality for Qwen 3 Coder in the /v1/chat/completions API.
  • [Vulkan] Fix bug where models were not being loaded onto iGPUs (requires runtime update)
  • Compatibility support for the developer role in the /v1/responses API endpoint. For now, developer messages are processed as system messages internally.
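
A developer message in a /v1/responses request can be sketched as follows (placeholder model name); as noted above, it is processed as a system message internally:

```python
import json

# Placeholder model name; the server maps "developer" to "system" internally
payload = {
    "model": "some-model",
    "input": [
        {"role": "developer", "content": "Answer in one short sentence."},
        {"role": "user", "content": "What does the /v1/responses endpoint do?"},
    ],
}

print(json.dumps(payload))
```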