Blog 👾

Looking for the Changelog? Click here.

Use your LM Studio Models in Claude Code
Run Claude Code with any local model using LM Studio's Anthropic-compatible API
LM Studio Team • January 30, 2026
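
The post covers Claude Code specifically, but the same Anthropic-compatible endpoint can be exercised from any Anthropic client. A minimal sketch, assuming LM Studio's server is running on its default port 1234 and accepts the standard Messages API shape; the model name is a placeholder for whatever you have loaded:

```python
# Sketch only: base URL, API key, and model identifier are assumptions about a
# local LM Studio setup, not confirmed values from the post.
import anthropic

client = anthropic.Anthropic(
    base_url="http://localhost:1234",  # assumed: LM Studio's Anthropic-compatible server
    api_key="lm-studio",               # a local server typically ignores the key
)

message = client.messages.create(
    model="qwen/qwen3-4b",  # hypothetical: any model loaded in LM Studio
    max_tokens=256,
    messages=[{"role": "user", "content": "What is an Anthropic-compatible API?"}],
)
print(message.content[0].text)
```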

Introducing LM Studio 0.4.0
Server deployment, parallel requests with continuous batching, new REST API endpoint, and refreshed application UI
LM Studio Team • January 28, 2026

Open Responses with local models via LM Studio
Update to LM Studio 0.3.39 for Open Responses support
LM Studio Team • January 15, 2026

LM Studio 0.3.38
Mac M5 MLX fix, enable optimized MLX auto-upgrade
LM Studio Team • January 6, 2026

LM Studio 0.3.37
LFM2 tool call support and a generator stability fix
LM Studio Team • January 6, 2026

How to fine-tune FunctionGemma and run it locally
Step-by-step guide for fine-tuning FunctionGemma with Unsloth, and then running it in LM Studio
LM Studio Team • December 23, 2025

LM Studio 0.3.36
Support for Google's FunctionGemma (270M)
LM Studio Team • December 18, 2025

LM Studio 0.3.35
Devstral-2, GLM-4.6V, and system prompt fixes
LM Studio Team • December 12, 2025

LM Studio 0.3.34
EssentialAI rnj-1 support and a Jinja prompt formatting fix
LM Studio Team • December 10, 2025

Ministral 3
LM Studio 0.3.33: Ministral 3 support, Olmo-3 tool calling, and release notes
LM Studio Team • December 2, 2025

LM Studio 0.3.32
GLM 4.5 tool calling, olmOCR-2, improved image input handling in /v1/responses, Flash Attention defaults for Vulkan/Metal, and bug fixes.
LM Studio Team • November 19, 2025

LM Studio 0.3.31
Image input improvements, MiniMax M2 tool calling, Flash Attention default for CUDA, new CLI runtime management, macOS 26 support, and bug fixes.
LM Studio Team • November 4, 2025

OpenAI gpt-oss-safeguard
Open safety reasoning models (120B and 20B) with bring-your-own-policy moderation, now supported in LM Studio on launch day.
LM Studio Team • October 29, 2025

NVIDIA DGX Spark
LM Studio now ships for Linux on ARM and launches with NVIDIA DGX Spark — a tiny but mighty Linux ARM box.
LM Studio Team • October 14, 2025

LM Studio 0.3.30
Bug fixes: Qwen tool-calling streaming, Vulkan iGPU loading, and developer role support in /v1/responses.
LM Studio Team • October 8, 2025

Use OpenAI's Responses API with local models
OpenAI-compatible /v1/responses endpoint (stateful chats, remote MCP, custom tools)
LM Studio Team • October 6, 2025
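
A minimal sketch of calling that endpoint with the openai Python client, assuming the LM Studio server is on its default port 1234; the model identifier is a placeholder for whatever you have loaded locally:

```python
# Sketch: the port and model name are assumptions; any locally loaded model works.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.responses.create(
    model="openai/gpt-oss-20b",  # hypothetical local model identifier
    input="Write one sentence about running models locally.",
)
print(response.output_text)  # convenience accessor for the text output
```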

LM Studio 0.3.28
Default model-variant selection in My Models, better RAM/VRAM estimates (mixed quantizations, mislabeled params, non-transformers), and bug fixes.
LM Studio Team • October 1, 2025

LM Studio 0.3.27: Find in Chat and Search All Chats
Find/Search in chats, model resource estimation (GUI + CLI), CLI polish, and bug fixes.
LM Studio Team • September 24, 2025

LM Studio 0.3.26
Stream server and model logs (input and output) with lms log stream, native context menus, and bug fixes.
LM Studio Team • September 15, 2025

LM Studio 0.3.25
Select multiple chats for bulk actions, trash bin support, Google EmbeddingGemma, and NVIDIA Nemotron-Nano-v2 with tool calling capabilities.
LM Studio Team • September 4, 2025

LM Studio 0.3.24
Support for ByteDance/Seed-OSS, improved markdown code blocks and tables, bug fixes.
LM Studio Team • August 28, 2025

LM Studio 0.3.23
Improved in-app chat tool calling reliability for gpt-oss, and the ability to place MoE expert weights on CPU
LM Studio Team • August 12, 2025

Run OpenAI's gpt-oss locally in LM Studio
We worked with OpenAI to ensure LM Studio supports running gpt-oss models locally on launch day 🎉
LM Studio Team • August 5, 2025

LM Studio 0.3.20
Bug fixes, UI improvements, and support for Qwen3-Coder-480B-A35B with tools.
LM Studio Team • July 23, 2025

LM Studio 0.3.19
ROCm / Linux support for AMD 9000 series GPUs, bug fixes for model loading, UI improvements, and auto-deletion of engine dependencies to save disk space.
LM Studio Team • July 21, 2025

LM Studio 0.3.18
MCP bug fixes and improvements, new streaming options and bug fixes in the OpenAI-compatible API, improved tool calling for Mistral models, and UI touchups.
LM Studio Team • July 10, 2025

LM Studio is free for use at work
Starting today, it's no longer necessary to get a commercial license for using LM Studio at work. No need to fill out a form or contact us. You and your team can just use the app!
Yagil Burowski • July 8, 2025

MCP in LM Studio
New in LM Studio 0.3.17: Model Context Protocol (MCP) Host support. Connect MCP servers to the app and use them with local models.
LM Studio Team • June 25, 2025

Introducing the unified multi-modal MLX engine architecture in LM Studio
Leveraging mlx-lm and mlx-vlm to achieve unified multi-modal LLM inference in LM Studio's mlx-engine.
Matt Clayton • May 30, 2025

DeepSeek-R1-0528 you can run on your computer
Run the distilled DeepSeek R1 0528 model (8B) locally in LM Studio on Mac, Windows, or Linux with as little as 4GB of RAM. Supports tool use and reasoning.
LM Studio Team • May 29, 2025

LM Studio 0.3.16
Public Preview of community presets, automatic deletion of least recently used Runtime Extension Packs, and a way to use LLMs as text embedding models.
LM Studio Team • May 23, 2025
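
For the embedding piece, a rough sketch of what querying a locally loaded embedding model looks like through the OpenAI-compatible /v1/embeddings endpoint; the port and model identifier are assumptions:

```python
# Sketch: assumes an embedding-capable model is loaded in LM Studio and that the
# server is on its default port; the model identifier is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

result = client.embeddings.create(
    model="text-embedding-nomic-embed-text-v1.5",  # hypothetical local embedding model
    input=["LM Studio can serve embedding models locally."],
)
print(len(result.data[0].embedding))  # dimensionality of the returned vector
```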

LM Studio 0.3.15: RTX 50-series GPUs and improved tool use in the API
Support for CUDA 12, new system prompt editor UI, improved tool use API support, and preview of community presets.
LM Studio Team • April 24, 2025

LM Studio 0.3.14: Multi-GPU Controls 🎛️
Advanced controls for multi-GPU setups: enable/disable specific GPUs, choose allocation strategy, limit model weight to dedicated GPU memory, and more.
LM Studio Team • March 27, 2025

LM Studio 0.3.13: Google Gemma 3 Support
LM Studio 0.3.13 supports Google's latest multi-modal model, Gemma 3. Run it locally on your Mac, Windows, or Linux machine.
LM Studio Team • March 12, 2025

LM Studio 0.3.12
Bug fixes and document chunking speed improvements for RAG
LM Studio Team • March 7, 2025

LM Studio 0.3.11
Support for LM Studio SDK (Python, TS/JS), advanced Speculative Decoding settings, and bug fixes
LM Studio Team • March 3, 2025

Introducing lmstudio-python and lmstudio-js
Developer SDKs for Python and TypeScript are now available in a 1.0.0 release. A programmable toolkit for local AI software.
LM Studio Team • March 3, 2025
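
A rough flavor of the Python SDK, as a sketch rather than a definitive reference; defer to the SDK docs for the current API, and treat the model name as a placeholder:

```python
# Sketch: assumes the lmstudio package is installed (pip install lmstudio) and
# that LM Studio is running locally; the model identifier is a placeholder.
import lmstudio as lms

model = lms.llm("qwen/qwen3-4b")  # load (or attach to) a local model by identifier
result = model.respond("What can a local LLM SDK be used for?")
print(result)
```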

LM Studio 0.3.10: 🔮 Speculative Decoding
Inference speed up with Speculative Decoding for llama.cpp and MLX
LM Studio Team • February 18, 2025

LM Studio 0.3.9
Idle TTL, auto-update for runtimes, support for nested folders in HF repos, and separate reasoning_content in chat completion responses
LM Studio Team • January 30, 2025

DeepSeek R1: open source reasoning model
Run DeepSeek R1 models locally and offline on your computer
LM Studio Team • January 29, 2025

LM Studio 0.3.8
Thinking UI for DeepSeek R1, LaTeX rendering improvements, and bug fixes
LM Studio Team • January 21, 2025

LM Studio 0.3.7
DeepSeek R1 support and KV Cache quantization for llama.cpp models
LM Studio Team • January 20, 2025

LM Studio 0.3.6
Tool Calling API in beta, new installer / updater system, and support for Qwen2VL and QVQ (both GGUF and MLX)
LM Studio Team • January 6, 2025
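
The beta Tool Calling API follows the familiar OpenAI-style tools format; a minimal sketch against the local server, where the port, model identifier, and tool definition are illustrative assumptions:

```python
# Sketch: illustrative tool definition sent to LM Studio's OpenAI-compatible API.
# The port and model identifier are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

completion = client.chat.completions.create(
    model="qwen/qwen3-4b",  # hypothetical: any locally loaded model with tool support
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
print(completion.choices[0].message.tool_calls)  # tool call(s) proposed by the model
```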

Introducing venvstacks: layered Python virtual environments
An open source utility for packaging Python applications and all their dependencies into a portable, deterministic format based on Python's sitecustomize.py.
Alyssa Coghlan, Yagil Burowski • October 31, 2024

LM Studio 0.3.5
Headless mode, on-demand model loading, server auto-start, CLI command to download models from the terminal, and support for Pixtral with Apple MLX.
LM Studio Team • October 22, 2024

LM Studio 0.3.4 ships with Apple MLX
Super fast and efficient on-device LLM inferencing using MLX for Apple Silicon Macs.
Yagil Burowski, Alyssa Coghlan, Neil Mehta, Matt Clayton • October 8, 2024

LM Studio 0.3.3
Config presets are back! So are live token counts for user input and system prompt. Many bug fixes. Also several new app languages thanks to community contributors.
LM Studio Team • September 30, 2024

LM Studio 0.3.2
LM Studio 0.3.2 Release Notes
LM Studio Team • August 27, 2024

LM Studio 0.3.1
LM Studio 0.3.1 Release Notes
LM Studio Team • August 23, 2024

LM Studio 0.3.0
LM Studio 0.3.0 is here! Built-in (naïve) RAG, light theme, internationalization, Structured Outputs API, Serve on the network, and more.
LM Studio Team • August 22, 2024
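
For the Structured Outputs API, a rough sketch of requesting schema-constrained JSON through the OpenAI-compatible endpoint; the port, model identifier, and schema are illustrative assumptions:

```python
# Sketch: json_schema-style structured output against the local server.
# Port, model identifier, and schema are assumptions for illustration.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

schema = {
    "name": "book",
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "year": {"type": "integer"},
        },
        "required": ["title", "year"],
    },
}

completion = client.chat.completions.create(
    model="qwen/qwen3-4b",  # hypothetical locally loaded model
    messages=[{"role": "user", "content": "Invent a sci-fi book as JSON."}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(completion.choices[0].message.content)  # JSON string matching the schema
```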

Llama 3.1
Run Llama 3.1 locally on your computer with LM Studio.
LM Studio Team • July 23, 2024

Introducing lms: LM Studio's CLI
A command line tool for scripting and automating your local LLM workflows.
LM Studio Team • May 2, 2024
