Welcome to LM Studio Docs!
Learn how to run Llama, DeepSeek, Qwen, Phi, and other LLMs locally with LM Studio.
Explore the docs
Developer
Build with LM Studio's local REST API, OpenAI-compatible APIs, and developer tooling.
Python SDK
Use lmstudio-python to load models, generate text, embed content, and build agents.
TypeScript SDK
Use lmstudio-js in Node.js or TypeScript apps for local model workflows and plugins.
CLI
Use lms for chat, model downloads, daemon management, server control, and publishing.
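A few common `lms` invocations are sketched below. The model name is an example; run `lms --help` to see the full, current command list for your installed version.

```shell
# Download a model from the catalog
lms get qwen2.5-7b-instruct

# List the models you have downloaded
lms ls

# Load a model into memory
lms load qwen2.5-7b-instruct

# Start the local API server
lms server start
```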
Integrations
Connect LM Studio to Codex, Claude Code, OpenClaw, MCP tools, and remote workflows.
LM Link
Set up LM Link to route local AI workloads across devices and preferred machines.
To get LM Studio, head over to the Downloads page and download an installer for your operating system.
LM Studio is available for macOS, Windows, and Linux.
What can I do with LM Studio?
- Download and run local LLMs like gpt-oss, Llama, or Qwen
- Use a simple and flexible chat interface
- Connect MCP servers and use them with local models
- Search for and download models (via Hugging Face 🤗)
- Serve local models on OpenAI-like endpoints, locally and on the network
- Manage your local models, prompts, and configurations
System requirements
LM Studio generally supports Apple Silicon Macs, x64/ARM64 Windows PCs, and x64 Linux PCs.
Consult the System Requirements page for more detailed information.
Run llama.cpp (GGUF) or MLX models
LM Studio supports running LLMs on Mac, Windows, and Linux using llama.cpp.
On Apple Silicon Macs, LM Studio also supports running LLMs using Apple's MLX.
To install or manage LM Runtimes, press ⌘ Shift R on Mac or Ctrl Shift R on Windows/Linux.
LM Studio as an MCP client
You can install MCP servers in LM Studio and use them with your local models.
See the docs for more: Use MCP server.
If you're developing an MCP server, check out the Add to LM Studio Button.
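MCP servers are configured in an `mcp.json` file, which you can edit from within LM Studio. A minimal sketch is shown below; the server name and package are hypothetical placeholders:

```json
{
  "mcpServers": {
    "my-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"]
    }
  }
}
```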
Run an LLM like gpt-oss, Llama, Qwen, Mistral, or DeepSeek R1 on your computer
To run an LLM on your computer you first need to download the model weights.
You can do this right within LM Studio! See Download an LLM for guidance.
Chat with documents entirely offline on your computer
You can attach documents to your chat messages and interact with them entirely offline, a technique known as Retrieval-Augmented Generation ("RAG").
Read more about how to use this feature in the Chat with Documents guide.
Run LM Studio without the GUI (llmster)
llmster is the headless version of LM Studio: no desktop app required. It's ideal for servers, CI environments, or any machine where you don't need a GUI.
Learn more: Headless Mode.
Use LM Studio's API from your own apps and scripts
LM Studio provides a REST API that you can use to interact with your local models from your own apps and scripts.
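By default, the local server listens at `http://localhost:1234` and exposes OpenAI-compatible routes such as `/v1/chat/completions`. The sketch below, using only the Python standard library, builds a chat request and sends it to a running server; the model name is an example, and `build_chat_request`/`chat` are illustrative helper names, not part of any SDK:

```python
import json
import urllib.request

# LM Studio's local server address (default port is 1234)
BASE_URL = "http://localhost:1234/v1"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }


def chat(model: str, prompt: str) -> str:
    """Send a chat completion request to the local LM Studio server.

    Requires the server to be running (e.g. started from the app
    or via the CLI)."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-style response shape: first choice's message content
    return body["choices"][0]["message"]["content"]
```

Because the endpoints mirror the OpenAI API shape, existing OpenAI client libraries can also be pointed at the local server by overriding their base URL.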
Community
Join the LM Studio community on Discord to ask questions, share knowledge, and get help from other users and the LM Studio team.