LM Studio
Discover, download, and run local LLMs

Supported Architectures Include:

Llama 3.2
Mistral
Phi 3.1
Gemma 2
DeepSeek 2.5
Qwen 2.5

With LM Studio, you can ...

🤖 • Run LLMs on your laptop, entirely offline

📚 • Chat with your local documents (new in 0.3)

👾 • Use models through the in-app Chat UI or an OpenAI-compatible local server (see the sketch after this list)

📂 • Download any compatible model files from Hugging Face 🤗 repositories

🔭 • Discover new & noteworthy LLMs right inside the app's Discover page
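Because the local server speaks the OpenAI API, existing OpenAI client code can point at it by swapping the base URL. Here is a minimal sketch using the official `openai` Python package, assuming the server has been started in the app and is listening on its default port (1234); the model name is illustrative and stands in for whichever model you have loaded:

```python
# Minimal sketch: chat with a locally loaded model through LM Studio's
# OpenAI-compatible server. Assumes the server is running on the default
# port 1234; the model identifier below is illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",  # placeholder; the local server does not verify keys
)

response = client.chat.completions.create(
    model="llama-3.2-1b-instruct",  # illustrative; use your loaded model's name
    messages=[{"role": "user", "content": "Say hello from a local LLM."}],
)
print(response.choices[0].message.content)
```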

LM Studio supports any GGUF-format model on Hugging Face (Llama, Mistral, Phi, Gemma, StarCoder, and more).
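You can fetch GGUF files through the app's Discover page, or pull one yourself. A minimal sketch using the `huggingface_hub` Python package, where the repository and filename are illustrative examples of a quantized GGUF model:

```python
# Minimal sketch: download a single GGUF file from a Hugging Face repo.
# The repo_id and filename below are illustrative; any GGUF file works.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q4_K_M.gguf",
)
print(f"Saved to {path}")  # import this file into LM Studio to run it
```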

Minimum requirements: M1/M2/M3/M4 Mac, or a Windows / Linux PC with a processor that supports AVX2.
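Not sure whether your processor supports AVX2? On Linux you can check the CPU flags directly (a minimal sketch; Apple Silicon Macs are covered regardless):

```python
# Minimal sketch: look for the AVX2 flag in /proc/cpuinfo (Linux only).
with open("/proc/cpuinfo") as f:
    flags = f.read()

print("AVX2 supported" if "avx2" in flags else "AVX2 not found")
```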

Made possible thanks to the llama.cpp project.

We are expanding our team. See our careers page.

Consult the Technical Documentation at https://lmstudio.ai/docs.

Frequently Asked Questions

TL;DR: The app does not collect data or monitor your actions. Your data stays local on your machine. It's free for personal use. For business use, please get in touch.

Does LM Studio collect any data?

No. One of the main reasons for using a local LLM is privacy, and LM Studio is designed for that. Your data remains private and local to your machine.

See Documentation > Offline Operation for more.

Can I use LM Studio at work?

We'd love to enable you. Please fill out the LM Studio @ Work request form and we will get back to you as soon as we can.

What are the minimum hardware / software requirements?

Visit the System Requirements page for the most up-to-date information.

Are you hiring?

See our careers page for open positions.