# OpenAI Compatible Endpoints
Send requests to Responses, Chat Completions (text and images), Completions, and Embeddings endpoints.
## Supported endpoints

| Endpoint | Method | Docs |
|---|---|---|
| `/v1/models` | GET | Models |
| `/v1/responses` | POST | Responses |
| `/v1/chat/completions` | POST | Chat Completions |
| `/v1/embeddings` | POST | Embeddings |
| `/v1/completions` | POST | Completions |
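For instance, `/v1/models` takes a plain GET with no request body and returns an OpenAI-style model list. A minimal sketch using only the standard library (it assumes the server is running on the default port 1234; the helper names are illustrative, not part of the API):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # assumption: default LM Studio server port


def extract_model_ids(response_json):
    """Pull model identifiers out of an OpenAI-style list response."""
    # OpenAI-compatible list endpoints wrap items in a "data" array
    return [item["id"] for item in response_json.get("data", [])]


def list_models(base_url=BASE_URL):
    """GET /v1/models from the local server (requires LM Studio to be running)."""
    with urllib.request.urlopen(f"{base_url}/models") as resp:
        return extract_model_ids(json.loads(resp.read()))
```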
## Set the base url to point to LM Studio

You can reuse existing OpenAI clients (in Python, JS, C#, etc.) by switching the "base URL" property to point to your LM Studio server instead of OpenAI's servers.
Note: The following examples assume the server port is 1234
### Python Example

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1"
)

# ... the rest of your code ...
```
### Typescript Example

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: "http://localhost:1234/v1",
});

// ... the rest of your code ...
```
### cURL Example

```diff
- curl https://api.openai.com/v1/chat/completions \
+ curl http://localhost:1234/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
-     "model": "gpt-4o-mini",
+     "model": "use the model identifier from LM Studio here",
      "messages": [{"role": "user", "content": "Say this is a test!"}],
      "temperature": 0.7
    }'
```
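The same request can be sketched in Python using only the standard library (the model identifier is a placeholder, and the server must be running on the default port 1234; the helper names are illustrative):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # assumption: default LM Studio server port


def build_chat_payload(model, user_message, temperature=0.7):
    """Assemble the JSON body for POST /v1/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
    }


def chat(payload, base_url=BASE_URL):
    """POST the payload to the local server (requires LM Studio to be running)."""
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


payload = build_chat_payload(
    "use the model identifier from LM Studio here",  # placeholder
    "Say this is a test!",
)
```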
## Using Codex with LM Studio

Codex is supported because LM Studio implements the OpenAI-compatible `POST /v1/responses` endpoint.

See: Use Codex with LM Studio and Responses.
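As a sketch of the request shape the endpoint accepts (the field names follow OpenAI's Responses API; the model identifier is a placeholder, and the helper names are illustrative):

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # assumption: default LM Studio server port


def build_responses_payload(model, input_text):
    """Minimal body for POST /v1/responses in the OpenAI Responses API shape."""
    return {"model": model, "input": input_text}


def create_response(payload, base_url=BASE_URL):
    """POST to /v1/responses on the local server (requires LM Studio to be running)."""
    req = urllib.request.Request(
        f"{base_url}/responses",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


payload = build_responses_payload(
    "use the model identifier from LM Studio here",  # placeholder
    "Say this is a test!",
)
```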
Other OpenAI client libraries should have similar options to set the base URL.
If you're running into trouble, hop onto our Discord and enter the #🔨-developers channel.