OpenAI Compatibility Endpoints

Send requests to Responses, Chat Completions (text and images), Completions, and Embeddings endpoints.

Supported endpoints

Endpoint               Method  Docs
/v1/models             GET     Models
/v1/responses          POST    Responses
/v1/chat/completions   POST    Chat Completions
/v1/embeddings         POST    Embeddings
/v1/completions        POST    Completions
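
To sanity-check that the server is reachable, you can hit GET /v1/models directly. A minimal sketch using only Python's standard library, assuming the server is running locally on the default port 1234 (see the note below) and returns the OpenAI-style models list:

import json
import urllib.request

# GET /v1/models lists the models currently available in LM Studio
with urllib.request.urlopen("http://localhost:1234/v1/models") as resp:
    data = json.load(resp)

for model in data["data"]:
    print(model["id"])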

Set the base URL to point to LM Studio

You can reuse existing OpenAI clients (in Python, JS, C#, etc.) by switching the "base URL" property to point at your LM Studio instance instead of OpenAI's servers.

Note: The following examples assume the server port is 1234

Python Example

from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio"  # dummy value; the client library requires a key
)

# ... the rest of your code ...
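
With the client pointed at LM Studio, the rest of your code works as it would against OpenAI. A minimal sketch of a chat completion request; the model identifier below is a placeholder, so use the one shown in LM Studio:

completion = client.chat.completions.create(
    model="your-model-identifier-here",  # placeholder: copy the identifier from LM Studio
    messages=[{"role": "user", "content": "Say this is a test!"}],
    temperature=0.7,
)
print(completion.choices[0].message.content)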

Typescript Example

import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: "http://localhost:1234/v1",
  apiKey: "lm-studio" // dummy value; the client library requires a key
});

// ... the rest of your code ...

cURL Example

Compared with a request to api.openai.com, only the base URL and the model name change:

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "use the model identifier from LM Studio here",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
  }'
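
The embeddings endpoint works the same way through the Python client configured above. A minimal sketch; the model name is a placeholder for an embedding model you have loaded in LM Studio:

embedding = client.embeddings.create(
    model="your-embedding-model-here",  # placeholder: an embedding model loaded in LM Studio
    input="Some text to embed",
)
print(len(embedding.data[0].embedding))  # dimensionality of the returned vector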

Using Codex with LM Studio

Codex is supported because LM Studio implements the OpenAI-compatible POST /v1/responses endpoint.

See: Use Codex with LM Studio and Responses.
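
Because /v1/responses follows the OpenAI Responses API shape, a recent version of the openai Python package can call it through the same client configured above. A minimal sketch; the model identifier is again a placeholder:

response = client.responses.create(
    model="your-model-identifier-here",  # placeholder: copy the identifier from LM Studio
    input="Say this is a test!",
)
print(response.output_text)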


Other OpenAI client libraries should have similar options to set the base URL.

If you're running into trouble, hop onto our Discord and enter the #🔨-developers channel.
