LM Studio as a Local LLM API Server

Run an LLM API server on localhost with LM Studio

You can serve local LLMs from LM Studio's Developer tab, either on localhost or on the network.

LM Studio's APIs can be used through its REST API, client libraries such as lmstudio-js and lmstudio-python, and OpenAI compatibility endpoints.

(Image: Load and serve LLMs from LM Studio)

Running the server

To run the server, open the Developer tab in LM Studio and toggle the "Start server" switch.

(Image: Start the LM Studio API Server)

Alternatively, you can use lms (LM Studio's CLI) to start the server from your terminal:

lms server start
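
You can also check on or stop the server from the same CLI. A quick sketch using the lms server subcommands:

# check whether the server is running
lms server status

# stop the server when you're done
lms server stop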

API options

As noted above, the running server can be reached through the OpenAI compatibility endpoints, LM Studio's REST API, or the lmstudio-js and lmstudio-python client libraries.
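
As a minimal sketch, here is a chat request against the OpenAI-compatible chat completions endpoint using curl, assuming the server is on its default port 1234; "your-model-identifier" is a placeholder for a model you have loaded:

# send a single chat message to the local server
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-identifier",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'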
