You can serve local LLMs from LM Studio's Developer tab, either on localhost
or on the network.
LM Studio's APIs can be used through the REST API, through client libraries such as lmstudio-js and lmstudio-python, or through OpenAI compatibility endpoints.
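Once the server is running (see the steps below), these endpoints are plain HTTP. As a minimal sketch, assuming the default address of http://localhost:1234, you can list the available models through the OpenAI-compatible /v1/models endpoint using only the Python standard library:

# Sketch: list models exposed by the local server. Assumes the server is
# running on the default port 1234; adjust the URL if you chose another
# port or serve on the network.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:1234/v1/models") as resp:
    models = json.load(resp)

for entry in models.get("data", []):
    print(entry["id"])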
Load and serve LLMs from LM Studio
To run the API server, go to the Developer tab in LM Studio and toggle the "Start server" switch.
(Screenshot: Start the LM Studio API Server)
Alternatively, you can use lms
(LM Studio's CLI) to start the server from your terminal:
lms server start
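With the server running, any OpenAI client can talk to it by pointing its base URL at the local server. A sketch using the openai Python package, again assuming the default port 1234; the model id is a placeholder for one returned by /v1/models:

# Sketch: call the OpenAI compatibility endpoints with the official
# OpenAI Python client. Assumes the local server is running on port 1234
# and that a model is loaded; "your-model-id" is a placeholder.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # point the client at LM Studio
    api_key="lm-studio",                  # any non-empty string; no real key is needed locally
)

response = client.chat.completions.create(
    model="your-model-id",  # placeholder: use an id returned by /v1/models
    messages=[{"role": "user", "content": "Say hello from a local LLM."}],
)
print(response.choices[0].message.content)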
Client libraries: lmstudio-js (TypeScript/JavaScript SDK) and lmstudio-python (Python SDK).
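A sketch of the same request through the lmstudio-python client library, assuming the lmstudio package is installed (pip install lmstudio); the model key is a placeholder for a model available locally:

# Sketch: chat with a local model via lmstudio-python. Assumes
# `pip install lmstudio` and a locally downloaded model; the key below
# is a placeholder (e.g. one listed by `lms ls`).
import lmstudio as lms

model = lms.llm("your-model-key")  # get a handle to the model, loading it if needed
result = model.respond("Say hello from a local LLM.")
print(result)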