`POST /v1/chat/completions`

Use `lms log stream` to inspect the exact input the model receives.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

completion = client.chat.completions.create(
    model="model-identifier",
    messages=[
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."},
    ],
    temperature=0.7,
)

print(completion.choices[0].message)
```
See https://platform.openai.com/docs/api-reference/chat/create for parameter semantics.
Supported request parameters:

- `model`
- `top_p`
- `top_k`
- `messages`
- `temperature`
- `max_tokens`
- `stream`
- `stop`
- `presence_penalty`
- `frequency_penalty`
- `logit_bias`
- `repeat_penalty`
- `seed`
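As a sketch of how these parameters fit together in a raw request body (field names follow the OpenAI chat-completions schema; the `build_chat_request` helper and the `-1` "no limit" convention for `max_tokens` are illustrative assumptions, not part of the official client):

```python
import json

def build_chat_request(model, messages, *, temperature=0.7, max_tokens=-1,
                       stream=False, stop=None):
    """Assemble a chat-completions request body; omitted/None fields are left out."""
    body = {
        "model": model,
        "messages": messages,
        "temperature": temperature,
        "max_tokens": max_tokens,  # assumption: -1 means "generate until the model stops"
        "stream": stream,
    }
    if stop is not None:
        body["stop"] = stop  # string or list of strings that end generation
    return body

payload = build_chat_request(
    "model-identifier",
    [
        {"role": "system", "content": "Always answer in rhymes."},
        {"role": "user", "content": "Introduce yourself."},
    ],
)
print(json.dumps(payload, indent=2))
```

A body like this could be POSTed directly to `http://localhost:1234/v1/chat/completions` with any HTTP client; the `openai` Python package shown above builds the same payload for you.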