Forked from google/gemma-3-1b
Quantization-aware trained (QAT) versions of Google's image + text input models, built from the same research and technology used to create the Gemini models.
```bash
# Using LM Studio CLI
lms get google/gemma-3-1B-it-QAT
```
```python
import lmstudio as lms

model = lms.llm("google/gemma-3-1B-it-QAT")
result = model.respond("What is the meaning of life?")
print(result)
```
```typescript
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();
const model = await client.llm.model("google/gemma-3-1B-it-QAT");
const result = await model.respond("What is the meaning of life?");
console.info(result.content);
```
```bash
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Explain quantum computing in simple terms." }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
```
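The same request can also be issued from Python with only the standard library. A minimal sketch of the curl call above, assuming the LM Studio local server is running on `localhost:1234`:

```python
import json
import urllib.request

# Build the same chat-completions payload as the curl example above.
payload = {
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."},
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
}

request = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Sending the request requires a running LM Studio server:
# with urllib.request.urlopen(request) as response:
#     reply = json.load(response)
#     print(reply["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat-completions shape, the assistant's reply lives at `choices[0].message.content` in the response JSON.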