Tiny text-only variant of Gemma 3: Google's latest open-weight model family
Quantization-aware trained (QAT) versions of Google's image + text input Gemma 3 models, built from the same research and technology used to create the Gemini models
```shell
# Using LM Studio CLI
lms get google/gemma-3-1B-it-QAT
```
```python
import lmstudio as lms

model = lms.llm("google/gemma-3-1B-it-QAT")
result = model.respond("What is the meaning of life?")
print(result)
```
```typescript
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();
const model = await client.llm.model("google/gemma-3-1B-it-QAT");
const result = await model.respond("What is the meaning of life?");
console.info(result.content);
```
```shell
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Explain quantum computing in simple terms." }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
```
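The same OpenAI-compatible request can be assembled with Python's standard library alone. A minimal sketch, assuming a local LM Studio server listening on the default port 1234 as in the curl example (the request is built but the actual send is left commented out, since it needs a running server):

```python
import json
import urllib.request

# Chat completions payload mirroring the curl example above.
payload = {
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."},
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
}

request = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With LM Studio's server running, send the request and read the reply:
# with urllib.request.urlopen(request) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client library can target the same endpoint by pointing its base URL at `http://localhost:1234/v1`.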
The underlying model files this model uses
When you download this model, LM Studio picks the source best suited to your machine (you can override this)
Custom configuration options included with this model