45K Downloads
Description
Tiny text-only variant of Gemma 3: Google's latest open-weight model family
Last updated on May 17

README
Quantization-aware trained (QAT) versions of Google's image + text input models, built from the same research and technology used to create the Gemini models.
```shell
# Using the LM Studio CLI
lms get google/gemma-3-1B-it-QAT
```
Python (lmstudio SDK):

```python
import lmstudio as lms

model = lms.llm("google/gemma-3-1B-it-QAT")
result = model.respond("What is the meaning of life?")
print(result)
```
TypeScript (@lmstudio/sdk):

```typescript
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();
const model = await client.llm.model("google/gemma-3-1B-it-QAT");
const result = await model.respond("What is the meaning of life?");
console.info(result.content);
```
OpenAI-compatible REST API (curl):

```shell
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Explain quantum computing in simple terms." }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
```
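The same OpenAI-compatible endpoint can be called from plain Python without any SDK. A minimal sketch using only the standard library, assuming LM Studio's server is running on its default `localhost:1234` address (the `build_payload`/`ask` helper names are illustrative, not part of any API):

```python
import json
import urllib.request

# Default local LM Studio server endpoint (assumption: default host/port).
URL = "http://localhost:1234/v1/chat/completions"


def build_payload(user_message: str) -> dict:
    """Build the same chat-completions payload used in the curl example."""
    return {
        "model": "google/gemma-3-1B-it-QAT",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
        "max_tokens": 1024,
    }


def ask(user_message: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        URL,
        data=json.dumps(build_payload(user_message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example call; skipped gracefully if no local server is running.
try:
    print(ask("Explain quantum computing in simple terms."))
except OSError:
    pass
```

Because the server speaks the OpenAI chat-completions format, any OpenAI-compatible client library pointed at `http://localhost:1234/v1` should work the same way.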
Sources
The underlying model files this model uses