59 pulls
google/gemma-3-1b
Tiny text-only variant of Gemma 3: Google's latest open-weight model family
GGUF
MLX
Last Updated 5 hours ago
README
Quantization-aware trained (QAT) versions of Google's image + text input models, built from the same research and technology used to create the Gemini models.
```shell
# Using LM Studio CLI
lms get google/gemma-3-1B-it-QAT
```
```python
import lmstudio as lms

model = lms.llm("google/gemma-3-1B-it-QAT")
result = model.respond("What is the meaning of life?")
print(result)
```
```typescript
import { LMStudioClient } from "@lmstudio/sdk";

const client = new LMStudioClient();
const model = await client.llm.model("google/gemma-3-1B-it-QAT");
const result = await model.respond("What is the meaning of life?");
console.info(result.content);
```
```shell
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Explain quantum computing in simple terms." }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
```
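Because the local server exposes an OpenAI-compatible `/v1/chat/completions` endpoint, the same request can also be issued from Python with only the standard library. A minimal sketch (the helper names are illustrative, and the send step assumes the LM Studio server is running on its default port, 1234):

```python
import json
import urllib.request


def build_chat_request(model, user_prompt,
                       system_prompt="You are a helpful assistant.",
                       temperature=0.7, max_tokens=1024):
    """Build the JSON payload for the OpenAI-compatible chat endpoint."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }


def send_chat_request(payload, base_url="http://localhost:1234/v1"):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


payload = build_chat_request("google/gemma-3-1B-it-QAT",
                             "Explain quantum computing in simple terms.")
# send_chat_request(payload)  # requires the local server to be running
```

This mirrors the `curl` example above; any OpenAI-compatible client library could be pointed at the same base URL instead.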
SOURCES
The underlying model files this model uses
When you download this model, LM Studio picks the source that best suits your machine (you can override this)
CONFIG
Custom configuration options included with this model