GGUF • gemma
Google's Gemma 3 27B model in a new quantization format that preserves bfloat16 quality.
Model info
Model: Gemma 3 27B QAT
Author:
Repository:
Arch: gemma
Parameters: 27B
Format: gguf
Size on disk: 16.43 GB
Download the model using lms, LM Studio's developer CLI:
lms get gemma-3-27b-it-qat
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma-3-27b-it-qat",
    "messages": [
      { "role": "system", "content": "Always answer in rhymes." },
      { "role": "user", "content": "Introduce yourself." }
    ],
    "temperature": 0.7,
    "max_tokens": -1,
    "stream": true
  }'
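For reference, here is a minimal sketch of the same request issued from TypeScript with fetch (not taken from this page). It assumes Node 18+ with built-in fetch, an ES module so top-level await works, and the LM Studio local server running on port 1234 with the model loaded; stream is set to false here so the reply arrives as one JSON body instead of server-sent events.

// Sketch: same chat completion request as the curl example above, via fetch.
const response = await fetch("http://localhost:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "gemma-3-27b-it-qat",
    messages: [
      { role: "system", content: "Always answer in rhymes." },
      { role: "user", content: "Introduce yourself." },
    ],
    temperature: 0.7,
    max_tokens: -1,
    stream: false, // single JSON response rather than a token stream
  }),
});
const data = await response.json();
// OpenAI-compatible response shape: the reply text lives in choices[0].message.content.
console.log(data.choices[0].message.content);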
Use lms log stream to see your prompts as they are sent to the LLM.

lmstudio.js - LM Studio SDK documentation (TypeScript)
lms log stream - Stream server logs
lms - LM Studio's CLI documentation
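As a rough sketch of the lmstudio.js route linked above, the same model could be driven through the @lmstudio/sdk package. The exact API shape is an assumption based on the SDK quick-start and may differ between versions, so check the linked documentation before relying on it.

import { LMStudioClient } from "@lmstudio/sdk";

// Sketch only (assumed SDK quick-start shape): client.llm.model() loads or
// reuses the named model, and respond() resolves to the finished prediction.
const client = new LMStudioClient();
const model = await client.llm.model("gemma-3-27b-it-qat");
const result = await model.respond("Introduce yourself.");
console.log(result.content);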