Model

gemma-3-1b

Public

A tiny, text-only variant of Gemma 3, Google's latest open-weight model family

Minimum system memory

755MB
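That figure is plausible from a rough back-of-envelope estimate (an illustrative sketch, not LM Studio's actual accounting): a 1B-parameter model quantized to roughly 4 bits per weight needs about 0.5 GB for the weights alone, with the KV cache and runtime overhead making up the remainder.

```python
def quantized_weight_bytes(params: int, bits_per_weight: float) -> float:
    """Approximate memory for the model weights alone.

    Ignores KV cache, activations, and runtime overhead, which add
    on top of this figure.
    """
    return params * bits_per_weight / 8


# ~1e9 parameters at ~4 bits per weight (QAT int4) ≈ 0.5 GB of weights
gb = quantized_weight_bytes(1_000_000_000, 4) / 1e9
print(f"{gb:.2f} GB")
```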

Tags

1B
gemma3

README

Quantization-aware trained (QAT) versions of Google's Gemma 3 image + text input models, built from the same research and technology used to create the Gemini models

Installation with LM Studio CLI

# Download the model with the LM Studio CLI
lms get google/gemma-3-1B-it-QAT

Python Example

import lmstudio as lms

# Load the model (downloading it first if necessary) and send a prompt
model = lms.llm("google/gemma-3-1B-it-QAT")
result = model.respond("What is the meaning of life?")

print(result)

JavaScript Example

import { LMStudioClient } from "@lmstudio/sdk";

// Connect to the local LM Studio instance
const client = new LMStudioClient();

// Load the model (downloading it first if necessary) and send a prompt
const model = await client.llm.model("google/gemma-3-1B-it-QAT");
const result = await model.respond("What is the meaning of life?");

console.info(result.content);

API Example

curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
      { "role": "system", "content": "You are a helpful assistant." },
      { "role": "user", "content": "Explain quantum computing in simple terms." }
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'
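Because the local server exposes an OpenAI-compatible endpoint, the same request works from any HTTP client. A minimal Python sketch using only the standard library (assuming the LM Studio server is running on its default port, 1234):

```python
import json
import urllib.request

# Same chat-completions payload as the curl example above
payload = {
    "model": "google/gemma-3-1B-it-QAT",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."},
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
}


def chat(payload: dict, url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """POST the payload to the local server and return the assistant's reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(chat(payload))
```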

Sources

The underlying model files this model uses