State-of-the-art image + text input models from Google, built from the same research and technology used to create the Gemini models.
To run the smallest gemma-3 model, you need at least 550 MB of RAM; the largest may require up to 16 GB.
gemma-3 models support tool use and vision input. They are available in GGUF and MLX formats.
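Since the models ship as GGUF files, one way to try a small variant locally is through llama-cpp-python. The sketch below is illustrative only: the repo id and quantization filename are placeholder assumptions, not official paths, and any recent Gemma 3 GGUF build should work the same way.

```python
# Minimal sketch: load a quantized Gemma 3 GGUF locally with llama-cpp-python.
# The repo_id and filename are placeholders -- substitute whichever GGUF build you use.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ggml-org/gemma-3-1b-it-GGUF",  # assumed/placeholder GGUF repository
    filename="*q4_k_m.gguf",                # 4-bit quant keeps the small model's footprint low
    n_ctx=8192,                             # context window to allocate (larger = more RAM)
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what the Gemma 3 family is in one sentence."}],
    max_tokens=128,
)
print(reply["choices"][0]["message"]["content"])
```

Smaller quantizations trade some quality for a lower memory footprint, which is how the modest RAM figures above are reached.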

Gemma is a family of lightweight, state-of-the-art open models from Google, built from the same research and technology used to create the Gemini models. Gemma 3 models are multimodal, handling text and image input and generating text output, with open weights for both pre-trained variants and instruction-tuned variants. Gemma 3 has a large, 128K context window, multilingual support in over 140 languages, and is available in more sizes than previous versions.
Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes it possible to deploy them in environments with limited resources such as laptops, desktops, or your own cloud infrastructure, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.
Technical report: https://storage.googleapis.com/deepmind-media/gemma/Gemma3Report.pdf
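As a concrete illustration of the image + text input, text output interface described above, here is a hedged sketch using the Hugging Face transformers integration (Gemma3ForConditionalGeneration). The model id and image URL are placeholders, and the chat-template call assumes a transformers release recent enough to include Gemma 3 support.

```python
# Sketch: multimodal (image + text -> text) inference via Hugging Face transformers.
# Assumes a transformers version with Gemma 3 support; model id and image URL are placeholders.
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "google/gemma-3-4b-it"  # placeholder instruction-tuned checkpoint
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # placeholder image
        {"type": "text", "text": "Describe what this image shows."},
    ],
}]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, not the echoed prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```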
Input:
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and 32K tokens for the 1B size

Output:
- Generated text in response to the input, such as an answer to a question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
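To make these figures concrete, the snippet below does the back-of-envelope context budgeting they imply: every image costs a flat 256 tokens regardless of its original size, so even image-heavy prompts leave most of the window for text. The 128,000 figure is a round approximation of the stated 128K limit.

```python
# Back-of-envelope context budgeting from the figures above.
INPUT_CONTEXT = 128_000   # approx. 128K-token input window (4B, 12B, 27B sizes)
IMAGE_TOKENS = 256        # each image is encoded to a fixed 256 tokens
MAX_OUTPUT = 8_192        # maximum generated tokens per response

def remaining_text_tokens(num_images: int) -> int:
    """Tokens left for text in the prompt after accounting for the images."""
    return INPUT_CONTEXT - num_images * IMAGE_TOKENS

print(remaining_text_tokens(10))   # 125440 -- ten images use only ~2% of the window
print(remaining_text_tokens(100))  # 102400 -- even a hundred images leave ample room for text
```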
These models were evaluated against a large collection of different datasets and metrics to cover different aspects of text generation. The results below are for the pre-trained (PT) checkpoints, grouped into reasoning and factuality, STEM and code, multilingual, and multimodal benchmarks.

Reasoning and factuality:
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|---|---|---|---|---|
| HellaSwag | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| BoolQ | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| PIQA | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| SocialIQA | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| TriviaQA | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| Natural Questions | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| ARC-c | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| ARC-e | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| WinoGrande | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| BIG-Bench Hard | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| DROP | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |

STEM and code:

| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|---|---|---|---|
| MMLU | 5-shot | 59.6 | 74.5 | 78.6 |
| MMLU (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| AGIEval | 3-5-shot | 42.1 | 57.4 | 66.2 |
| MATH | 4-shot | 24.2 | 43.3 | 50.0 |
| GSM8K | 8-shot | 38.4 | 71.0 | 82.6 |
| GPQA | 5-shot | 15.0 | 25.4 | 24.3 |
| MBPP | 3-shot | 46.0 | 60.4 | 65.6 |
| HumanEval | 0-shot | 36.0 | 45.7 | 48.8 |

Multilingual:

| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|---|---|---|---|
| MGSM | 2.04 | 34.7 | 64.3 | 74.3 |
| Global-MMLU-Lite | 24.9 | 57.0 | 69.4 | 75.7 |
| WMT24++ (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| FloRes | 29.5 | 39.2 | 46.0 | 48.8 |
| XQuAD (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| ECLeKTic | 4.69 | 11.0 | 17.2 | 24.4 |
| IndicGenBench | 41.4 | 57.2 | 61.7 | 63.4 |

Multimodal:

| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
|---|---|---|---|
| COCOcap | 102 | 111 | 116 |
| DocVQA (val) | 72.8 | 82.3 | 85.6 |
| InfoVQA (val) | 44.1 | 54.8 | 59.4 |
| MMMU (pt) | 39.2 | 50.3 | 56.1 |
| TextVQA (val) | 58.9 | 66.5 | 68.6 |
| RealWorldQA | 45.5 | 52.2 | 53.9 |
| ReMI | 27.3 | 38.5 | 44.8 |
| AI2D | 63.2 | 75.2 | 79.0 |
| ChartQA | 63.6 | 74.7 | 76.3 |
| VQAv2 | 63.9 | 71.2 | 72.9 |
| BLINK | 38.0 | 35.9 | 39.6 |
| OKVQA | 51.0 | 58.7 | 60.2 |
| TallyQA | 42.5 | 51.8 | 54.3 |
| SpatialSense VQA | 50.9 | 60.0 | 59.4 |
| CountBenchQA | 26.1 | 17.8 | 68.0 |

Gemma 3 is provided under the custom Gemma license.