167 pulls


google/gemma-3-12b

Min. 7GB
gemma3
12B

State-of-the-art image + text input models from Google, built from the same research and tech used to create the Gemini models

Tool use

GGUF

MLX

Last Updated: 1 day ago

README

gemma-3-12b-it GGUF by Google

Supports a context length of 128k tokens, with a maximum output of 8,192 tokens.
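
As a rough illustration of how these two limits interact (prompt tokens plus generated tokens must fit inside the context window), here is a minimal sketch; the ~4 characters-per-token ratio is an assumption for estimation only, not the model's actual tokenizer.

```python
# Sketch: budget prompt + output tokens against the model's stated limits.
# The characters-per-token ratio is a rough assumption, not a real tokenizer count.

CONTEXT_LENGTH = 128_000   # total tokens the model can attend to (prompt + output)
MAX_OUTPUT = 8_192         # maximum tokens generated in a single response
CHARS_PER_TOKEN = 4        # crude heuristic for English text

def fits_in_context(prompt: str, requested_output: int = MAX_OUTPUT) -> bool:
    """Return True if the estimated prompt tokens plus the requested output fit."""
    est_prompt_tokens = len(prompt) // CHARS_PER_TOKEN
    return est_prompt_tokens + min(requested_output, MAX_OUTPUT) <= CONTEXT_LENGTH

if __name__ == "__main__":
    prompt = "Summarize the attached report. " * 1000   # ~7,750 estimated tokens
    print(fits_in_context(prompt))                       # True: well under 128k
```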

Multimodal: accepts image input, with images normalized to 896 x 896 resolution.
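
A minimal sketch of sending an image to this model through LM Studio's local OpenAI-compatible server; the default address http://localhost:1234/v1, the `requests` package, the filename photo.jpg, and the model identifier are all assumptions. The runtime handles the 896 x 896 normalization, so the client only needs to base64-encode the file.

```python
import base64
import requests  # assumed to be installed; any HTTP client works

# Base64-encode a local image and send it as an OpenAI-style data URL.
with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "google/gemma-3-12b",  # assumes this identifier matches the loaded model
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
    "max_tokens": 512,
}

# LM Studio's local server exposes an OpenAI-compatible chat completions endpoint.
resp = requests.post("http://localhost:1234/v1/chat/completions", json=payload, timeout=300)
print(resp.json()["choices"][0]["message"]["content"])
```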

Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning.
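
For text-only tasks the same endpoint is used without the image part; the sketch below sends one hypothetical prompt per task family listed above, again assuming the default local server address, the `requests` package, and the model identifier.

```python
import requests  # assumed to be installed; any HTTP client works

# One example prompt per task family (hypothetical prompts for illustration).
prompts = {
    "question answering": "What resolution are images normalized to for Gemma 3?",
    "summarization": "Summarize the following paragraph in one sentence: ...",
    "reasoning": "If a train leaves at 3 pm and travels 60 km/h for 90 minutes, when does it arrive?",
}

for task, prompt in prompts.items():
    payload = {
        "model": "google/gemma-3-12b",  # assumed identifier of the loaded model
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 1024,             # well under the 8,192-token output cap
        "temperature": 0.7,
    }
    resp = requests.post("http://localhost:1234/v1/chat/completions", json=payload, timeout=300)
    print(f"--- {task} ---")
    print(resp.json()["choices"][0]["message"]["content"])
```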

Requires the latest (currently beta) llama.cpp runtime.

SOURCES

The underlying model files this model uses

When you download this model, LM Studio picks the source that will best suit your machine (you can override this)

CONFIG

Custom configuration options included with this model

No custom configuration.