Forked from google/gemma-3-27b
Optimized with Quantization Aware Training for improved 4-bit performance.
Supports a context length of 128k tokens, with a maximum output of 8192 tokens.
Multimodal: accepts image inputs, which are normalized to 896 x 896 resolution.
Gemma 3 models are well-suited for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning.
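The 4-bit quantization mentioned above also gives a quick way to reason about memory requirements. A back-of-envelope sketch (my assumption: roughly 0.5 bytes per parameter for the weights alone, excluding KV cache and runtime overhead):

```python
def quantized_weight_gb(n_params_billion: float, bits: int = 4) -> float:
    """Rough size of model weights in GB at a given bit width.

    Assumption: weights only -- KV cache, activations, and runtime
    overhead add to this, so real memory use is higher.
    """
    bytes_per_param = bits / 8  # 4-bit -> 0.5 bytes per parameter
    return n_params_billion * 1e9 * bytes_per_param / 1e9


# A 27B-parameter model at 4-bit: ~13.5 GB of weights
print(quantized_weight_gb(27))
```

This is why a 27B model quantized to 4-bit can fit on hardware that its 16-bit version (roughly 54 GB of weights) cannot.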