Gemma 2 retains the same large vocabulary (roughly 256k tokens) as release 1.1, which tends to help with multilingual and coding proficiency.
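As a quick sanity check on the vocabulary claim, here is a minimal sketch that inspects the tokenizer's vocabulary size with Hugging Face transformers. It assumes you have transformers installed and access to the gated google/gemma-2-27b repository; the expected value in the comment is an assumption based on the published Gemma tokenizer.

```python
# Minimal sketch: inspect the Gemma 2 tokenizer's vocabulary size.
# Assumes access to the gated google/gemma-2-27b repo on Hugging Face.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2-27b")
print(tokenizer.vocab_size)  # expected: 256000 (assumption)
```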
Gemma 2 27B was trained on 13 trillion tokens, more than twice as many as Gemma 1.1 and roughly 60% more than the 9B model, using datasets similar to those of earlier Gemma releases.
For more details, see the Hugging Face blog post: https://huggingface.co/blog/gemma2
The underlying model files are provided in GGUF format.
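Since the files are GGUF, a common way to run them locally is llama-cpp-python. The sketch below is a minimal example under stated assumptions: the file name and quantization are placeholders, so substitute whichever GGUF file you actually downloaded.

```python
# Minimal sketch: load a GGUF build of Gemma 2 27B with llama-cpp-python.
# The model_path below is a hypothetical file name; adjust it to the
# quantization you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="gemma-2-27b-q4_k_m.gguf", n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize Gemma 2 in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

The chat template is read from the GGUF metadata, so no prompt formatting is needed beyond the messages list.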