Model

embedding-gemma-300m

Public

Embedding model from Google, built from Gemma 3 (with T5Gemma initialization) and the same research and technology used to create Gemini models

Minimum system memory

300MB

Tags

300M
gemma-embedding

README

EmbeddingGemma

Open embedding model from Google, built from Gemma 3 (with T5Gemma initialization) and the same research and technology used to create Gemini models. With 300M parameters, it is state-of-the-art for its size.

Well-suited for search and retrieval, as well as classification, clustering, and semantic similarity tasks.
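
For the semantic similarity use case, a minimal sketch of how two embedding vectors can be compared with cosine similarity. The vectors below are placeholders; in practice they come from the model, for example via the endpoint shown further down:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder vectors; real embeddings would come from embedding-gemma-300m.
query_vec = np.array([0.12, -0.05, 0.33])
doc_vec = np.array([0.10, -0.02, 0.31])

print(cosine_similarity(query_vec, doc_vec))  # closer to 1.0 means more similar
```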

Trained with data in 100+ spoken languages. Supports a context length of 2048 tokens.

Use it via LM Studio's embedding APIs in Python or TypeScript, or through the OpenAI-compatible /v1/embeddings endpoint.
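
A minimal sketch of calling the OpenAI-compatible endpoint from Python, assuming a local LM Studio server on the default localhost:1234 port and that the model is loaded under the identifier embedding-gemma-300m (adjust both to your setup):

```python
from openai import OpenAI

# LM Studio's local server exposes an OpenAI-compatible API; the api_key is not checked locally.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.embeddings.create(
    model="embedding-gemma-300m",  # identifier assumed; use the name shown in LM Studio
    input=["What is the capital of France?", "Paris is the capital of France."],
)

vectors = [item.embedding for item in response.data]
print(len(vectors), "embeddings of dimension", len(vectors[0]))
```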

For more technical details, refer to the paper: EmbeddingGemma: Powerful and Lightweight Text Representations.

Sources

The underlying model files this model uses