FunctionGemma is a lightweight, open model from Google, built as a foundation for creating your own specialized function calling models.
To run the smallest FunctionGemma, you need at least 786 MB of RAM.
FunctionGemma models support tool use. They are available in GGUF and MLX formats.

FunctionGemma is a lightweight, open model from Google, built as a foundation for creating your own specialized function calling models. FunctionGemma is not intended for use as a direct dialogue model; as is typical of models this size, it is designed to be highly performant after further fine-tuning.
This model is intended for further fine-tuning. Once fine-tuned for your task, it is recommended to serve it via LM Studio's API, as sketched below.
Note: this model is not suitable for chat use cases.
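Assuming the fine-tuned model is loaded into LM Studio's local server (which exposes an OpenAI-compatible endpoint, by default at `http://localhost:1234/v1`), a request might look like the minimal sketch below. The model identifier, the example tool, and the dummy API key are placeholders for whatever you configure.

```python
# Minimal sketch: calling a fine-tuned FunctionGemma model through
# LM Studio's OpenAI-compatible local server.
from openai import OpenAI

# LM Studio's local server does not require a real API key.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Placeholder tool definition; replace with the functions your application exposes.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="my-finetuned-functiongemma",  # placeholder model identifier
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)

# The model is expected to reply with a structured function call rather than free-form chat.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```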
Input: Text, such as a user request or question, together with definitions of the functions the model may call.
Output: A structured function call (function name and arguments) for your application to execute.
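As a purely illustrative sketch of that contract, the exchange conceptually looks like the following; the concrete prompt and output formats are determined by how you fine-tune and serve the model, not fixed by FunctionGemma itself.

```python
# Illustrative only: the names, fields, and formatting below are hypothetical.

# Input: a natural-language request plus declarations of the callable functions.
request = {
    "query": "Turn the living room lights off at 10pm",
    "functions": [
        {
            "name": "schedule_lights",
            "description": "Schedule a lighting change in a room.",
            "parameters": {"room": "string", "state": "string", "time": "string"},
        }
    ],
}

# Output: a structured call that your application can execute directly.
expected_call = {
    "name": "schedule_lights",
    "arguments": {"room": "living room", "state": "off", "time": "22:00"},
}
```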
FunctionGemma is the bridge between natural language and software execution. It is the right tool if you want to translate natural-language requests into structured function calls that your own application can execute.
FunctionGemma is provided under the Gemma license.