Updated 7 days ago. Forked from mistralai/ministral-3-3b.
The smallest model in the Ministral 3 family, combining a 3.4B language model with a 0.4B vision encoder for efficient edge deployment.
Supports context length of 256k tokens.
Vision-enabled for image analysis and multimodal tasks.
Multilingual support across dozens of languages including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, and more.
Native function calling and JSON output generation.
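Native function calling means the model can emit a structured tool call instead of free-form text. A minimal sketch of what that looks like on the client side, assuming the JSON-schema tool format commonly used for function calling (the `get_weather` tool and the response shape here are illustrative, not part of this model's documentation):

```python
import json

# Hypothetical tool definition in the common JSON-schema style for
# function calling; the name and fields are examples only.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# A model with native function calling returns a structured tool call
# rather than prose; this example payload is illustrative.
raw_response = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

call = json.loads(raw_response)
print(call["name"], call["arguments"]["city"])  # → get_weather Paris
```

The same JSON-output capability lets the model return data matching a schema you supply, so downstream code can parse responses with `json.loads` instead of scraping text.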
Apache 2.0 License