Description
Mistral NeMo is a 12B-parameter model from Mistral AI. It offers a long 128k-token context window, strong world knowledge, and function calling for developers.
Stats
31.7K Downloads
6 stars
README
Mistral NeMo was trained with a context window of up to 128k tokens; longer contexts are supported, though output quality may degrade.
The model performs strongly across a broad set of benchmarks, including multilingual tasks.
For more details, check the blog post here: https://mistral.ai/news/mistral-nemo/
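The function calling mentioned in the description is typically driven by a tool schema supplied by the client. A minimal sketch, assuming an OpenAI-style chat API of the kind Mistral's endpoints accept; the `get_weather` tool and the `mistral-nemo` model identifier are hypothetical placeholders, not part of this page:

```python
import json

# Hypothetical tool definition in the OpenAI-style "tools" schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical function, for illustration only
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

# A chat request payload pairing the tool schema with a user message.
payload = {
    "model": "mistral-nemo",  # assumed model identifier
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

print(json.dumps(payload, indent=2))
```

When the model decides to call the tool, the response carries the function name and JSON arguments for the client to execute.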
Sources
The underlying model files that this model uses
Based on
GGUF