mistral-nemo


A 12B-parameter model from Mistral AI, NeMo offers a long 128k-token context window, strong world knowledge, and function calling for developers.


Minimum system memory

7GB

Tags

12B
mistral

Last updated

Updated on May 24 by lmmy

README

Mistral Nemo Instruct 2407 by mistralai

Mistral Nemo was trained with a 128k-token context window; longer contexts are supported, but output quality may degrade.
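
As an illustration of the context note above, here is a minimal sketch of loading a GGUF build of the model with llama-cpp-python and requesting a large context window. The file name and the exact context size are assumptions for illustration, not values from this card.

```python
# Minimal sketch: loading a GGUF build of Mistral Nemo with llama-cpp-python.
# The model path below is a hypothetical local file name.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Nemo-Instruct-2407-Q4_K_M.gguf",  # assumed local file
    n_ctx=128000,  # request the full trained context; going beyond may reduce quality
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the Mistral NeMo release."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
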

The model performs strongly across a range of benchmarks, including multilingual evaluations.

For more details, check the blog post here: https://mistral.ai/news/mistral-nemo/
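
Since the card highlights function calling, the sketch below shows one way to exercise it through an OpenAI-compatible chat endpoint. The base URL, API key, model identifier, and the `get_weather` tool are all assumptions for illustration, not details from this card.

```python
# Sketch: function calling via an OpenAI-compatible endpoint (e.g., a local server).
# The base_url, api_key, model name, and tool schema are hypothetical.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistral-nemo",  # assumed model identifier on the server
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chooses to call the tool, the call shows up here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```
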

Sources

The underlying model files this model uses