General purpose reasoning and chat model trained from scratch by NVIDIA. Contains 30B total parameters with only 3.5B active at a time for low-latency MoE inference.
Features a reasoning toggle to enable or disable intermediate reasoning traces, with improved accuracy on complex queries when reasoning is enabled. Includes native agentic capabilities for tool use, making it suitable for AI agents, RAG systems, chatbots, and other AI-powered applications. Supports multiple languages including English, Spanish, French, German, Japanese, and Italian.
Supports a context length of 1M tokens.
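Since the model is advertised for tool use in agents and RAG systems, a request to it would typically follow the OpenAI-style function-calling convention. The sketch below builds such a request body; the model identifier, endpoint, and `search_docs` tool are hypothetical placeholders, not values from this model card.

```python
import json

# Hypothetical model identifier and local endpoint; adjust to your setup.
MODEL = "nvidia/nemotron-moe-30b"  # assumed name, not from the model card
BASE_URL = "http://localhost:1234/v1/chat/completions"

# An OpenAI-style tool definition the model could call during agentic use.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_docs",  # hypothetical tool for a RAG setup
            "description": "Search a document store and return matching passages.",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string", "description": "Search query"},
                },
                "required": ["query"],
            },
        },
    }
]

# Request body for a single chat turn with the tool made available.
payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Summarize the latest release notes."}
    ],
    "tools": tools,
}

# Serialize for an HTTP POST to BASE_URL (network call omitted here).
body = json.dumps(payload)
```

The model decides whether to emit a `tool_calls` response or answer directly; the application then executes the tool and feeds the result back as a `tool` message.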
Custom Fields
Special features defined by the model author
Enable Thinking: boolean (default=true)
Controls whether the model will think before replying.
Truncate Thinking History: boolean (default=false)
Controls whether thinking history will be truncated to save context space.
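To illustrate what truncating thinking history could mean in practice, here is a minimal sketch that strips reasoning traces from prior assistant turns before resending the conversation. It assumes traces are delimited by `<think>...</think>` tags; that format and the helper name are assumptions for illustration, not documented behavior of this model.

```python
import re

def truncate_thinking(messages):
    """Strip <think>...</think> traces from prior assistant messages.

    A sketch of how a "Truncate Thinking History" option might save
    context space: old reasoning traces are dropped, final answers kept.
    """
    cleaned = []
    for msg in messages:
        if msg["role"] == "assistant":
            # Remove the reasoning trace and any whitespace that follows it.
            content = re.sub(r"<think>.*?</think>\s*", "", msg["content"],
                             flags=re.DOTALL)
            cleaned.append({**msg, "content": content})
        else:
            cleaned.append(msg)
    return cleaned

history = [
    {"role": "user", "content": "Hi"},
    {"role": "assistant",
     "content": "<think>The user greeted me.</think>Hello! How can I help?"},
]
trimmed = truncate_thinking(history)
```

Only past turns are trimmed; the current turn still produces a fresh trace when Enable Thinking is on, so accuracy on the new query is unaffected.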
Parameters
Custom configuration options included with this model
Sources
The underlying model files this model uses
Based on