Description
Always-thinking version of Qwen3-30B-A3B featuring significant improvements on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise.
Last update
Updated on July 30
README
This updated version of Qwen3-30B-A3B also improves on general capabilities such as instruction following, tool usage, text generation, and alignment with human preferences.
This thinking-only MoE model activates 3.3B of its 30.5B total parameters per token, routing each token through 8 of its 128 experts. Compared to the original Qwen3-30B-A3B, it delivers substantial gains in long-tail knowledge coverage across multiple languages and markedly better alignment with user preferences in subjective and open-ended tasks.
Supports a context length of up to 262,144 tokens.
It offers advanced agent capabilities and supports over 100 languages and dialects.
Note: This model supports only thinking mode. Specifying enable_thinking=True is not required.
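Because the model always runs in thinking mode, a chat request needs no special flag. Below is a minimal sketch of an OpenAI-compatible chat-completions payload; the endpoint URL and model identifier are assumptions, so adjust both to match your local runtime.

```python
import json

# Assumed identifiers -- replace with the values used by your own
# OpenAI-compatible server hosting Qwen3-30B-A3B-Thinking-2507.
MODEL_ID = "qwen3-30b-a3b-thinking-2507"  # assumption
ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumption

def build_request(prompt: str, max_tokens: int = 1024) -> dict:
    """Build a chat-completions payload.

    Note there is no enable_thinking flag: this model thinks
    unconditionally, so none is needed.
    """
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_request("Prove that the sum of two even numbers is even.")
print(json.dumps(payload, indent=2))
```

The payload can then be POSTed to the endpoint with any HTTP client; the reasoning trace appears in the response before the final answer.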
Parameters
Custom configuration options included with this model
Sources
The underlying model files this model uses