Forked from qwen/qwen3-coder-30b
Description
A powerful 30B mixture-of-experts (MoE) coding model from Alibaba's Qwen team, joining its larger 480B counterpart in the Qwen3-Coder family.
README
- Specialized coding model with native support for 256K context length
- MoE architecture with 3.3B activated parameters out of 30.5B total, using 128 experts with 8 active per token (see the arithmetic sketch after this list)
- Excels at agentic coding, browser automation, and repository-scale code understanding
- Advanced tool-calling capabilities supporting platforms like Qwen Code and CLINE (see the tool-calling sketch after this list)
- Optimized for coding tasks with significant performance improvements in agentic workflows

Note: This model does not support thinking mode and will not generate `<think></think>` blocks.
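The activated-parameter figure follows from the routing ratio: only 8 of 128 experts run per token. A back-of-the-envelope sketch of that arithmetic is below; it treats all experts as equal-sized and lumps attention, embeddings, and router weights into one shared term, which is a simplifying assumption rather than a published breakdown.

```python
# Back-of-the-envelope MoE parameter arithmetic (illustrative, not official).
total_params = 30.5e9   # total parameters (from the model card)
active_params = 3.3e9   # activated per token (from the model card)

# Fraction of expert parameters active per token: 8 routed of 128 experts.
expert_fraction_active = 8 / 128

# Shared (always-on) parameters s and expert parameters e satisfy:
#   s + e = total_params
#   s + expert_fraction_active * e = active_params
e = (total_params - active_params) / (1 - expert_fraction_active)
s = total_params - e
print(f"implied expert params: {e/1e9:.1f}B, shared params: {s/1e9:.1f}B")
# -> roughly 29.0B in experts and 1.5B shared, consistent with 3.3B active.
```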
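The tool-calling support can be exercised through any OpenAI-compatible endpoint. A minimal sketch follows, assuming a local server at `http://localhost:8000/v1` and an illustrative `get_weather` tool; the base URL, model name, and tool are assumptions for demonstration, not part of this model card.

```python
# Minimal tool-calling sketch against an OpenAI-compatible server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

# A single illustrative tool definition in OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="qwen3-coder-30b",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
)

# If the model chooses to call the tool, the structured call appears here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```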