Description
A powerful 30B MoE coding model from Alibaba Qwen, joining its larger 480B counterpart
Last update
Updated on August 2

README
Specialized coding model with native support for 256K context length
MoE model with 3.3B activated parameters out of 30.5B total; 128 experts with 8 active per token (see the routing sketch below)
Excels at agentic coding, browser automation, and repository-scale code understanding
Advanced tool-calling capabilities, supporting platforms such as Qwen Code and CLINE (see the example request below)
Optimized for coding tasks with significant performance improvements in agentic workflows
Note: This model does not support thinking mode and will not generate <think></think> blocks
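To make the sparse activation concrete, here is a minimal top-k routing sketch in Python/NumPy. It is a generic MoE gate, not Qwen's actual implementation; the hidden size and expert shapes are illustrative placeholders, and only the 128-expert / 8-active split comes from the card above.

```python
import numpy as np

# Illustrative sizes; only NUM_EXPERTS and TOP_K match the model card.
HIDDEN, NUM_EXPERTS, TOP_K = 64, 128, 8

rng = np.random.default_rng(0)
router_w = rng.standard_normal((HIDDEN, NUM_EXPERTS)) * 0.02
# Each "expert" here is a single weight matrix standing in for a full FFN block.
experts = rng.standard_normal((NUM_EXPERTS, HIDDEN, HIDDEN)) * 0.02

def moe_layer(x):
    """Route one token vector through the top-k of 128 experts."""
    logits = x @ router_w                  # router scores, shape (128,)
    top = np.argsort(logits)[-TOP_K:]      # indices of the 8 selected experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()               # softmax over the selected 8 only
    # Only 8 of the 128 expert matrices touch this token, which is why
    # roughly 3.3B of the 30.5B parameters are active per forward pass.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(HIDDEN)
print(moe_layer(token).shape)  # (64,)
```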
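The tool-calling support can be exercised through any OpenAI-compatible endpoint. A minimal sketch, assuming a locally served copy of the model; the base URL, model tag, and the read_file tool are assumptions for illustration, not part of this card.

```python
from openai import OpenAI

# Assumed local OpenAI-compatible server; adjust base_url/model to your setup.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

tools = [{
    "type": "function",
    "function": {
        "name": "read_file",  # hypothetical tool, for illustration only
        "description": "Read a file from the repository.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen3-coder:30b",  # placeholder tag; use the tag you pulled
    messages=[{"role": "user", "content": "Open src/main.py and summarize it."}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:
    print(msg.tool_calls[0].function.name, msg.tool_calls[0].function.arguments)
else:
    print(msg.content)
```

Since thinking mode is unsupported, the response arrives either as plain content or as a structured tool call, with no <think></think> preamble to strip.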
Parameters
Custom configuration options included with this model
Sources
The underlying model files this model uses
Based on