Vision Input
The latest generation vision-language MoE model in the Qwen series with comprehensive upgrades to visual perception, spatial reasoning, and video understanding.
Delivers strong vision-language performance across diverse tasks, including document analysis, visual question answering, video understanding, and agentic interactions. The MoE architecture activates only a subset of parameters per token, keeping inference efficient while maintaining output quality. Suitable for deployment on Apple Silicon via MLX quantization.
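As a rough illustration of local use, the sketch below sends an image to the model through LM Studio's OpenAI-compatible local server (assumed to be running on the default port 1234). The model identifier and image path are placeholders, not the exact names used by this download.

```python
# Minimal sketch: query a locally loaded vision-language model through
# LM Studio's OpenAI-compatible server. Assumes the server is running on
# the default port (1234) and this model is loaded; the model name and
# image path below are placeholders.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# Encode a local image as a base64 data URI for the vision input.
with open("document_page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="qwen-vl-moe",  # placeholder; use the identifier shown in LM Studio
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Summarize the key figures in this document."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```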
The underlying model files this model uses
When you download this model, LM Studio picks the source that will best suit your machine (you can override this)
Custom configuration options included with this model