Updated on November 28. Forked from qwen/qwen2.5-vl-7b.
README
Qwen2.5-VL-7B-Instruct is a vision-language model that processes images, text, and video, supporting structured outputs and visual localization. It can analyze charts, graphics, and layouts, and is capable of temporal reasoning over long video sequences.
The model is intended for document analysis, event detection, and extracting structured data from visual content. Outputs include bounding boxes, points, and structured JSON.
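As a concrete illustration of the structured-output use case, here is a minimal sketch that queries a locally served GGUF build through the Ollama Python client and asks for bounding boxes as JSON. The local model tag `qwen2.5-vl-7b`, the image path, the prompt, and the JSON shape are assumptions chosen for illustration, not part of this model card.

```python
# Minimal sketch: ask the vision-language model to localize charts in a page
# scan and return the result as structured JSON via a local Ollama server.
import json
import ollama

response = ollama.chat(
    model="qwen2.5-vl-7b",   # hypothetical local tag for this GGUF build
    format="json",           # constrain the reply to valid JSON
    messages=[
        {
            "role": "user",
            "content": (
                'Find every chart in this page scan and return JSON of the form '
                '{"charts": [{"label": str, "bbox": [x1, y1, x2, y2]}]}.'
            ),
            "images": ["page_scan.png"],  # local image file to analyze
        }
    ],
)

# The reply content is a JSON string; parse it and read out the bounding boxes.
result = json.loads(response["message"]["content"])
for chart in result.get("charts", []):
    print(chart["label"], chart["bbox"])
```

The same pattern extends to the other advertised outputs (points, key-value extraction from documents) by changing the prompt and the requested JSON shape.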
Based on: GGUF