
Insight-V-Summary

Model Summary

The Insight-V models are 7B-parameter models built on the Qwen2.5 language model, with a context window of 32K tokens.

Insight-V offers 1) a scalable data generation pipeline for long-chain, high-quality reasoning data, 2) a multi-agent system that decomposes visual reasoning tasks into reasoning and summarization, and 3) a two-stage training pipeline to enhance visual reasoning capabilities. Together, these contributions address key challenges in visual reasoning, providing a solid foundation for future research in MLLM reasoning.
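
As a rough sketch of how the reasoning/summarization decomposition can be wired together at inference time (the prompts and the `generate_response` helper below are illustrative assumptions, not the official Insight-V inference code, which should be taken from the Insight-V repository):

```python
# Illustrative sketch of the two-agent decomposition: a reasoning agent
# produces a long chain of thought, and the summarization agent (this
# checkpoint) distills it into a final answer. Prompts and the helper
# below are assumptions, not the released inference code.

def generate_response(model, tokenizer, image, prompt, max_new_tokens=1024):
    """Hypothetical helper: run one multimodal generation step."""
    raise NotImplementedError("Plug in the official Insight-V / Oryx inference code here.")

def insight_v_answer(reason_model, summary_model, tokenizer, image, question):
    # Stage 1: the reasoning agent generates a detailed reasoning trace.
    reasoning = generate_response(
        reason_model, tokenizer, image,
        f"Question: {question}\nThink step by step about the image.",
    )
    # Stage 2: the summarization agent reads the question plus the reasoning
    # trace and selectively summarizes it into the final answer.
    return generate_response(
        summary_model, tokenizer, image,
        f"Question: {question}\nReasoning: {reasoning}\nGive the final answer.",
    )
```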

Model Architecture

  • Architecture: Pre-trained Oryx-ViT + Qwen2.5-7B
  • Data: a mixture of 1.2M image-text pairs
  • Precision: BFloat16
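
The card does not specify a loading library. The following is a minimal sketch that assumes the checkpoint can be loaded through the Hugging Face transformers `AutoModelForCausalLM` interface with `trust_remote_code=True`; if that assumption does not hold, use the loader provided by the official Insight-V / Oryx codebase.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: the repository ships custom modeling code usable via
# trust_remote_code; otherwise load with the official Insight-V code.
model_id = "THUdyh/Insight-V-Summary"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the BFloat16 precision listed above
    device_map="auto",
    trust_remote_code=True,
)
```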

Hardware & Software

  • Hardware: 64 × NVIDIA Tesla A100 GPUs
  • Orchestration: HuggingFace Trainer
  • Code: PyTorch

Citation
