---
inference: false
pipeline_tag: image-text-to-text
---

# MoVA-8B Model Card

## Model details

**Model type:**
MoVA-8B is an open-source multimodal large language model (MLLM) that adaptively routes and fuses task-specific vision experts through a coarse-to-fine mechanism.

Vision Encoders: OpenAI-CLIP-336px, DINOv2-giant, Co-DETR-large, SAM-huge, Vary-base, Pix2Struct-large, Deplot-base, and BiomedCLIP-base.

Base LLM: meta-llama/Meta-Llama-3-8B-Instruct
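
For intuition, the toy sketch below illustrates the coarse-to-fine idea described above: a coarse routing step scores the vision experts and keeps only the most relevant ones, and a fine-grained step fuses their features before the visual tokens are handed to the LLM. The class, tensor names, and gating scheme here are illustrative assumptions rather than the actual MoVA modules; please refer to the paper and repository for the real architecture.

```python
# Illustrative sketch only -- not the actual MoVA implementation.
# Shapes, names, and the gating scheme are assumptions made for clarity.
import torch
import torch.nn as nn


class CoarseToFineFusion(nn.Module):
    """Toy coarse expert routing followed by fine-grained feature fusion."""

    def __init__(self, num_experts: int, dim: int):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # coarse relevance score per expert
        self.gate = nn.Linear(dim, num_experts)    # fine-grained fusion weights
        self.proj = nn.Linear(dim, dim)

    def forward(self, base_feat, expert_feats, top_k: int = 3):
        # base_feat:    (B, N, D) tokens from the base encoder (e.g. CLIP)
        # expert_feats: (B, E, N, D) tokens from E task-specific vision experts
        ctx = base_feat.mean(dim=1)                # (B, D) context summary
        scores = self.router(ctx)                  # (B, E) coarse routing scores
        keep = scores.topk(top_k, dim=-1).indices  # indices of the selected experts

        # Fine-grained gating restricted to the selected experts.
        mask = torch.zeros_like(scores).scatter_(1, keep, 1.0)
        weights = torch.softmax(
            self.gate(ctx).masked_fill(mask == 0, float("-inf")), dim=-1
        )                                          # (B, E), zero for dropped experts

        fused = (weights[:, :, None, None] * expert_feats).sum(dim=1)  # (B, N, D)
        return base_feat + self.proj(fused)        # fused visual tokens for the LLM


# Example with random features: 2 images, 8 experts, 576 tokens of width 1024.
if __name__ == "__main__":
    layer = CoarseToFineFusion(num_experts=8, dim=1024)
    out = layer(torch.randn(2, 576, 1024), torch.randn(2, 8, 576, 1024))
    print(out.shape)  # torch.Size([2, 576, 1024])
```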

**Paper or resources for more information:**
[[Paper](https://huggingface.co/papers/2404.13046)] [[Code](https://github.com/TempleX98/MoVA)]

## Usage
You can use this model directly with the code provided in our [[repository](https://github.com/TempleX98/MoVA)].
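
As a rough outline, inference typically looks like the sketch below. The module paths (`mova.model.builder`, `mova.mm_utils`), function names, and the prompt template are assumptions modeled on common LLaVA-style codebases, not the verified MoVA API; follow the repository's README and run scripts for the exact entry points.

```python
# Hypothetical usage sketch -- module paths, helpers, and the prompt template
# are assumptions (LLaVA-style), not the verified MoVA API.
# See https://github.com/TempleX98/MoVA for the supported inference scripts.
import torch
from PIL import Image

from mova.model.builder import load_pretrained_model              # assumed entry point
from mova.mm_utils import process_images, tokenizer_image_token   # assumed helpers

model_path = "path/to/MoVA-8B"  # placeholder: local checkpoint or Hub repo id

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path, model_base=None, model_name="MoVA-8B"
)

image = Image.open("chart.png").convert("RGB")
image_tensor = process_images([image], image_processor, model.config)

# Placeholder prompt; use the conversation template shipped with the repository.
prompt = "USER: <image>\nSummarize this chart. ASSISTANT:"
input_ids = tokenizer_image_token(prompt, tokenizer, return_tensors="pt")

with torch.inference_mode():
    output_ids = model.generate(
        input_ids.unsqueeze(0).to(model.device),
        images=image_tensor.to(model.device, dtype=torch.float16),
        max_new_tokens=256,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```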

## License
This project uses certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses, including but not limited to the OpenAI Terms of Use for the dataset and the specific licenses of the base language models for checkpoints trained on the dataset (e.g., the META LLAMA 3 COMMUNITY LICENSE AGREEMENT).

## Intended use
**Primary intended uses:**
The primary use of MoVA-8B is research on multimodal models and chatbots.

**Primary intended users:**
The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

## Training dataset
- 15M diverse visual instruction tuning samples for pre-training, including DataComp-1B, ShareGPT4V-PT, Objects365, and MMC-Instruction. Please refer to our paper for more details.
- 2M high-quality instruction samples for fine-tuning. We integrate several visual question answering datasets across various domains, such as DocVQA, ChartQA, InfographicVQA, AI2D, ST-VQA, TextVQA, SynthDoG-en, Geometry3K, PGPS9K, Geo170K, VQA-RAD, and SLAKE, into LLaVA-mix-665k. We also include comprehensive captions generated by GPT-4V.

## Evaluation dataset
We evaluate our model on a wide range of popular MLLM benchmarks.

### MultiModal Benchmark

| Name | LLM | \#Tokens | MME | MMBench | MMBench-CN | QBench | MathVista | MathVerse | POPE |
|---|---|---|---|---|---|---|---|---|---|
| MoVA-8B | Llama3-8B | 576 | 1595.8 / 347.5 | 75.3 | 67.7 | 70.8 | 37.7 | 21.4 | 89.3 |

### General & Text-oriented VQA

| Name | LLM | \#Tokens | VQAv2 | GQA | SQA | TextVQA | ChartQA | DocVQA | AI2D |
|---|---|---|---|---|---|---|---|---|---|
| MoVA-8B | Llama3-8B | 576 | 83.5 | 65.2 | 74.7 | 77.1 | 70.5 | 83.4 | 77.0 |

### Visual Grounding

| Name | LLM | \#Tokens | RefCOCO<br>(val) | RefCOCO<br>(testA) | RefCOCO<br>(testB) | RefCOCO+<br>(val) | RefCOCO+<br>(testA) | RefCOCO+<br>(testB) | RefCOCO&#8209;g<br>(val) | RefCOCO&#8209;g<br>(test) |
|---|---|---|---|---|---|---|---|---|---|---|
| MoVA-8B | Llama3-8B | 576 | 92.18 | 94.75 | 88.24 | 88.45 | 92.21 | 82.82 | 90.05 | 90.23 |