---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- visual-question-answering
modalities:
- Video
- Text
configs:
- config_name: action_antonym
data_files:
- split: train
path: action_antonym/train-*
- config_name: action_count
data_files:
- split: train
path: action_count/train-*
- config_name: action_localization
data_files:
- split: train
path: action_localization/train-*
- config_name: action_sequence
data_files:
- split: train
path: action_sequence/train-*
- config_name: egocentric_sequence
data_files: json/egocentric_sequence.json
- config_name: moving_direction
data_files: json/moving_direction.json
- config_name: object_count
data_files: json/object_count.json
- config_name: object_shuffle
data_files: json/object_shuffle.json
- config_name: scene_transition
data_files: json/scene_transition.json
- config_name: unexpected_action
data_files: json/unexpected_action.json
dataset_info:
- config_name: action_antonym
features:
- name: video
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: candidates
sequence: string
- name: video_length
dtype: int64
splits:
- name: train
num_bytes: 51780
num_examples: 320
download_size: 6963
dataset_size: 51780
- config_name: action_count
features:
- name: video
dtype: string
- name: question
dtype: string
- name: candidates
sequence: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 72611
num_examples: 536
download_size: 6287
dataset_size: 72611
- config_name: action_localization
features:
- name: video
dtype: string
- name: question
dtype: string
- name: candidates
sequence: string
- name: answer
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
- name: accurate_start
dtype: float64
- name: accurate_end
dtype: float64
splits:
- name: train
num_bytes: 47290
num_examples: 160
download_size: 12358
dataset_size: 47290
- config_name: action_sequence
features:
- name: video
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: candidates
sequence: string
- name: question_id
dtype: string
- name: start
dtype: float64
- name: end
dtype: float64
splits:
- name: train
num_bytes: 67660
num_examples: 437
download_size: 13791
dataset_size: 67660
---
<div align="center">
<h1><a style="color:blue" href="https://daniel-cores.github.io/tvbench/">TVBench: Redesigning Video-Language Evaluation</a></h1>
[Daniel Cores](https://scholar.google.com/citations?user=pJqkUWgAAAAJ)\*,
[Michael Dorkenwald](https://scholar.google.com/citations?user=KY5nvLUAAAAJ)\*,
[Manuel Mucientes](https://scholar.google.com.vn/citations?user=raiz6p4AAAAJ),
[Cees G. M. Snoek](https://scholar.google.com/citations?user=0uKdbscAAAAJ),
[Yuki M. Asano](https://scholar.google.co.uk/citations?user=CdpLhlgAAAAJ)
*Equal contribution.
[![arXiv](https://img.shields.io/badge/cs.CV-2410.07752-b31b1b?logo=arxiv&logoColor=red)](https://arxiv.org/abs/2410.07752)
[![GitHub](https://img.shields.io/badge/GitHub-TVBench-blue?logo=github)](https://github.com/daniel-cores/tvbench)
[![Static Badge](https://img.shields.io/badge/website-TVBench-8A2BE2)](https://daniel-cores.github.io/tvbench/)
</div>
### Updates
- <h4 style="color:red">25 October 2024: Revised annotations for Action Sequence and removed duplicate samples for Action Sequence and Unexpected Action.</h4>
# TVBench
TVBench is a new benchmark specifically created to evaluate temporal understanding in video QA. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; and (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative.
We defined 10 temporally challenging tasks that require either repetition counting (Action Count), reasoning about properties of moving objects (Object Shuffle, Object Count, Moving Direction), temporal localization (Action Localization, Unexpected Action), temporal sequential ordering (Action Sequence, Scene Transition, Egocentric Sequence), or distinguishing between temporally hard Action Antonyms such as "Standing up" and "Sitting down".
In TVBench, state-of-the-art text-only, image-based, and most video-language models perform close to random chance, with only the latest strong temporal models, such as Tarsier, outperforming the random baseline. In contrast to MVBench, the performance of these temporal models drops significantly on TVBench when videos are reversed.
![image](figs/fig1.png)
### Dataset statistics:
The table below shows the number of samples and the average frame length for each task in TVBench.
<center>
<img src="figs/tvbench_stats.png" alt="drawing" width="400"/>
</center>
## Download
Questions and answers are provided as a JSON file for each task.
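Since each task is also exposed as a config in the metadata above, one convenient way to read the annotations is through the Hugging Face `datasets` library. This is a minimal sketch, not the official loading code; the repo id below is a placeholder and the field names follow the `dataset_info` section above.

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual Hugging Face dataset id for TVBench.
REPO_ID = "<user-or-org>/TVBench"

# Each task is exposed as a separate config; "action_antonym" is one of the ten tasks.
ds = load_dataset(REPO_ID, "action_antonym", split="train")

sample = ds[0]
print(sample["video"])       # relative path / file name of the video clip
print(sample["question"])    # multiple-choice question text
print(sample["candidates"])  # list of candidate answers
print(sample["answer"])      # ground-truth answer string
```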
Videos in TVBench are sourced from Perception Test, CLEVRER, STAR, MoVQA, Charades-STA, NTU RGB+D, FunQA and CSV. All videos are included in this repository, except for those from NTU RGB+D, which can be downloaded from the official [website](https://rose1.ntu.edu.sg/dataset/actionRecognition/). It is not necessary to download the full dataset, as NTU RGB+D provides a subset specifically for TVBench with the required videos. These videos are required by the Action Antonym task and should be stored in the `video/action_antonym` folder.
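The sketch below is a hypothetical sanity check, assuming the layout described above (per-task annotations under `json/` and clips under `video/<task>/`, with each record's `video` field holding the clip's file name); it lists the Action Antonym clips that still need to be copied in from NTU RGB+D.

```python
import json
from pathlib import Path

# Hypothetical local path to a clone of the TVBench repository.
root = Path("TVBench")

# Assumed layout: json/<task>.json annotations and video/<task>/ clips.
with open(root / "json" / "action_antonym.json") as f:
    annotations = json.load(f)

missing = []
for ann in annotations:
    clip = root / "video" / "action_antonym" / ann["video"]
    if not clip.exists():
        missing.append(clip)

# Any missing clips here are the NTU RGB+D videos that must be fetched
# from the official website and placed in video/action_antonym/.
print(f"{len(missing)} of {len(annotations)} clips are missing")
```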
## Leaderboard
![image](figs/sota.png)
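For reference, each task is scored as multiple-choice accuracy: a prediction is correct when the chosen candidate matches the annotated answer. A minimal scoring sketch follows; the aligned `predictions` list and the unweighted averaging over tasks are illustrative assumptions rather than part of the released evaluation code.

```python
def task_accuracy(annotations, predictions):
    """Multiple-choice accuracy for a single TVBench task.

    annotations: list of dicts with "candidates" and "answer" fields
                 (field names as in the dataset_info above).
    predictions: hypothetical list of chosen candidate strings, aligned
                 with the annotations.
    """
    assert len(annotations) == len(predictions)
    correct = sum(pred == ann["answer"] for pred, ann in zip(predictions, annotations))
    return correct / len(annotations)


def tvbench_score(per_task_accuracy):
    """Illustrative overall score: unweighted mean over the ten tasks."""
    return sum(per_task_accuracy.values()) / len(per_task_accuracy)
```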
# Citation
If you find this benchmark useful, please consider citing:
```
@misc{cores2024tvbench,
  author = {Daniel Cores and Michael Dorkenwald and Manuel Mucientes and Cees G. M. Snoek and Yuki M. Asano},
  title  = {TVBench: Redesigning Video-Language Evaluation},
  year   = {2024},
  eprint = {arXiv:2410.07752},
}
```