|
--- |
|
license: cc-by-4.0 |
|
task_categories: |
|
- visual-question-answering |
|
modalities: |
|
- Video |
|
- Text |
|
configs: |
|
- config_name: action_antonym |
|
data_files: json/action_antonym.json |
|
- config_name: action_count |
|
data_files: json/action_count.json |
|
- config_name: action_localization |
|
data_files: json/action_localization.json |
|
- config_name: action_sequence |
|
data_files: json/action_sequence.json |
|
- config_name: egocentric_sequence |
|
data_files: json/egocentric_sequence.json |
|
- config_name: moving_direction |
|
data_files: json/moving_direction.json |
|
- config_name: object_count |
|
data_files: json/object_count.json |
|
- config_name: object_shuffle |
|
data_files: json/object_shuffle.json |
|
- config_name: scene_transition |
|
data_files: json/scene_transition.json |
|
- config_name: unexpected_action |
|
data_files: json/unexpected_action.json |
|
language: |
|
- en |
|
size_categories: |
|
- 1K<n<10K |
|
--- |
|
|
|
<div align="center"> |
|
|
|
<h1><a style="color:blue" href="https://daniel-cores.github.io/tvbench/">TVBench: Redesigning Video-Language Evaluation</a></h1> |
|
|
|
[Daniel Cores](https://scholar.google.com/citations?user=pJqkUWgAAAAJ)\*, |
|
[Michael Dorkenwald](https://scholar.google.com/citations?user=KY5nvLUAAAAJ)\*, |
|
[Manuel Mucientes](https://scholar.google.com.vn/citations?user=raiz6p4AAAAJ), |
|
[Cees G. M. Snoek](https://scholar.google.com/citations?user=0uKdbscAAAAJ), |
|
[Yuki M. Asano](https://scholar.google.co.uk/citations?user=CdpLhlgAAAAJ) |
|
|
|
*Equal contribution. |
|
[![arXiv](https://img.shields.io/badge/cs.CV-2410.07752-b31b1b?logo=arxiv&logoColor=red)](https://arxiv.org/abs/2410.07752) |
|
[![GitHub](https://img.shields.io/badge/GitHub-TVBench-blue?logo=github)](https://github.com/daniel-cores/tvbench) |
|
[![Static Badge](https://img.shields.io/badge/website-TVBench-8A2BE2)](https://daniel-cores.github.io/tvbench/) |
|
|
|
</div> |
|
|
|
### Updates |
|
- <h4 style="color:red">25 October 2024: Revised annotations for Action Sequence and removed duplicate samples for Action Sequence and Unexpected Action.</h4> |
|
|
|
# TVBench |
|
TVBench is a new benchmark specifically created to evaluate temporal understanding in video QA. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; and (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, and that the automatic evaluation process with LLMs is unreliable, making them an unsuitable alternative.
|
|
|
We defined 10 temporally challenging tasks that require repetition counting (Action Count), reasoning about properties of moving objects (Object Shuffle, Object Count, Moving Direction), temporal localization (Action Localization, Unexpected Action), temporal sequential ordering (Action Sequence, Scene Transition, Egocentric Sequence), or distinguishing between temporally hard action antonyms such as "Standing up" and "Sitting down".
|
|
|
In TVBench, state-of-the-art text-only, image-based, and most video-language models perform close to random chance, with only the latest strong temporal models, such as Tarsier, outperforming the random baseline. In contrast to MVBench, the performance of these temporal models on TVBench drops significantly when videos are reversed.
|
|
|
![image](figs/fig1.png) |
|
|
|
### Dataset statistics: |
|
The table below shows the number of samples and the average frame length for each task in TVBench. |
|
|
|
<center> |
|
<img src="figs/tvbench_stats.png" alt="TVBench dataset statistics" width="400"/>
|
</center> |
|
|
|
## Download |
|
Questions and answers are provided as a JSON file for each task.
|
|
|
Videos in TVBench are sourced from Perception Test, CLEVRER, STAR, MoVQA, Charades-STA, NTU RGB+D, FunQA and CSV. All videos are included in this repository, except for those from NTU RGB+D, which can be downloaded from the official [website](https://rose1.ntu.edu.sg/dataset/actionRecognition/). It is not necessary to download the full dataset, as NTU RGB+D provides a subset specifically for TVBench with the required videos. These videos are required by the Action Antonym task and should be stored in the `video/action_antonym` folder.
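As a minimal sketch of how the annotations can be read, the snippet below loads one task's JSON file from a local checkout and prints a few basic facts about it. The exact record schema is not documented in this card, so the code only inspects the first entry; the repository ID in the commented `datasets` alternative is a placeholder you would need to fill in.

```python
import json
from pathlib import Path

# Path to a local checkout of this dataset repository (adjust as needed).
REPO_ROOT = Path(".")

def load_task(task_name: str):
    """Load the raw question-answer annotations for one TVBench task."""
    json_path = REPO_ROOT / "json" / f"{task_name}.json"
    with json_path.open("r", encoding="utf-8") as f:
        return json.load(f)

if __name__ == "__main__":
    samples = load_task("action_antonym")
    print(f"action_antonym: {len(samples)} annotations")
    # The record schema is not specified here, so just inspect the first entry.
    first = samples[0] if isinstance(samples, list) else next(iter(samples.values()))
    print("Example record:", first)

# Alternatively, each task is exposed as a config of this dataset, so it can be
# loaded with the Hugging Face `datasets` library (repository ID is an assumption):
#   from datasets import load_dataset
#   ds = load_dataset("<tvbench-repo-id>", "action_antonym")
```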
|
|
|
## Leaderboard |
|
![image](figs/sota.png) |
|
|
|
## Citation
|
If you find this benchmark useful, please consider citing: |
|
``` |
|
|
|
@misc{cores2024tvbench, |
|
author = {Daniel Cores and Michael Dorkenwald and Manuel Mucientes and Cees G. M. Snoek and Yuki M. Asano}, |
|
title = {TVBench: Redesigning Video-Language Evaluation}, |
|
year = {2024}, |
|
eprint = {arXiv:2410.07752}, |
|
} |
|
|
|
``` |
|
|