Formats: json
Languages: English
Libraries: Datasets, pandas
Commit d9cdef6 by dcores, merging parents 8a602dd and ecaa679

Commit message: annotations revised

Files changed (1): README.md (+20 -0)
@@ -31,8 +31,28 @@ language:
  size_categories:
  - 1K<n<10K
  ---
+ <<<<<<< HEAD
  #### Updates
  - 25 October 2024: Revised annotations for Action Sequence and removed duplicate samples for Action Sequence and Unexpected Action.
+ =======
+ <div align="center">
+
+ <h2><a href="https://daniel-cores.github.io/tvbench/">TVBench: Redesigning Video-Language Evaluation</a></h2>
+
+ [Daniel Cores](https://scholar.google.com/citations?user=pJqkUWgAAAAJ)\*,
+ [Michael Dorkenwald](https://scholar.google.com/citations?user=KY5nvLUAAAAJ)\*,
+ [Manuel Mucientes](https://scholar.google.com.vn/citations?user=raiz6p4AAAAJ),
+ [Cees G. M. Snoek](https://scholar.google.com/citations?user=0uKdbscAAAAJ),
+ [Yuki M. Asano](https://scholar.google.co.uk/citations?user=CdpLhlgAAAAJ)
+
+ *Equal contribution.
+ [![arXiv](https://img.shields.io/badge/cs.CV-2410.07752-b31b1b?logo=arxiv&logoColor=red)](https://arxiv.org/abs/2410.07752)
+ [![GitHub](https://img.shields.io/badge/GitHub-TVBench-blue?logo=github)](https://github.com/daniel-cores/tvbench)
+ [![Static Badge](https://img.shields.io/badge/website-TVBench-8A2BE2)](https://daniel-cores.github.io/tvbench/)
+
+ </div>
+
+ >>>>>>> ecaa679bce8fb253643d5077f2d2fa5b7854146d
 
  # TVBench
  TVBench is a new benchmark specifically created to evaluate temporal understanding in video QA. We identified three main issues in existing datasets: (i) static information from single frames is often sufficient to solve the tasks; (ii) the text of the questions and candidate answers is overly informative, allowing models to answer correctly without relying on any visual input; and (iii) world knowledge alone can answer many of the questions, making the benchmarks a test of knowledge replication rather than visual reasoning. In addition, we found that open-ended question-answering benchmarks for video understanding suffer from similar issues, while the automatic evaluation process with LLMs is unreliable, making it an unsuitable alternative.
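The card metadata above lists JSON files and the Datasets and pandas libraries; the sketch below shows how one might load and inspect the annotations with those libraries. The repository id `dcores/tvbench` and the absence of a config name are assumptions for illustration, not details confirmed by this page.

```python
# Minimal sketch: load the TVBench annotations with the libraries listed in
# the card metadata (datasets, pandas). The repository id below is a
# hypothetical placeholder; substitute the actual TVBench dataset id on the Hub.
from datasets import load_dataset

ds = load_dataset("dcores/tvbench")   # hypothetical repo id
split = ds[next(iter(ds.keys()))]     # pick the first available split
df = split.to_pandas()                # inspect the annotations as a DataFrame
print(df.head())
```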