---
license: mit
task_categories:
- visual-question-answering
- question-answering
language:
- en
pretty_name: Multi-source Video Captioning
size_categories:
- 1K<n<10K
---

# Multi-source Video Captioning (MSVC) Dataset Card

## Dataset details

**Dataset type:**
MSVC is a collection of video captioning data, constructed to enable a robust and thorough evaluation of the video-captioning capabilities of Video-LLMs.

**Dataset detail:**
MSVC is introduced to address limitations in existing video captioning benchmarks: it samples 500 videos with human-annotated captions from [MSVD](https://www.aclweb.org/anthology/P11-1020/), [MSRVTT](http://openaccess.thecvf.com/content_cvpr_2016/papers/Xu_MSR-VTT_A_Large_CVPR_2016_paper.pdf), and [VATEX](http://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_VaTeX_A_Large-Scale_High-Quality_Multilingual_Dataset_for_Video-and-Language_Research_ICCV_2019_paper.pdf), ensuring diverse scenarios and domains.
Traditional evaluation metrics rely on exact-match statistics between generated and ground-truth captions, which are limited in capturing the richness of video content. We therefore use a ChatGPT-assisted evaluation similar to [VideoChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT/blob/main/quantitative_evaluation/README.md): both generated and human-annotated captions are scored by GPT-3.5-turbo (0613) for Correctness of Information and Detailed Orientation.
It is worth noting that each video in the MSVC benchmark is annotated with multiple human-written captions, covering different aspects of the video. This comprehensive annotation ensures a robust and thorough evaluation of Video-LLMs.
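
To get a feel for the annotations, a minimal sketch along the following lines can be used to inspect the caption file with pandas. The file name `msvc.json` and the column names `video` and `caption` are illustrative placeholders rather than the guaranteed schema, so adjust them to match the JSON file actually shipped with this dataset.
```python
import pandas as pd

# Hypothetical inspection sketch -- the file name and column names below are
# placeholders, not the guaranteed schema of the released JSON file.
annotations = pd.read_json("msvc.json")

# Each MSVC video is expected to come with multiple human-written captions.
captions_per_video = annotations.groupby("video")["caption"].apply(list)
print(captions_per_video.head())
print("videos:", captions_per_video.shape[0])
```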

**Data instructions:**
Please download the raw videos from their official websites and arrange them in the following structure:
```bash
VideoLLaMA2
├── eval
│   ├── MSVC
│   │   ├── msvd/
│   │   │   ├── lw7pTwpx0K0_38_48.avi
│   │   │   └── ...
│   │   ├── msrvtt/
│   │   │   ├── video9921.mp4
│   │   │   └── ...
│   │   └── vatex/
│   │       ├── 9giWHf6Pf24.mp4
│   │       └── ...
```
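
Before running evaluation, a quick sanity check along these lines can confirm the layout above is in place; the root path `VideoLLaMA2/eval/MSVC` is an assumption taken from the tree shown above.
```python
from pathlib import Path

# Sanity-check the expected MSVC layout; the root path below is an assumption
# and should point at the MSVC folder arranged as shown above.
msvc_root = Path("VideoLLaMA2/eval/MSVC")

for source in ("msvd", "msrvtt", "vatex"):
    folder = msvc_root / source
    videos = list(folder.glob("*.avi")) + list(folder.glob("*.mp4")) if folder.is_dir() else []
    status = "ok" if folder.is_dir() else "MISSING"
    print(f"{source}: {status}, {len(videos)} video files")
```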

**GPT-3.5 Evaluation Prompts:**
```python
# Correctness evaluation:
{
    "role": "system",
    "content":
        "You are an intelligent chatbot designed for evaluating the factual accuracy of generative outputs for video-based question-answer pairs. "
        "Your task is to compare the predicted answer with these correct answers and determine if they are factually consistent. Here's how you can accomplish the task:"
        "------"
        "##INSTRUCTIONS: "
        "- Focus on the factual consistency between the predicted answer and the correct answer. The predicted answer should not contain any misinterpretations or misinformation.\n"
        "- The predicted answer must be factually accurate and align with the video content.\n"
        "- Consider synonyms or paraphrases as valid matches.\n"
        "- Evaluate the factual accuracy of the prediction compared to the answer."
},
{
    "role": "user",
    "content":
        "Please evaluate the following video-based question-answer pair:\n\n"
        f"Question: {question}\n"
        f"Correct Answers: {answer}\n"
        f"Predicted Answer: {pred}\n\n"
        "Provide your evaluation only as a factual accuracy score where the factual accuracy score is an integer value between 0 and 5, with 5 indicating the highest level of factual consistency. "
        "Please generate the response in the form of a Python dictionary string with keys 'score', where its value is the factual accuracy score in INTEGER, not STRING."
        "DO NOT PROVIDE ANY OTHER OUTPUT TEXT OR EXPLANATION. Only provide the Python dictionary string. "
        "For example, your response should look like this: {'score': 4.8}."
}
```
```python
# Detailedness evaluation:
{
    "role": "system",
    "content":
        "You are an intelligent chatbot designed for evaluating the detail orientation of generative outputs for video-based question-answer pairs. "
        "Your task is to compare the predicted answer with these correct answers and determine its level of detail, considering both completeness and specificity. Here's how you can accomplish the task:"
        "------"
        "##INSTRUCTIONS: "
        "- Check if the predicted answer covers all major points from the video. The response should not leave out any key aspects.\n"
        "- Evaluate whether the predicted answer includes specific details rather than just generic points. It should provide comprehensive information that is tied to specific elements of the video.\n"
        "- Consider synonyms or paraphrases as valid matches.\n"
        "- Provide a single evaluation score that reflects the level of detail orientation of the prediction, considering both completeness and specificity.",
},
{
    "role": "user",
    "content":
        "Please evaluate the following video-based question-answer pair:\n\n"
        f"Question: {question}\n"
        f"Correct Answers: {answer}\n"
        f"Predicted Answer: {pred}\n\n"
        "Provide your evaluation only as a detail orientation score where the detail orientation score is an integer value between 0 and 5, with 5 indicating the highest level of detail orientation. "
        "Please generate the response in the form of a Python dictionary string with keys 'score', where its value is the detail orientation score in INTEGER, not STRING."
        "DO NOT PROVIDE ANY OTHER OUTPUT TEXT OR EXPLANATION. Only provide the Python dictionary string. "
        "For example, your response should look like this: {'score': 4.8}.",
}
```
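
For reference, the sketch below shows one way these messages might be sent to the OpenAI API and the returned score parsed. It is an illustration rather than the official evaluation script (see the VideoLLaMA2 repository for that), and `question`, `answer`, and `pred` are placeholder values for the captioning prompt, the human-annotated reference captions, and the model's generated caption.
```python
import ast
from openai import OpenAI

# Illustrative only -- not the official MSVC evaluation script.
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "Describe the video in detail."            # placeholder prompt
answer = "A man slices vegetables in a kitchen."      # placeholder reference captions
pred = "Someone is cutting vegetables on a counter."  # placeholder generated caption

response = client.chat.completions.create(
    model="gpt-3.5-turbo-0613",  # the card reports evaluation with GPT-3.5-turbo (0613)
    messages=[
        {"role": "system", "content": "..."},  # correctness system prompt shown above
        {
            "role": "user",
            "content": "Please evaluate the following video-based question-answer pair:\n\n"
                       f"Question: {question}\n"
                       f"Correct Answers: {answer}\n"
                       f"Predicted Answer: {pred}\n\n"
                       "Provide your evaluation only as a factual accuracy score ...",  # truncated, see above
        },
    ],
)

# The prompt asks for a Python dictionary string such as {'score': 4}.
score = ast.literal_eval(response.choices[0].message.content)["score"]
print("correctness score:", score)
```
The same call can be repeated with the detail-orientation messages, and the per-video scores for both dimensions can then be averaged over the 500 videos to summarize a model's captioning quality.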

**Dataset date:**
MSVC was released in June 2024.

**Paper or resources for more information:**
https://github.com/DAMO-NLP-SG/VideoLLaMA2

**Where to send questions or comments about the dataset:**
https://github.com/DAMO-NLP-SG/VideoLLaMA2/issues

## Intended use
**Primary intended uses:**
The primary use of MSVC is research on Video-LLMs.

**Primary intended users:**
The primary intended users of the dataset are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.