MSVC is a set of collected video captioning data. It is constructed to ensure a robust and thorough evaluation of Video-LLMs' video-captioning capabilities.

**Dataset detail:**

MSVC is introduced to address limitations in existing video captioning benchmarks. It samples a total of 1,500 videos with human-annotated captions from [MSVD](https://www.aclweb.org/anthology/P11-1020/), [MSRVTT](http://openaccess.thecvf.com/content_cvpr_2016/papers/Xu_MSR-VTT_A_Large_CVPR_2016_paper.pdf), and [VATEX](http://openaccess.thecvf.com/content_ICCV_2019/papers/Wang_VaTeX_A_Large-Scale_High-Quality_Multilingual_Dataset_for_Video-and-Language_Research_ICCV_2019_paper.pdf), ensuring diverse scenarios and domains.
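
A minimal loading sketch with the `datasets` library is shown below; `<repo_id>`, the split name, and the column names are placeholders and assumptions, not verified details of this repo:

```python
# Sketch: load the MSVC benchmark from the Hugging Face Hub.
# "<repo_id>" and the split name are placeholders; check this repo's
# files for the actual dataset ID, split, and column names.
from datasets import load_dataset

msvc = load_dataset("<repo_id>", split="test")
print(msvc)     # number of rows and column names
print(msvc[0])  # one sample: a video reference plus its human-written captions
```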
Traditional evaluation metrics rely on exact-match statistics between generated and ground-truth captions, which cannot capture the richness of video content. Thus, we use a ChatGPT-assisted evaluation similar to that of [VideoChatGPT](https://github.com/mbzuai-oryx/Video-ChatGPT/blob/main/quantitative_evaluation/README.md). Both generated and human-annotated captions are evaluated by GPT-3.5-turbo (0613) for Correctness of Information and Detailed Orientation.
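
As a rough illustration of this protocol, the sketch below queries an OpenAI judge model once per caption and aspect. The prompt wording and the 0-5 scale are illustrative assumptions; the official evaluation prompts live in the VideoChatGPT repository linked above:

```python
# Illustrative ChatGPT-assisted caption scoring (assumed prompt, not the official one).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def score_caption(human_caption: str, generated_caption: str, aspect: str) -> float:
    """Rate one aspect ("correctness of information" or "detailed orientation") from 0 to 5."""
    prompt = (
        f"You are evaluating the {aspect} of a video caption.\n"
        f"Human-annotated caption: {human_caption}\n"
        f"Generated caption: {generated_caption}\n"
        "Reply with a single number from 0 to 5."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",  # the judge model named in this card
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return float(resp.choices[0].message.content.strip())
```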
It is worth noting that each video in the MSVC benchmark is annotated with multiple human-written captions, covering different aspects of the video. This comprehensive annotation ensures a robust and thorough evaluation of Video-LLMs.
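
Because every video carries several reference captions, one natural (hypothetical) way to use them is to score the generated caption against each reference and average, so that no single annotation dominates the judgment; `score_caption` refers to the sketch above:

```python
# Hypothetical aggregation across a video's multiple human captions.
def score_video(human_captions: list[str], generated_caption: str, aspect: str) -> float:
    scores = [score_caption(ref, generated_caption, aspect) for ref in human_captions]
    return sum(scores) / len(scores)  # mean over all references
```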