---
license: mit
---
# Dataset Card for TimeIT
TimeIT encompasses 6 longstanding timestamp-related video tasks and incorporates 12 specific datasets derived from different domains.
## Dataset Description
- Homepage: https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- Repository: https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- Paper: https://arxiv.org/abs/2312.02051
- Leaderboard:
- Point of Contact:
## Dataset Statistics

Our dataset compiles diverse tasks of time-sensitive long video understanding, including Dense Video Captioning, Temporal Video Grounding, Video Summarization, Video Highlight Detection, Step Localization, and Transcribed Speech Generation.
### Instruction Statistics

| Task                          | #Instructions |
|-------------------------------|---------------|
| Dense Video Captioning        |               |
| Temporal Video Grounding      |               |
| Video Summarization           |               |
| Video Highlight Detection     |               |
| Step Localization             |               |
| Transcribed Speech Generation |               |
| Total                         |               |
### Task Statistics

| Task                          | Description                                                                                                      | #Train | #Val | #Test |
|-------------------------------|------------------------------------------------------------------------------------------------------------------|--------|------|-------|
| Dense Video Captioning        | detect a series of events in the given video and output the corresponding timestamps and descriptions             |        |      |       |
| Temporal Video Grounding      | predict a timestamp boundary, including the start and end time, in the video given a natural language query       |        |      |       |
| Video Summarization           | create a compressed set of frames or clip shots to represent the most informative content of the given video      |        |      |       |
| Video Highlight Detection     | identify the most exciting, impressive, or emotional moments, which may not cover the full scope of the original video |   |      |       |
| Step Localization             | segment and describe significant steps in a long untrimmed video                                                  |        |      |       |
| Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video    |        |      |       |
| Total                         | -                                                                                                                  |        |      |       |
### Detailed Dataset Statistics

| Task                          | Dataset              | #Train  | #Val   | #Test |
|-------------------------------|----------------------|---------|--------|-------|
| Dense Video Captioning        | ActivityNet Captions |         |        |       |
|                               | ViTT                 | 97,765  | 13,965 | 0     |
|                               | YouCook2             | 14,575  | 2,487  | 2,489 |
| Temporal Video Grounding      | DiDeMo               | 30,000  | 2,000  | 0     |
|                               | QuerYD               | 118,312 | 27,550 | 0     |
|                               | HiREST_grounding     | 30,000  | 50,000 | 0     |
|                               | Charades-STA         | 30,000  | 5,000  | 5,000 |
| Video Summarization           | TVSum                | 30,000  | 30,000 | 0     |
|                               | SumMe                | 13,568  | 1,024  | 1,024 |
| Video Highlight Detection     | QVHighlights         | 9,009   | 5,046  | 0     |
| Step Localization             | COIN                 | 30,000  | 2,000  | 0     |
|                               | HiREST_step          | 29,372  | 2,000  | 0     |
| Transcribed Speech Generation | YT-Temporal          | 5,000   | 4,315  | 4,350 |
## Dataset Structure

### HuggingFace Login (Optional)

```python
# OR run `huggingface-cli login` in the terminal
from huggingface_hub import login

hf_token = "hf_xxx"  # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```
### Data Loading

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
```
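If you are unsure which subset names are available, the `datasets` library can list the configurations of the repository directly; treat its output (rather than the dataset names in the tables above) as the source of truth for valid values of `ds_name`:

```python
from datasets import get_dataset_config_names

# List the subset (configuration) names defined in the TimeIT repository,
# e.g. "youcook2"; pass one of these as the second argument to load_dataset.
configs = get_dataset_config_names("ShuhuaiRen/TimeIT")
print(configs)
```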
### Data Splits

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]
```
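As a quick sanity check, you can inspect which splits a subset actually provides and how many instances each contains; as the statistics tables above suggest, not every subset ships all of train/validation/test:

```python
# Print the available splits and their sizes before indexing a specific split.
for split_name, split in dataset.items():
    print(split_name, len(split))
```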
### Data Instances

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]

for train_instance in train_set:
    question = train_instance["QA"][0]["q"]  # str: the timestamp-related instruction/question
    answer = train_instance["QA"][0]["a"]    # str: the corresponding answer
    video_path = train_instance["video"]     # str: path to the source video
```
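For a quick look at a single example, you can also index the split directly; the field names below (`video`, `QA`) follow the loop above:

```python
# Inspect the first training instance; fields follow the access pattern shown above.
sample = train_set[0]
print("video:", sample["video"])
print("num QA pairs:", len(sample["QA"]))
print("first question:", sample["QA"][0]["q"])
print("first answer:", sample["QA"][0]["a"])
```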
### Data Fields

```python
import datasets

features = datasets.Features(
    {
        "instruction": datasets.Value("string"),
        "inputs": datasets.Value("string"),
        "image_base64_str": [datasets.Value("string")],
        "outputs": datasets.Value("string"),
    }
)
```
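The feature definition above uses image-style fields, whereas the instances accessed in the Data Instances example expose a `video` path and a list of `QA` pairs. Below is a minimal sketch of that per-instance schema, written as an assumption based on the example code rather than an authoritative definition:

```python
import datasets

# Assumed per-instance schema implied by the Data Instances example above
# (not the card's authoritative feature definition).
timeit_features = datasets.Features(
    {
        "video": datasets.Value("string"),      # path to the source video
        "QA": [                                 # list of question-answer pairs
            {
                "q": datasets.Value("string"),  # timestamp-related instruction/question
                "a": datasets.Value("string"),  # answer with timestamps
            }
        ],
    }
)
```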
## Dataset Creation

### Curation Rationale
[More Information Needed]
### Source Data

| Task                      | Dataset [Citation]             | Source |
|---------------------------|--------------------------------|--------|
| Image Captioning          | coco [1]                       | Source |
|                           | textcap [2]                    | Source |
|                           | image-paragraph-captioning [3] | Source |
| Classification            | coco-goi [1]                   | Source |
|                           | coco-text [4]                  | Source |
|                           | imagenet [5]                   | Source |
|                           | coco-itm [1]                   | Source |
|                           | snli-ve [6]                    | Source |
|                           | mocheg [7]                     | Source |
|                           | iqa [8]                        | Source |
| Visual Question Answering | vqa-v2 [9]                     | Source |
|                           | shapes [10]                    | Source |
|                           | docvqa [11]                    | Source |
|                           | ocr-vqa [12]                   | Source |
|                           | st-vqa [13]                    | Source |
|                           | text-vqa [14]                  | Source |
|                           | gqa [15]                       | Source |
| Knowledgeable Visual QA   | okvqa [16]                     | Source |
|                           | a-okvqa [17]                   | Source |
|                           | science-qa [18]                | Source |
|                           | viquae [19]                    | Source |
| Reasoning                 | clevr [20]                     | Source |
|                           | nlvr [21]                      | Source |
|                           | vcr [22]                       | Source |
|                           | visual-mrc [23]                | Source |
|                           | winoground [24]                | Source |
| Generation                | vist [25]                      | Source |
|                           | visual-dialog [26]             | Source |
|                           | multi30k [27]                  | Source |
| Chinese                   | fm-iqa [28]                    | Source |
|                           | coco-cn [29]                   | Source |
|                           | flickr8k-cn [30]               | Source |
|                           | chinese-food [31]              | Source |
|                           | mmchat [32]                    | Source |
| Video                     | ss [33]                        | Source |
|                           | ivqa [34]                      | Source |
|                           | msvd-qa [35]                   | Source |
|                           | activitynet-qa [36]            | Source |
|                           | msrvtt [37]                    | Source |
|                           | msrvtt-qa [35]                 | Source |
### Annotations

#### Annotation process
To build high-quality multimodal instruction datasets, we rewrite various datasets into multimodal-to-text dialog format. The annotation process includes four steps:
- (1) Stage I: Instruction Writing: writing instructions for each task;
- (2) Stage II: Data Format Unification: structuring images and texts into a unified schema;
- (3) Stage III: Quality Check: checking the overall dataset quality;
- (4) Stage IV: Key Datasets Translation: building multilingual sets.
#### Who are the annotators?
Three authors of this work are employed as human annotators, each of whom is a graduate student familiar with relevant literature.
## Additional Information

### Licensing Information

The content of each original dataset follows its original license. For tasks with an Unknown/Custom license, we suggest checking the original project or contacting the dataset owner for detailed license information.

Our annotated instruction data is licensed under CC BY 4.0.
### Citation Information

```bibtex
@article{Ren2023TimeChatAT,
  title   = {TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding},
  author  = {Shuhuai Ren and Linli Yao and Shicheng Li and Xu Sun and Lu Hou},
  journal = {ArXiv},
  year    = {2023},
  volume  = {abs/2312.02051},
}
```
### Contributions

TimeIT is a video-centric instruction-tuning dataset involving timestamps, designed to enable the development of general-purpose video agents.
## References
- [1] Microsoft COCO: Common Objects in Context
- [2] TextCaps: a dataset for image captioning with reading comprehension
- [3] A Hierarchical Approach for Generating Descriptive Image Paragraphs
- [4] COCO-Text: Dataset and benchmark for text detection and recognition in natural images
- [5] ImageNet Large Scale Visual Recognition Challenge
- [6] E-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
- [7] End-to-End Multimodal Fact-Checking and Explanation Generation: A Challenging Dataset and Models
- [8] Quantifying visual image quality: A Bayesian view
- [9] Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
- [10] Neural Module Networks
- [11] DocVQA: A Dataset for VQA on Document Images
- [12] OCR-VQA: Visual Question Answering by Reading Text in Images
- [13] Scene Text Visual Question Answering
- [14] Towards VQA Models That Can Read
- [15] GQA: A new dataset for real-world visual reasoning and compositional question answering
- [16] OK-VQA: A Visual Question Answering Benchmark Requiring External Knowledge
- [17] A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
- [18] Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
- [19] ViQuAE: a dataset for knowledge-based visual question answering about named entities
- [20] CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning
- [21] A Corpus of Natural Language for Visual Reasoning
- [22] From recognition to cognition: Visual Commonsense Reasoning
- [23] VisualMRC: Machine reading comprehension on document images
- [24] Winoground: Probing Vision and Language Models for Visio-Linguistic Compositionality
- [25] Visual Storytelling
- [26] Visual Dialog
- [27] Multi30k: Multilingual english-german image descriptions
- [28] Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question
- [29] COCO-CN for cross-lingual image tagging, captioning, and retrieval
- [30] Adding Chinese Captions to Images
- [31] ChineseFoodNet: A large-scale image dataset for chinese food recognition
- [32] MMChat: Multi-Modal Chat Dataset on Social Media
- [33] The "Something Something" Video Database for Learning and Evaluating Visual Common Sense
- [34] Just Ask: Learning to answer questions from millions of narrated videos
- [35] Video Question Answering via Gradually Refined Attention over Appearance and Motion
- [36] ActivityNet-QA: A Dataset for Understanding Complex Web Videos via Question Answering
- [37] MSR-VTT: A large video description dataset for bridging video and language