---
license: cc-by-4.0
language:
- en
---
# Dataset Card for TimeIT
TimeIT encompasses 6 longstanding timestamp-related video tasks and incorporates 12 specific datasets derived from different domains.
## Dataset Description
- Homepage: https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- Repository: https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- Paper: https://arxiv.org/abs/2312.02051
- Leaderboard:
- Point of Contact:
## Dataset Statistics

Our dataset compiles diverse tasks of time-sensitive long video understanding, including Dense Video Captioning, Temporal Video Grounding, Video Summarization, Video Highlight Detection, Step Localization, and Transcribed Speech Generation.

### Instruction Statistics
Task | #Instructions |
---|---|
Dense Video Captioning | 6 |
Temporal Video Grounding | 6 |
Video Summarization | 6 |
Video Highlight Detection | 6 |
Step Localization | 6 |
Transcribed Speech Generation | 6 |
Total | 36 |
### Task Statistics
Task | Description | #Train |
---|---|---|
Dense Video Captioning | detects a series of events in the given video and outputs the corresponding timestamps and descriptions | 16,342 |
Temporal Video Grounding | predict a timestamp boundary including the start and end time in the video given a natural language query | 60,471 |
Video Summarization | create a compressed set of frames or clip shots to represent the most informative content of the given video | 75 |
Video Highlight Detection | identify the most exciting, impressive, or emotional moments that may not cover the full scope of the original video | 6,858 |
Step Localization | segment and describe significant steps in a long untrimmed video | 9,488 |
Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video | 31,627 |
Total | - | 124,861 |
### Detailed Dataset Statistics

| Task | Dataset | #Train |
|---|---|---|
| Dense Video Captioning | ActivityNet Captions | 10,009 |
|  | ViTT | 5,141 |
|  | YouCook2 | 1,192 |
| Temporal Video Grounding | DiDeMo | 33,002 |
|  | QuerYD | 14,602 |
|  | HiREST_grounding | 459 |
|  | Charades-STA | 12,408 |
| Video Summarization | TVSum | 50 |
|  | SumMe | 25 |
| Video Highlight Detection | QVHighlights | 6,858 |
| Step Localization | COIN | 9,029 |
|  | HiREST_step | 459 |
| Transcribed Speech Generation | YT-Temporal | 31,627 |
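The per-task totals in the Task Statistics table are simply the sums of the per-dataset counts above. A minimal sketch to verify the arithmetic (the numbers are copied from the tables, not loaded from the data; the `counts` dict is ours, not part of the dataset):

```python
# Per-dataset training-example counts, copied from the table above.
counts = {
    "Dense Video Captioning": [10_009, 5_141, 1_192],           # ActivityNet Captions, ViTT, YouCook2
    "Temporal Video Grounding": [33_002, 14_602, 459, 12_408],  # DiDeMo, QuerYD, HiREST_grounding, Charades-STA
    "Video Summarization": [50, 25],                             # TVSum, SumMe
    "Video Highlight Detection": [6_858],                        # QVHighlights
    "Step Localization": [9_029, 459],                           # COIN, HiREST_step
    "Transcribed Speech Generation": [31_627],                   # YT-Temporal
}

for task, per_dataset in counts.items():
    print(f"{task}: {sum(per_dataset):,}")
print(f"Total: {sum(sum(v) for v in counts.values()):,}")  # 124,861
```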
## Dataset Structure

### HuggingFace Login (Optional)

```python
# OR run `huggingface-cli login` in a terminal
from huggingface_hub import login

hf_token = "hf_xxx"  # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```
### Data Loading

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
```
### Data Splits

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]
```
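The statistics above list only a training split per dataset. To confirm which splits a given config actually exposes, and how many rows each holds, a quick check (assuming the loading snippet above has already run):

```python
print(dataset)             # prints the available splits and their row counts
print(train_set.num_rows)  # number of training examples for this config
```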
### Data Instances

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]

for train_instance in train_set:
    question = train_instance["question"]      # str: task instruction / query for this instance
    answer = train_instance["answer"]          # str: target response, typically containing timestamps
    video_path = train_instance["video_path"]  # str: path to the corresponding video file
```
### Data Fields

```python
import datasets

features = datasets.Features(
    {
        "video_path": datasets.Value("string"),
        "question": datasets.Value("string"),
        "answer": datasets.Value("string"),
    }
)
```
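Answers for the timestamp-grounded tasks embed start and end times directly in the text. The exact wording varies by task and instruction, so the snippet below is only an illustrative sketch: the `TIME_SPAN` pattern and the example answer string are assumptions for demonstration, not the dataset's canonical format.

```python
import re

# Assumed example answer; real answers differ in wording across tasks and instructions.
answer = "90.0 - 102.0 seconds, Pour the mixture into the pan."

# Hypothetical pattern for "<start> - <end>" second spans embedded in an answer string.
TIME_SPAN = re.compile(r"(\d+(?:\.\d+)?)\s*-\s*(\d+(?:\.\d+)?)")

spans = [(float(start), float(end)) for start, end in TIME_SPAN.findall(answer)]
print(spans)  # [(90.0, 102.0)]
```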
## Dataset Creation

### Curation Rationale
[More Information Needed]
### Source Data

| Task | Dataset [Citation] | Source |
|---|---|---|
| Dense Video Captioning | ActivityNet Captions [1] | Source |
|  | ViTT [2] | Source |
|  | YouCook2 [3] | Source |
| Temporal Video Grounding | DiDeMo [4] | Source |
|  | QuerYD [5] | Source |
|  | HiREST_grounding [6] | Source |
|  | Charades-STA [7] | Source |
| Video Summarization | TVSum [8] | Source |
|  | SumMe [9] | Source |
| Video Highlight Detection | QVHighlights [10] | Source |
| Step Localization | COIN [11] | Source |
|  | HiREST_step [6] | Source |
| Transcribed Speech Generation | YT-Temporal [12] | Source |
### Annotations

#### Annotation process
To build high-quality multimodal instruction datasets, we rewrite various datasets into multimodal-to-text dialog format. The annotation process includes four steps:
- (1) Stage I: Instruction Writing: writing instructions for each task;
- (2) Stage II: Data Format Unification: structuring videos and texts into a unified schema;
- (3) Stage III: Quality Check: checking the overall dataset quality;
- (4) Stage IV: Key Datasets Translation: building multilingual sets.
#### Who are the annotators?
Three authors of this work are employed as human annotators, each of whom is a graduate student familiar with relevant literature.
## Additional Information

### Licensing Information
The content of each original dataset follows its original license. For tasks with an Unknown/Custom license, we suggest that users check the original project or contact the dataset owner for detailed license information.
Our annotated instruction data is licensed under CC BY 4.0.
### Citation Information

```bibtex
@article{Ren2023TimeChatAT,
  title={TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding},
  author={Shuhuai Ren and Linli Yao and Shicheng Li and Xu Sun and Lu Hou},
  journal={ArXiv},
  year={2023},
  volume={abs/2312.02051},
}
```
### Contributions

TimeIT is a video-centric instruction-tuning dataset involving timestamps, designed to enable the development of general-purpose video agents.
## References
- [1] Dense-Captioning Events in Videos
- [2] Multimodal Pretraining for Dense Video Captioning
- [3] Towards Automatic Learning of Procedures from Web Instructional Videos
- [4] Localizing Moments in Video with Natural Language
- [5] QuerYD: A video dataset with high-quality text and audio narrations
- [6] Hierarchical Video-Moment Retrieval and Step-Captioning
- [7] TALL: Temporal Activity Localization via Language Query
- [8] TVSum: Summarizing Web Videos Using Titles
- [9] Creating Summaries from User Videos
- [10] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries
- [11] COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis
- [12] MERLOT: Multimodal Neural Script Knowledge Models