---
license: cc-by-4.0
language:
- en
---
# Dataset Card for TimeIT
TimeIT encompasses 6 longstanding timestamp-related video tasks and incorporates 12 specific datasets derived from different domains.
**[NOTE]: Please refer to [DATA.md](https://github.com/RenShuhuai-Andy/TimeChat/blob/master/docs/DATA.md) for more details on downloading and processing video data.**
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- **Repository:** https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- **Paper:** https://arxiv.org/abs/2312.02051
- **Leaderboard:**
- **Point of Contact:**
## Dataset Statistics
Our dataset compiles diverse time-sensitive long-video understanding tasks, including Dense Video Captioning, Temporal Video Grounding, Video Summarization, Video Highlight Detection, Step Localization, and Transcribed Speech Generation.
### Instruction Statistics
| Task | #Instructions |
|-------------------------------|---------------|
| Dense Video Captioning | 6 |
| Temporal Video Grounding | 6 |
| Video Summarization | 6 |
| Video Highlight Detection | 6 |
| Step Localization | 6 |
| Transcribed Speech Generation | 6 |
| Total | 36 |
### Task Statistics
| Task | Description | #Train |
|-------------------------------|----------------------------------------------------------------------------------------------------------------------|---------|
| Dense Video Captioning        | detect a series of events in the given video and output the corresponding timestamps and descriptions                  | 16,342  |
| Temporal Video Grounding      | predict a timestamp boundary, including the start and end time, in the video given a natural language query            | 60,471  |
| Video Summarization           | create a compressed set of frames or clip shots to represent the most informative content of the given video           | 75      |
| Video Highlight Detection     | identify the most exciting, impressive, or emotional moments, which may not cover the full scope of the original video | 6,858   |
| Step Localization             | segment and describe significant steps in a long untrimmed video                                                       | 9,488   |
| Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video         | 31,627  |
| Total                         | -                                                                                                                       | 124,861 |
### Detailed Dataset Statistics
| Task | Dataset | #Train |
|-------------------------------|------------------------|--------|
| Dense Video Captioning | `ActivityNet Captions` | 10,009 |
| | `ViTT` | 5,141 |
| | `YouCook2` | 1,192 |
| Temporal Video Grounding | `DiDeMo` | 33,002 |
| | `QuerYD` | 14,602 |
| | `HiREST_grounding` | 459 |
| | `Charades-STA` | 12,408 |
| Video Summarization | `TVSum` | 50 |
| | `SumMe` | 25 |
| Video Highlight Detection | `QVHighlights` | 6,858 |
| Step Localization | `COIN` | 9,029 |
| | `HiREST_step` | 459 |
| Transcribed Speech Generation | `YT-Temporal` | 31,627 |
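To verify these per-dataset counts locally, a minimal sketch with the `datasets` library is shown below; it discovers the configuration names at runtime and assumes each configuration exposes a `train` split, as in the loading examples further down.
```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate every sub-dataset (configuration) in the TimeIT repository
# and count the training examples in each one.
for config in get_dataset_config_names("ShuhuaiRen/TimeIT"):
    ds = load_dataset("ShuhuaiRen/TimeIT", config)
    print(f"{config}: {len(ds['train'])} training examples")
```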
## Dataset Structure
### HuggingFace Login (Optional)
```python
# OR run huggingface-cli login
from huggingface_hub import login
hf_token = "hf_xxx" # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```
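If you prefer not to hard-code the token, a small alternative sketch (assuming you have exported an `HF_TOKEN` environment variable) reads it from the environment:
```python
import os
from huggingface_hub import login

# Read the access token from the environment instead of hard-coding it.
login(token=os.environ["HF_TOKEN"])
```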
### Data Loading
```python
from datasets import load_dataset
ds_name = "youcook2" # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
```
### Data Splits
```python
from datasets import load_dataset
ds_name = "youcook2" # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]
```
### Data Instances
```python
from datasets import load_dataset
ds_name = "youcook2" # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]
for train_instance in train_set:
    question = train_instance["question"]      # str: the task instruction
    answer = train_instance["answer"]          # str: the target answer (may contain timestamps)
    video_path = train_instance["video_path"]  # str: path to the corresponding video file
```
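To quickly see what an instruction looks like, you can also print a single instance; the exact wording of `question` and `answer` varies across tasks and datasets:
```python
# Print the first training instance to inspect the instruction format.
sample = train_set[0]
print("Question:", sample["question"])
print("Answer:", sample["answer"])
print("Video:", sample["video_path"])
```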
### Data Fields
```python
import datasets
features = datasets.Features(
    {
        "video_path": datasets.Value("string"),  # path to the source video file
        "question": datasets.Value("string"),    # task instruction
        "answer": datasets.Value("string"),      # target response
    }
)
```
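As a sanity check, the features of a loaded split can be compared against this schema (a minimal sketch reusing the `youcook2` configuration from above):
```python
from datasets import load_dataset

dataset = load_dataset("ShuhuaiRen/TimeIT", "youcook2")
train_set = dataset["train"]

# Each split should expose exactly these three string fields.
assert set(train_set.features) == {"video_path", "question", "answer"}
print(train_set.features)
```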
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
| Task | Dataset [Citation] | Source |
|-------------------------------|----------------------------|------------------------------------------------------------------------------------|
| Dense Video Captioning | `ActivityNet Captions` [1] | [Source](http://activity-net.org/download.html) |
| | `ViTT` [2] | [Source](https://github.com/google-research-datasets/Video-Timeline-Tags-ViTT) |
| | `YouCook2` [3] | [Source](http://youcook2.eecs.umich.edu/) |
| Temporal Video Grounding | `DiDeMo` [4] | [Source](https://github.com/LisaAnne/LocalizingMoments?tab=readme-ov-file#dataset) |
| | `QuerYD` [5] | [Source](https://www.robots.ox.ac.uk/~vgg/data/queryd/) |
| | `HiREST_grounding` [6] | [Source](https://github.com/j-min/HiREST) |
| | `Charades-STA` [7] | [Source](https://github.com/jiyanggao/TALL) |
| Video Summarization | `TVSum` [8] | [Source](https://github.com/yalesong/tvsum) |
| | `SumMe` [9] | [Source](http://classif.ai/dataset/ethz-cvl-video-summe/) |
| Video Highlight Detection | `QVHighlights` [10] | [Source](https://github.com/jayleicn/moment_detr/tree/main/data) |
| Step Localization | `COIN` [11] | [Source](https://github.com/coin-dataset/annotations) |
| | `HiREST_step` [6] | [Source](https://github.com/j-min/HiREST) |
| Transcribed Speech Generation | `YT-Temporal` [12] | [Source](https://rowanzellers.com/merlot/#data) |
### Annotations
#### Annotation process
To build a high-quality multimodal instruction dataset,
we rewrite the source datasets into a multimodal-to-text dialog format.
The annotation process includes four steps:
- (1) **Stage I: Instruction Writing**: writing instructions for each task;
- (2) **Stage II: Data Format Unification**: structuring videos and texts into a unified schema;
- (3) **Stage III: Quality Check**: checking the overall dataset quality;
- (4) **Stage IV: Key Datasets Translation**: building multilingual sets.
#### Who are the annotators?
Three authors of this work served as human annotators;
each is a graduate student familiar with the relevant literature.
## Additional Information
### Licensing Information
The content of each original dataset follows its original license.
For datasets with an Unknown or Custom license, we suggest checking the original project or contacting the dataset owner for detailed license information.
Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
### Citation Information
```bibtex
@article{Ren2023TimeChat,
  title={TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding},
  author={Shuhuai Ren and Linli Yao and Shicheng Li and Xu Sun and Lu Hou},
  journal={ArXiv},
  year={2023},
  volume={abs/2312.02051},
}
```
### Contributions
TimeIT is a video-centric instruction-tuning dataset involving timestamps,
designed to enable the development of general-purpose video agents.
## References
- [1] Dense-Captioning Events in Videos
- [2] Multimodal Pretraining for Dense Video Captioning
- [3] Towards Automatic Learning of Procedures from Web Instructional Videos
- [4] Localizing Moments in Video with Natural Language
- [5] QuerYD: A video dataset with high-quality text and audio narrations
- [6] Hierarchical Video-Moment Retrieval and Step-Captioning
- [7] TALL: Temporal Activity Localization via Language Query
- [8] TVSum: Summarizing Web Videos Using Titles
- [9] Creating Summaries from User Videos
- [10] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries
- [11] COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis
- [12] MERLOT: Multimodal Neural Script Knowledge Models