---
license: cc-by-4.0
language:
- en
---


# Dataset Card for TimeIT

TimeIT encompasses 6 longstanding timestamp-related video tasks and incorporates 12 specific datasets derived from different domains. 

**[NOTE]: Please refer to [DATA.md](https://github.com/RenShuhuai-Andy/TimeChat/blob/master/docs/DATA.md) for more details on downloading and processing video data.**

## Dataset Description


- **Homepage:** https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- **Repository:** https://huggingface.co/datasets/ShuhuaiRen/TimeIT
- **Paper:** https://arxiv.org/abs/2312.02051
- **Leaderboard:**
- **Point of Contact:**

## Dataset Statistics

Our dataset compiles diverse time-sensitive long-video understanding tasks: Dense Video Captioning, Temporal Video Grounding, Video Summarization, Video Highlight Detection, Step Localization, and Transcribed Speech Generation.

### Instruction Statistics

| Task                          | #Instructions |
|-------------------------------|---------------|
| Dense Video Captioning        | 6             |
| Temporal Video Grounding      | 6             |
| Video Summarization           | 6             |
| Video Highlight Detection     | 6             |
| Step Localization             | 6             |
| Transcribed Speech Generation | 6             |
| Total                         | 36            |

### Task Statistics

| Task                          | Description                                                                                                          | #Train  |
|-------------------------------|----------------------------------------------------------------------------------------------------------------------|---------|
| Dense Video Captioning        | detect a series of events in the given video and output the corresponding timestamps and descriptions                  | 16,342  |
| Temporal Video Grounding      | predict a timestamp boundary, including the start and end time, in the video given a natural language query            | 60,471  |
| Video Summarization           | create a compressed set of frames or clip shots to represent the most informative content of the given video           | 75      |
| Video Highlight Detection     | identify the most exciting, impressive, or emotional moments, which may not cover the full scope of the original video | 6,858   |
| Step Localization             | segment and describe significant steps in a long untrimmed video                                                       | 9,488   |
| Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video         | 31,627  |
| Total                         | -                                                                                                                       | 124,861 |

### Detailed Dataset Statistics

| Task                          | Dataset                | #Train |
|-------------------------------|------------------------|--------|
| Dense Video Captioning        | `ActivityNet Captions` | 10,009 |
|                               | `ViTT`                 | 5,141  |
|                               | `YouCook2`             | 1,192  |
| Temporal Video Grounding      | `DiDeMo`               | 33,002 |
|                               | `QuerYD`               | 14,602 |
|                               | `HiREST_grounding`     | 459    |
|                               | `Charades-STA`         | 12,408 |
| Video Summarization           | `TVSum`                | 50     |
|                               | `SumMe`                | 25     |
| Video Highlight Detection     | `QVHighlights`         | 6,858  |
| Step Localization             | `COIN`                 | 9,029  |
|                               | `HiREST_step`          | 459    |
| Transcribed Speech Generation | `YT-Temporal`          | 31,627 |
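
Each of the sub-datasets above is exposed as a separate configuration of the Hub repository (as the `youcook2` example below suggests). A minimal sketch for listing the available configurations and checking their training-set sizes; the exact configuration names are whatever the repository defines, so they are queried rather than hardcoded:

```python
from datasets import get_dataset_config_names, load_dataset

# List the configurations defined by the repository, then load each one
# and report the number of training instances.
config_names = get_dataset_config_names("ShuhuaiRen/TimeIT")
print(config_names)

for name in config_names:
    ds = load_dataset("ShuhuaiRen/TimeIT", name)
    print(f"{name}: {len(ds['train'])} training instances")
```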

## Dataset Structure

### HuggingFace Login (Optional)

```python
# OR run huggingface-cli login
from huggingface_hub import login

hf_token = "hf_xxx"  # TODO: set a valid HuggingFace access token for loading datasets/models
login(token=hf_token)
```

### Data Loading

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
```

### Data Splits

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]
```

### Data Instances

```python
from datasets import load_dataset

ds_name = "youcook2"  # change the dataset name here
dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name)
train_set = dataset["train"]

for train_instance in train_set:
    question = train_instance["question"]  # str
    answer = train_instance["answer"]  # str
    video_path = train_instance["video_path"]  # str
```
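
Note that `video_path` refers to a video file that must be downloaded separately (see the DATA.md link above). A minimal sketch for checking which videos are present locally; the root directory below, and the assumption that `video_path` is relative to it, are illustrative and may differ per sub-dataset:

```python
import os

from datasets import load_dataset

video_root = "/path/to/TimeIT/videos"  # assumed local directory holding the downloaded videos (see DATA.md)

dataset = load_dataset("ShuhuaiRen/TimeIT", "youcook2")
missing = [
    inst["video_path"]
    for inst in dataset["train"]
    if not os.path.exists(os.path.join(video_root, inst["video_path"]))
]
print(f"{len(missing)} of {len(dataset['train'])} videos not found under {video_root}")
```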

### Data Fields

```python
import datasets

features = datasets.Features(
    {
        "video_path": datasets.Value("string"),
        "question": datasets.Value("string"),
        "answer": datasets.Value("string"),
    }
)
```
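
A quick way to confirm that a loaded split exposes exactly these fields:

```python
from datasets import load_dataset

dataset = load_dataset("ShuhuaiRen/TimeIT", "youcook2")
print(dataset["train"].features)  # should show the three string fields defined above
assert set(dataset["train"].features) == {"video_path", "question", "answer"}
```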

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

| Task                          | Dataset [Citation]         | Source                                                                             |
|-------------------------------|----------------------------|------------------------------------------------------------------------------------|
| Dense Video Captioning        | `ActivityNet Captions` [1] | [Source](http://activity-net.org/download.html)                                    |
|                               | `ViTT` [2]                 | [Source](https://github.com/google-research-datasets/Video-Timeline-Tags-ViTT)     |
|                               | `YouCook2` [3]             | [Source](http://youcook2.eecs.umich.edu/)                                          |
| Temporal Video Grounding      | `DiDeMo` [4]               | [Source](https://github.com/LisaAnne/LocalizingMoments?tab=readme-ov-file#dataset) |
|                               | `QuerYD` [5]               | [Source](https://www.robots.ox.ac.uk/~vgg/data/queryd/)                            |
|                               | `HiREST_grounding` [6]     | [Source](https://github.com/j-min/HiREST)                                          |
|                               | `Charades-STA` [7]         | [Source](https://github.com/jiyanggao/TALL)                                        |
| Video Summarization           | `TVSum` [8]                | [Source](https://github.com/yalesong/tvsum)                                        |
|                               | `SumMe` [9]                | [Source](http://classif.ai/dataset/ethz-cvl-video-summe/)                          |
| Video Highlight Detection     | `QVHighlights` [10]        | [Source](https://github.com/jayleicn/moment_detr/tree/main/data)                   |
| Step Localization             | `COIN` [11]                | [Source](https://github.com/coin-dataset/annotations)                              |
|                               | `HiREST_step` [6]          | [Source](https://github.com/j-min/HiREST)                                          |
| Transcribed Speech Generation | `YT-Temporal` [12]         | [Source](https://rowanzellers.com/merlot/#data)                                    |

### Annotations

#### Annotation process

To build high-quality multimodal instruction datasets, 
we rewrite the various source datasets into a unified multimodal-to-text dialog format. 
The annotation process includes four steps:

- (1) **Stage I: Instruction Writing**: writing instructions for each task;
- (2) **Stage II: Data Format Unification**: structuring videos and texts into a unified schema (see the sketch after this list);
- (3) **Stage III: Quality Check**: checking the overall dataset quality;
- (4) **Stage IV: Key Datasets Translation**: building multilingual sets. 
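
A minimal sketch of what the Stage II unification might look like for a dense-video-captioning sample. The raw annotation layout, the instruction wording, and the timestamp rendering below are illustrative assumptions, not the authors' exact procedure:

```python
# Hypothetical raw annotation: a video with timestamped event descriptions.
raw = {
    "video": "videos/youcook2/abc123.mp4",
    "events": [
        {"start": 12.0, "end": 45.5, "caption": "crack the eggs into a bowl"},
        {"start": 46.0, "end": 80.0, "caption": "whisk the eggs with milk"},
    ],
}

# Hypothetical instruction; the released data pairs each task with 6 such instructions.
instruction = (
    "Localize a series of events in the video and describe each one "
    "with its start and end timestamps."
)

# Render the events as a single answer string and place everything into the
# question/answer/video_path schema used by this dataset.
answer = " ".join(
    f"{e['start']:.1f}s - {e['end']:.1f}s, {e['caption']}." for e in raw["events"]
)
unified = {
    "video_path": raw["video"],
    "question": instruction,
    "answer": answer,
}
print(unified)
```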

#### Who are the annotators?

Three authors of this work served as human annotators; 
each is a graduate student familiar with the relevant literature.


## Additional Information

### Licensing Information

The content of each original dataset follows its original license. 
For datasets with an Unknown or Custom license, we suggest checking the original project or contacting the dataset owner for detailed license information.

Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).


### Citation Information
```bibtex
@article{Ren2023TimeChat,
  title={TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding},
  author={Shuhuai Ren and Linli Yao and Shicheng Li and Xu Sun and Lu Hou},
  journal={ArXiv},
  year={2023},
  volume={abs/2312.02051},
}
```
### Contributions

TimeIT is a video-centric instruction-tuning dataset involving timestamps, 
designed to enable the development of general-purpose video agents.

## References

- [1] Dense-Captioning Events in Videos
- [2] Multimodal Pretraining for Dense Video Captioning
- [3] Towards Automatic Learning of Procedures from Web Instructional Videos
- [4] Localizing Moments in Video with Natural Language
- [5] QuerYD: A video dataset with high-quality text and audio narrations
- [6] Hierarchical Video-Moment Retrieval and Step-Captioning
- [7] TALL: Temporal Activity Localization via Language Query
- [8] TVSum: Summarizing Web Videos Using Titles
- [9] Creating Summaries from User Videos
- [10] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries
- [11] COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis
- [12] MERLOT: Multimodal Neural Script Knowledge Models