---
task_categories:
- object-detection
- multi-object-tracking
license: mit
---

# TAO-Amodal Dataset

<!-- Provide a quick summary of the dataset. -->
The official source for downloading the TAO-Amodal dataset.

[**🏠 Project Page**](https://tao-amodal.github.io/) | [**💻 Code**](https://github.com/WesleyHsieh0806/TAO-Amodal) | [**📄 Paper Link**](https://arxiv.org/abs/2312.12433) | [**✏️ Citation**](#citation)

<div align="center">
  <a href="https://tao-amodal.github.io/"><img width="95%" alt="TAO-Amodal" src="https://tao-amodal.github.io/static/images/webpage_preview.png"></a>
</div>

<br>

Contact: [🙋🏻‍♂️ Cheng-Yen (Wesley) Hsieh](https://wesleyhsieh0806.github.io/)

## Dataset Description
Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects.
Note that this implies TAO-Amodal also includes modal segmentation masks (as visualized in the color overlays above).
Our dataset encompasses 880 categories and is designed to assess the occlusion reasoning capabilities of current trackers
through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).

### Dataset Download
1. Download all the annotations (you can also fetch them with Python, as sketched after this list):
```bash
git lfs install
git clone git@hf.co:datasets/chengyenhsieh/TAO-Amodal
```

2. Download all the video frames:

   You can either download the frames following the instructions [here](https://motchallenge.net/tao_download.php) (recommended) or modify our provided [script](./download_TAO.sh) and run:
```bash
bash download_TAO.sh
```
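
If you would rather not clone with Git LFS, the same annotation files can be pulled with the `huggingface_hub` client. This is a minimal sketch, not an official tool of this dataset; the `allow_patterns` filter and the target directory are assumptions you may want to adjust:

```python
# Sketch: download only the annotation files from the dataset repo,
# skipping the much larger video frames. Requires `pip install huggingface_hub`.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="chengyenhsieh/TAO-Amodal",
    repo_type="dataset",
    # Assumed filter: pull the JSON annotations and the download script only.
    allow_patterns=["amodal_annotations/*", "BURST_annotations/*", "*.sh"],
    local_dir="./TAO-Amodal",  # hypothetical target directory
)
print("Annotations downloaded to:", local_dir)
```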

## 📂 Dataset Structure

The dataset should be structured like this:
```bash
├── frames
│   └── train
│       ├── ArgoVerse
│       ├── BDD
│       ├── Charades
│       ├── HACS
│       ├── LaSOT
│       └── YFCC100M
├── amodal_annotations
│   ├── train/validation/test.json
│   ├── train_lvis_v1.json
│   └── validation_lvis_v1.json
├── example_output
│   └── prediction.json
└── BURST_annotations
    └── train
        └── train_visibility.json
```
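
A quick sanity check of this layout after downloading can catch path mistakes early. A minimal sketch, assuming the dataset root from the download steps above:

```python
# Sketch: verify the expected top-level layout described in the tree above.
from pathlib import Path

root = Path("./TAO-Amodal")  # hypothetical dataset root
expected = [
    "frames/train",
    "amodal_annotations",
    "example_output/prediction.json",
    "BURST_annotations/train/train_visibility.json",
]
for rel in expected:
    status = "ok" if (root / rel).exists() else "MISSING"
    print(f"{status:8s} {rel}")
```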

## 📚 File Descriptions

| File Name | Description |
| ------------------ | ---------------------------------- |
| train/validation/test.json | Formal annotation files. We use these annotations for visualization. Categories include those in [LVIS](https://www.lvisdataset.org/) v0.5 and freeform categories. |
| train_lvis_v1.json | We use this file to train our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander), treating each image frame as an independent sequence. Categories are aligned with those in LVIS v1.0. |
| validation_lvis_v1.json | We use this file to evaluate our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander). Categories are aligned with those in LVIS v1.0. |
| prediction.json | Example output JSON from the amodal-expander. Tracker predictions should be structured like this file to be evaluated with our [evaluation toolkit](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#bar_chart-evaluation). |
| BURST_annotations/XXX.json | Modal mask annotations from the [BURST dataset](https://github.com/Ali2500/BURST-benchmark) with our heuristic visibility attributes. We provide these files for the convenience of visualization. |

### Annotation and Prediction Format

Our annotations are structured similarly to [TAO](https://github.com/TAO-Dataset/annotations) with some modifications.
Annotations:
```bash
Annotation file format:
{
    "info" : info,
    "images" : [image],
    "videos": [video],
    "tracks": [track],
    "annotations" : [annotation],
    "categories": [category],
    "licenses" : [license],
}
annotation: {
    "id": int,
    "image_id": int,
    "track_id": int,
    "bbox": [x,y,width,height],
    "area": float,

    # Redundant field for compatibility with COCO scripts
    "category_id": int,
    "video_id": int,

    # Other important attributes for evaluation on TAO-Amodal
    "amodal_bbox": [x,y,width,height],
    "amodal_is_uncertain": bool,
    "visibility": float, (0.~1.0)
}
image, info, video, track, category, licenses: Same as TAO
```
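
Since the annotations are plain COCO-style JSON, they can be inspected with nothing but the standard library. A minimal sketch, assuming the file path used below; it groups boxes by track and counts fully invisible objects:

```python
# Sketch: load TAO-Amodal annotations and group them by track.
import json
from collections import defaultdict

# Hypothetical path; adjust to wherever you placed the dataset.
with open("TAO-Amodal/amodal_annotations/validation.json") as f:
    data = json.load(f)

tracks = defaultdict(list)
for ann in data["annotations"]:
    tracks[ann["track_id"]].append(ann)

# Fully occluded or out-of-frame objects have visibility 0.0 but still
# carry an amodal_bbox, which is what amodal trackers are evaluated on.
invisible = [a for a in data["annotations"] if a["visibility"] == 0.0]
print(f"{len(tracks)} tracks, {len(data['annotations'])} boxes, "
      f"{len(invisible)} fully invisible")
```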

Predictions should be structured as:

```bash
[{
    "image_id" : int,
    "category_id" : int,
    "bbox" : [x,y,width,height],
    "score" : float,
    "track_id": int,
    "video_id": int
}]
```
Refer to the instructions of the [TAO dataset](https://github.com/TAO-Dataset/tao/blob/master/docs/evaluation.md) for further details.
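
For concreteness, here is a minimal sketch of serializing tracker output in this format; the box, score, and IDs are made-up placeholders, and the output filename is an assumption chosen to match `example_output/prediction.json`:

```python
# Sketch: write tracker output in the expected prediction format.
import json

predictions = [
    {
        "image_id": 1,
        "category_id": 805,                    # made-up category id
        "bbox": [512.0, 256.0, 128.0, 96.0],   # [x, y, width, height]
        "score": 0.87,
        "track_id": 3,
        "video_id": 1,
    },
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)
```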

## 📺 Example Sequences

Check [here](https://tao-amodal.github.io/#TAO-Amodal) for more examples and [here](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#artist-visualization) for visualization code.

[<img src="https://tao-amodal.github.io/static/images/car_and_bus.png" width="50%">](https://tao-amodal.github.io/dataset.html "tao-amodal")
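
If you just want a quick look without the full visualization toolkit, boxes can be drawn directly with Pillow. A rough sketch, where the frame path and box values are placeholders; note that amodal boxes may extend beyond the image border:

```python
# Sketch: overlay an amodal box (red) and a modal box (green) on one frame.
from PIL import Image, ImageDraw

def xywh_to_xyxy(box):
    x, y, w, h = box
    return [x, y, x + w, y + h]

frame = Image.open("TAO-Amodal/frames/train/LaSOT/example/00001.jpg")  # placeholder path
draw = ImageDraw.Draw(frame)

amodal_bbox = [100, 150, 300, 200]  # placeholder [x, y, width, height]
modal_bbox = [150, 150, 180, 200]   # placeholder

draw.rectangle(xywh_to_xyxy(amodal_bbox), outline="red", width=3)
draw.rectangle(xywh_to_xyxy(modal_bbox), outline="green", width=3)
frame.save("overlay.png")
```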

## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
```
@misc{hsieh2023tracking,
    title={Tracking Any Object Amodally},
    author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
    year={2023},
    eprint={2312.12433},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```