---
task_categories:
- object-detection
- multi-object-tracking
license: mit
---

# TAO-Amodal Dataset

The official source for downloading the TAO-Amodal dataset.
   
[**📙 Project Page**](https://tao-amodal.github.io/) | [**💻 Code**](https://github.com/WesleyHsieh0806/TAO-Amodal) | [**📎 Paper Link**](https://arxiv.org/abs/2312.12433) | [**✏️ Citations**](#citation)
   
   <div align="center">
  <a href="https://tao-amodal.github.io/"><img width="95%" alt="TAO-Amodal" src="https://tao-amodal.github.io/static/images/webpage_preview.png"></a>
   </div>

<br>

Contact: [🙋🏻‍♂️ Cheng-Yen (Wesley) Hsieh](https://wesleyhsieh0806.github.io/)

## Dataset Description
Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects.
Note that this implies TAO-Amodal also includes modal segmentation masks (visualized as the color overlays above).
The dataset spans 880 categories and is designed to assess the occlusion-reasoning capabilities of current trackers
through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).

### Dataset Download
1. Download all the annotations.
```bash
git lfs install
# SSH clone; alternatively, clone over HTTPS from https://huggingface.co/datasets/chengyenhsieh/TAO-Amodal
git clone git@hf.co:datasets/chengyenhsieh/TAO-Amodal
```

2. Download all the video frames:

You can either download the frames by following the instructions [here](https://motchallenge.net/tao_download.php) (recommended), or modify our provided [script](./download_TAO.sh) and run:
```bash
bash download_TAO.sh
```




## 📚 Dataset Structure

The dataset should be structured like this:
```bash
    ├── frames
         └── train
            ├── ArgoVerse
            ├── BDD
            ├── Charades
            ├── HACS
            ├── LaSOT
            └── YFCC100M
    ├── amodal_annotations
         ├── train/validation/test.json
         ├── train_lvis_v1.json
         └── validation_lvis_v1.json
    ├── example_output
         └── prediction.json
    └── BURST_annotations
         └── train
              └── train_visibility.json

```
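After downloading, you can quickly verify that the layout matches the tree above. The snippet below is a minimal sketch in Python; the root folder name `TAO-Amodal` and the expansion of `train/validation/test.json` into three separate files are assumptions based on the clone command above and the file descriptions below.

```python
from pathlib import Path

# Assumed dataset root; adjust if you cloned into a different folder.
root = Path("TAO-Amodal")

# Paths taken from the directory tree above. "train/validation/test.json" is
# assumed to denote three separate annotation files.
expected = [
    "frames/train",
    "amodal_annotations/train.json",
    "amodal_annotations/validation.json",
    "amodal_annotations/train_lvis_v1.json",
    "amodal_annotations/validation_lvis_v1.json",
    "example_output/prediction.json",
    "BURST_annotations/train/train_visibility.json",
]

for rel in expected:
    status = "ok" if (root / rel).exists() else "MISSING"
    print(f"[{status}] {rel}")
```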

## 📚 File Descriptions

| File Name          | Description                        | 
| ------------------ | ---------------------------------- |
| train/validation/test.json    | Formal annotation files. We use these annotations for visualization. Categories include those in [lvis](https://www.lvisdataset.org/) v0.5 as well as free-form categories. |
| train_lvis_v1.json    | We use this file to train our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander), treating each image frame as an independent sequence. Categories are aligned with those in lvis v1.0. |
| validation_lvis_v1.json    | We use this file to evaluate our [amodal-expander](https://tao-amodal.github.io/index.html#Amodal-Expander). Categories are aligned with those in lvis v1.0. |
| prediction.json | Example output json from amodal-expander. Tracker predictions should be structured like this file to be evaluated with our [evaluation toolkit](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#bar_chart-evaluation). |
| BURST_annotations/XXX.json | Modal mask annotations from the [BURST dataset](https://github.com/Ali2500/BURST-benchmark) with our heuristic visibility attributes. We provide these files for convenient visualization. |

### Annotation and Prediction Format

Our annotations are structured similarly to [TAO](https://github.com/TAO-Dataset/annotations), with some modifications.
Annotations:
```bash

Annotation file format:
{
    "info" : info,
    "images" : [image],
    "videos": [video],
    "tracks": [track],
    "annotations" : [annotation],
    "categories": [category],
    "licenses" : [license],
}
annotation: {
    "id": int,
    "image_id": int,
    "track_id": int,
    "bbox": [x,y,width,height],
    "area": float,

    # Redundant field for compatibility with COCO scripts
    "category_id": int,
    "video_id": int,

    # Other important attributes for evaluation on TAO-Amodal
    "amodal_bbox": [x,y,width,height],
    "amodal_is_uncertain": bool,
    "visibility": float, (0.~1.0)
}
image, info, video, track, category, licenses: same as TAO
```
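For a concrete example, the sketch below loads one annotation file and prints a few amodal boxes together with their visibility. It relies only on the fields documented above; the exact file path is an assumption based on the dataset structure.

```python
import json

# Assumed path, following the dataset structure described earlier.
with open("TAO-Amodal/amodal_annotations/validation.json") as f:
    data = json.load(f)

print(f'{len(data["images"])} images, '
      f'{len(data["annotations"])} annotations, '
      f'{len(data["categories"])} categories')

# Inspect a handful of annotations: amodal box, visibility, and uncertainty flag.
for ann in data["annotations"][:5]:
    x, y, w, h = ann["amodal_bbox"]
    print(f'track {ann["track_id"]} / image {ann["image_id"]}: '
          f'amodal_bbox=({x:.1f}, {y:.1f}, {w:.1f}, {h:.1f}), '
          f'visibility={ann["visibility"]:.2f}, '
          f'uncertain={ann["amodal_is_uncertain"]}')
```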

Predictions should be structured as:

```bash
[{
    "image_id" : int,
    "category_id" : int,
    "bbox" : [x,y,width,height],
    "score" : float,
    "track_id": int,
    "video_id": int
}]
```
Refer to the [TAO dataset](https://github.com/TAO-Dataset/tao/blob/master/docs/evaluation.md) instructions for further details.
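To illustrate the expected output, the snippet below writes a toy prediction file in this format; the ids, box, and score are placeholders rather than real tracker output, and the output file name is arbitrary.

```python
import json

# One dictionary per predicted box, using only the fields listed above.
# All values here are made up for illustration.
predictions = [
    {
        "image_id": 1,
        "category_id": 3,                    # id from the annotation file's "categories" list
        "bbox": [100.0, 50.0, 80.0, 120.0],  # [x, y, width, height] in pixels
        "score": 0.9,
        "track_id": 1,
        "video_id": 1,
    }
]

with open("my_predictions.json", "w") as f:
    json.dump(predictions, f)
```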

## 📺 Example Sequences
Check [here](https://tao-amodal.github.io/#TAO-Amodal) for more examples and [here](https://github.com/WesleyHsieh0806/TAO-Amodal?tab=readme-ov-file#artist-visualization) for visualization code.
[<img src="https://tao-amodal.github.io/static/images/car_and_bus.png" width="50%">](https://tao-amodal.github.io/dataset.html "tao-amodal")



## Citation 

```
@misc{hsieh2023tracking,
    title={Tracking Any Object Amodally},
    author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
    year={2023},
    eprint={2312.12433},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
