# TAO-Amodal Dataset
Official Source for Downloading the TAO-Amodal Dataset.
Project Page | Code | Paper Link | Citations

Contact: Cheng-Yen (Wesley) Hsieh
## Dataset Description
Our dataset augments the TAO dataset with amodal bounding box annotations for fully invisible, out-of-frame, and occluded objects. Note that this means TAO-Amodal also includes modal segmentation masks (visualized as color overlays on the project page). The dataset spans 880 categories and is designed to assess the occlusion reasoning capabilities of current trackers through the paradigm of Tracking Any Object with Amodal perception (TAO-Amodal).
## Dataset Download
- Download all the annotations:

```bash
git lfs install
git clone https://huggingface.co/datasets/chengyenhsieh/TAO-Amodal
```
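Alternatively, the repository can be fetched programmatically. A minimal sketch using the `huggingface_hub` Python client (assuming it is installed, e.g. via `pip install huggingface_hub`):

```python
# Sketch: download the dataset repo without git-lfs,
# using the huggingface_hub client.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="chengyenhsieh/TAO-Amodal",
    repo_type="dataset",
)
print(f"Dataset downloaded to {local_dir}")
```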
- Download all the video frames:
You can either download the frames following the instructions here (recommended), or modify our provided script and run:

```bash
bash download_TAO.sh
```
## Dataset Structure
The dataset should be structured like this:
```
├── frames
│   └── train
│       ├── ArgoVerse
│       ├── BDD
│       ├── Charades
│       ├── HACS
│       ├── LaSOT
│       └── YFCC100M
├── amodal_annotations
│   ├── train/validation/test.json
│   ├── train_lvis_v1.json
│   └── validation_lvis_v1.json
├── example_output
│   └── prediction.json
└── BURST_annotations
    └── train
        └── train_visibility.json
```
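After downloading, a quick sanity check of the layout can catch missing pieces early. A minimal sketch, assuming the dataset root is a local `TAO-Amodal` directory (a hypothetical path; adjust to your setup):

```python
# Sketch: verify the expected top-level layout after download.
# TAO_AMODAL_ROOT is a hypothetical path; adjust to your setup.
from pathlib import Path

TAO_AMODAL_ROOT = Path("TAO-Amodal")
expected = [
    "frames/train",
    "amodal_annotations",
    "example_output",
    "BURST_annotations/train",
]
for rel in expected:
    path = TAO_AMODAL_ROOT / rel
    print(f"{'OK' if path.is_dir() else 'MISSING'}: {path}")
```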
## File Descriptions
| File Name | Description |
|---|---|
| `train/validation/test.json` | Formal annotation files. We use these annotations for visualization. Categories include those in LVIS v0.5 plus free-form categories. |
| `train_lvis_v1.json` | Used to train our amodal-expander, treating each image frame as an independent sequence. Categories are aligned with those in LVIS v1.0. |
| `validation_lvis_v1.json` | Used to evaluate our amodal-expander. Categories are aligned with those in LVIS v1.0. |
| `prediction.json` | Example output JSON from the amodal-expander. Tracker predictions should be structured like this file to be evaluated with our evaluation toolkit. |
| `BURST_annotations/XXX.json` | Modal mask annotations from the BURST dataset with our heuristic visibility attributes, provided for convenient visualization. |
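The annotation files are plain JSON, so they can be inspected directly. A minimal sketch of peeking into one of them (the file path is an assumption based on the layout above):

```python
# Sketch: peek into an annotation file and report basic statistics.
import json

with open("TAO-Amodal/amodal_annotations/train.json") as f:
    data = json.load(f)

print(len(data["videos"]), "videos")
print(len(data["tracks"]), "tracks")
print(len(data["annotations"]), "annotations")
print(len(data["categories"]), "categories")
```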
## Annotation and Prediction Format
Our annotations follow the structure of TAO, with some modifications.

Annotation file format:
```
{
    "info": info,
    "images": [image],
    "videos": [video],
    "tracks": [track],
    "annotations": [annotation],
    "categories": [category],
    "licenses": [license],
}
```
```
annotation: {
    "id": int,
    "image_id": int,
    "track_id": int,
    "bbox": [x, y, width, height],
    "area": float,

    # Redundant field for compatibility with COCO scripts
    "category_id": int,
    "video_id": int,

    # Other important attributes for evaluation on TAO-Amodal
    "amodal_bbox": [x, y, width, height],
    "amodal_is_uncertain": bool,
    "visibility": float,  # in the range [0.0, 1.0]
}
```
`image`, `info`, `video`, `track`, `category`, `licenses`: same as in TAO.
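To illustrate how the amodal fields are typically consumed, here is a hedged sketch that collects heavily occluded objects; the 0.5 visibility threshold and the file path are illustrative assumptions, not part of the official toolkit:

```python
# Sketch: collect heavily occluded objects using the amodal fields.
# The 0.5 visibility threshold is an arbitrary choice for illustration.
import json

with open("TAO-Amodal/amodal_annotations/validation.json") as f:
    data = json.load(f)

occluded = [
    ann for ann in data["annotations"]
    if ann["visibility"] < 0.5 and not ann["amodal_is_uncertain"]
]
print(f"{len(occluded)} of {len(data['annotations'])} annotations "
      f"are less than half visible")

# The amodal box can extend beyond the modal (visible) box.
if occluded:
    ann = occluded[0]
    print("modal bbox: ", ann["bbox"])
    print("amodal bbox:", ann["amodal_bbox"])
```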
Predictions should be structured as:
```
[{
    "image_id": int,
    "category_id": int,
    "bbox": [x, y, width, height],
    "score": float,
    "track_id": int,
    "video_id": int,
}]
```
Refer to the instructions of the TAO dataset for further details.
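As a concrete illustration of the prediction format, here is a minimal sketch that serializes placeholder tracker output into a `prediction.json` file (all values are made up):

```python
# Sketch: write tracker output in the expected prediction format.
# The single entry below uses placeholder values.
import json

predictions = [
    {
        "image_id": 1,
        "category_id": 805,  # placeholder category id
        "bbox": [100.0, 50.0, 80.0, 120.0],  # [x, y, width, height]
        "score": 0.9,
        "track_id": 1,
        "video_id": 1,
    }
]

with open("prediction.json", "w") as f:
    json.dump(predictions, f)
```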
## Example Sequences
Check here for more examples and here for visualization code.
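If you want a quick local preview without the official visualization code, the sketch below overlays a modal and an amodal box on a single frame with matplotlib; `frame_path` and the helper itself are hypothetical, not part of the repository:

```python
# Sketch: overlay modal and amodal boxes for one annotation.
# frame_path is hypothetical; look up real paths via data["images"].
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from PIL import Image

def draw_boxes(frame_path, modal_bbox, amodal_bbox):
    fig, ax = plt.subplots()
    ax.imshow(Image.open(frame_path))
    for (x, y, w, h), color, label in [
        (modal_bbox, "lime", "modal"),
        (amodal_bbox, "red", "amodal"),
    ]:
        ax.add_patch(patches.Rectangle(
            (x, y), w, h, fill=False, edgecolor=color, label=label))
    ax.legend()
    plt.show()
```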
## Citation
```bibtex
@misc{hsieh2023tracking,
      title={Tracking Any Object Amodally},
      author={Cheng-Yen Hsieh and Tarasha Khurana and Achal Dave and Deva Ramanan},
      year={2023},
      eprint={2312.12433},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```