ALFRED Dataset for ABP
We provide the ALFRED dataset used for ABP, including ResNet-18 features of egocentric and surrounding views, annotations, etc. The surrounding views come from four navigable actions defined in ALFRED: RotateLeft (90°), LookUp (15°), LookDown (15°), and RotateRight (90°). The file structure is almost identical to the ALFRED dataset, so refer to ALFRED for more details.
Download the dataset
Raw RGB images with depth and object masks
Move to the root (denoted by ALFRED_ROOT below) of the ABP (or related work) repo and clone this repository with the command below.
git clone https://huggingface.co/datasets/byeonghwikim/abp_images json_2.1.0
Pre-extracted features
To skip feature extraction from RGB images, you can use our pre-extracted features.
Move to the root (denoted by ALFRED_ROOT below) of the ABP (or related work) repo and clone this repository with the commands below.
Note: this dataset is quite large (~1.6 TB).
cd $ALFRED_ROOT/data
git clone https://huggingface.co/datasets/byeonghwikim/abp_dataset json_feat_2.1.0
After downloading the dataset, you can load the surrounding-view features directly; the expected output is shown below.
>>> import torch
>>> filename = 'train/look_at_obj_in_light-AlarmClock-None-DeskLamp-301/trial_T20190907_174127_043461/feat_conv_panoramic.pt'
>>> im = torch.load(filename)  # [5, T, 512, 7, 7], where T is the length of the trajectory
>>> im.shape
torch.Size([5, T, 512, 7, 7])
The first dimension (dim 0) of the feature tensor indexes the view directions as follows.
- 0: left view (RotateLeft)
- 1: up view (LookUp)
- 2: front (egocentric) view (no action)
- 3: down view (LookDown)
- 4: right view (RotateRight)
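Given this layout, individual views can be sliced out of the loaded tensor by indexing dim 0. The sketch below uses a small random tensor in place of a real `feat_conv_panoramic.pt` file (the view-name mapping mirrors the list above; the `panorama` grouping is just an illustrative slice, not part of the dataset format):

```python
import torch

# Dummy stand-in for a loaded feat_conv_panoramic.pt tensor
# (shape [5, T, 512, 7, 7]; here T = 3 for illustration).
T = 3
im = torch.randn(5, T, 512, 7, 7)

# View-direction indices along dim 0, as documented above.
VIEWS = {"left": 0, "up": 1, "front": 2, "down": 3, "right": 4}

front = im[VIEWS["front"]]   # egocentric view: [T, 512, 7, 7]
panorama = im[[0, 2, 4]]     # horizontal left/front/right sweep: [3, T, 512, 7, 7]

print(front.shape)     # torch.Size([3, 512, 7, 7])
print(panorama.shape)  # torch.Size([3, 3, 512, 7, 7])
```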
Inspired by MOCA, we apply image augmentation to the agent's visual observations. We use two types of augmentation: (1) swapping color channels of images and (2) AutoAugment.
- No augmentation: (feat_conv_panoramic.pt)
- Swapping color channels: (feat_conv_colorSwap1_panoramic.pt, feat_conv_colorSwap2_panoramic.pt)
- AutoAugment: (feat_conv_onlyAutoAug1_panoramic.pt ~ feat_conv_onlyAutoAug4_panoramic.pt)
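One way to use these variants is to sample one feature file per trajectory during training. The snippet below enumerates the seven files listed above and picks one at random; note that random per-trajectory sampling is an assumption about usage, not necessarily the exact ABP training recipe:

```python
import random

# Pre-extracted panoramic feature variants (from the list above).
AUG_FEATURE_FILES = [
    "feat_conv_panoramic.pt",               # no augmentation
    "feat_conv_colorSwap1_panoramic.pt",    # color-channel swap 1
    "feat_conv_colorSwap2_panoramic.pt",    # color-channel swap 2
    "feat_conv_onlyAutoAug1_panoramic.pt",  # AutoAugment 1
    "feat_conv_onlyAutoAug2_panoramic.pt",  # AutoAugment 2
    "feat_conv_onlyAutoAug3_panoramic.pt",  # AutoAugment 3
    "feat_conv_onlyAutoAug4_panoramic.pt",  # AutoAugment 4
]

def sample_feature_file(rng: random.Random) -> str:
    """Pick one feature variant for a trajectory (hypothetical helper)."""
    return rng.choice(AUG_FEATURE_FILES)

chosen = sample_feature_file(random.Random(0))
print(chosen)
```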
Related work that uses this dataset
- Online Continual Learning for Interactive Instruction Following Agents
  Byeonghwi Kim*, Minhyuk Seo*, Jonghyun Choi
  ICLR 2024
- Multi-Level Compositional Reasoning for Interactive Instruction Following
  Suvaansh Bhambri*, Byeonghwi Kim*, Jonghyun Choi
  AAAI 2023 (Oral)
- Factorizing Perception and Policy for Interactive Instruction Following
  Kunal Pratap Singh*, Suvaansh Bhambri*, Byeonghwi Kim*, Roozbeh Mottaghi, Jonghyun Choi
  ICCV 2021
- Agent with the Big Picture: Perceiving Surroundings for Interactive Instruction Following
  Byeonghwi Kim, Suvaansh Bhambri, Kunal Pratap Singh, Roozbeh Mottaghi, Jonghyun Choi
  Embodied AI Workshop @ CVPR 2021
Citation
If you find this dataset useful, please cite:
@inproceedings{kim2021agent,
  author    = {Kim, Byeonghwi and Bhambri, Suvaansh and Singh, Kunal Pratap and Mottaghi, Roozbeh and Choi, Jonghyun},
  title     = {Agent with the Big Picture: Perceiving Surroundings for Interactive Instruction Following},
  booktitle = {Embodied AI Workshop @ CVPR 2021},
  year      = {2021},
}