Datasets: loading of multiple shards
@lhoestq could you please review this PR? If things seem good, I will do the following:
- Add a dataset card.
- Contact the dataset authors for further reviews.
It would be great if you could load the dataset locally to check that things work as expected.
You can use this Colab Notebook (https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing) to see how to visualize the dataset.
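For reference, loading it locally should boil down to something like this (the path below is a placeholder for the local script folder, or the Hub repo id once pushed):

```python
from datasets import load_dataset

# Placeholder path: point it at the local folder containing the loading script,
# or at the Hub repo id after the PR is merged.
ds = load_dataset("./nyu_depth_v2")
print(ds)              # splits and number of examples
print(ds["train"][0])  # first sample (image and depth map)
```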
Here is how I generated the multiple sharded archives:
- I first downloaded the TAR archive using the URL given here: https://github.com/dwofk/fast-depth#requirements. This is the URL also used in TFDS: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/datasets/nyu_depth_v2/nyu_depth_v2_dataset_builder.py#L24.
- I then un-tarred the archive and prepared two separate TAR archives for the `train` and `val` splits:
```python
import tarfile

with tarfile.open("train.tar.gz", "w:gz") as t:
    t.add("train")

with tarfile.open("val.tar.gz", "w:gz") as t:
    t.add("val")
```
^ Assuming we're in the `nyudepthv2` directory (which is what you get after untarring the original TAR archive).
Then I used the `tarproc` utility (https://github.com/tmbdev-archive/tarproc) to create multiple shard archives:
```bash
tarsplit train.tar.gz --max-size 3e9 --maxshards 16 -o train
tarsplit val.tar.gz --maxshards 4 -o val
```
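As a quick sanity check (not required by `tarsplit`, just something one can run afterwards), the number of `.h5` files per shard can be counted like so:

```python
import tarfile
from glob import glob

# Count the .h5 samples in each train shard
# (shard names follow the train-NNNNNN.tar pattern used in _URLS below).
for shard in sorted(glob("train-*.tar")):
    with tarfile.open(shard) as t:
        n = sum(1 for m in t.getmembers() if m.name.endswith(".h5"))
        print(shard, n)
```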
Awesome! Thanks for the tip on `tarproc` :)
Is it ok to require the user to install `h5py` to load the dataset? We may need to add it to the viewer @severo (it's an important one for loading vision datasets so I think it's fine)
Regarding the code, you can use relative paths here:
```python
_URLS = {
    "train": [f"data/train-{i:06d}.tar" for i in range(12)],
    "val": [f"data/val-{i:06d}.tar" for i in range(2)],
}
```
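For context, the downloaded shards can then be streamed in `_split_generators` along these lines (a rough sketch, not necessarily the exact code in this PR):

```python
# Sketch of a method inside the datasets.GeneratorBasedBuilder subclass.
def _split_generators(self, dl_manager):
    # Relative paths in _URLS are resolved against the dataset repository.
    archives = dl_manager.download(_URLS)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"archives": [dl_manager.iter_archive(a) for a in archives["train"]]},
        ),
        datasets.SplitGenerator(
            name=datasets.Split.VALIDATION,
            gen_kwargs={"archives": [dl_manager.iter_archive(a) for a in archives["val"]]},
        ),
    ]
```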
Other than that, it all looks good to me! :D
Thanks for reviewing!
> Is it ok to require the user to install `h5py` to load the dataset? We may need to add it to the viewer @severo (it's an important one for loading vision datasets so I think it's fine)
I am on the same page, i.e., no harm in having `h5py` as a requirement.
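(In case it's useful for anyone landing here: each sample is a single HDF5 file, which is why `h5py` is needed. A minimal read looks roughly like this; the sample path is made up, and the `rgb`/`depth` key names follow the TFDS builder linked above.)

```python
import h5py
import numpy as np

# Hypothetical sample path inside an extracted shard.
with h5py.File("train/study_0001/00001.h5", "r") as f:
    rgb = np.array(f["rgb"])      # RGB image array
    depth = np.array(f["depth"])  # per-pixel depth map
print(rgb.shape, depth.shape)
```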
> Regarding the code, you can use relative paths here:
Just pushed the changes. Keeps the code cleaner.
I am currently generating the info file with `datasets-cli`. After that's in, I guess we're good to merge?
Yup! Just double-check the generated info to make sure you have the correct number of examples in each split :)
`h5py` has always been installed as a dependency.
You're the best <3
> Yup! Just double-check the generated info to make sure you have the correct number of examples in each split :)
@lhoestq just added the modified README and it looks okay to me.
How do I generate the INFO json file? Like this one: https://huggingface.co/datasets/scene_parse_150/blob/main/dataset_infos.json
You're all set :)
You no longer need the JSON file; its info is redundant with the README. Feel free to merge if it's good for you.
Alrighty, captain! Merging away.