Datasets:
Expose a Parquet version so that I can do queries directly without downloading the dataset locally
The current version is parquet - does that work?
The datasets server does not process this dataset for now due to its size. The dataset only has one split, so we would get one big sharded parquet export instead of one parquet file per language. @lhoestq: we can try to put this dataset on an allow list; hopefully we will be able to process it.
We will try to run it on the datasets server: https://github.com/huggingface/datasets-server/pull/983
We put the dataset on the allow list, but it still cannot be processed because the datasets server does not support converting gated datasets to parquet when the gate requires filling extra fields.
waiting for https://github.com/huggingface/moon-landing/pull/6153 (internal link)
Update: (internal) https://github.com/huggingface/moon-landing/pull/6481
We now get a JobManagerExceededMaximumDurationError. Maybe we should release the "zombie" detector for the "datasets allow list", since this job will obviously run for a long time. cc @albertvillanova @lhoestq, what do you think?
The dataset is made of parquet files - let me implement the parquet copy to refs/convert/parquet and we'll be good :)
PS: the "zombie" detector detects jobs that are marked as started in the queue but are not running anymore. The JobManagerExceededMaximumDurationError here comes from the maximum job duration limit instead.
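The distinction above is worth spelling out: the maximum-duration limit is a simple elapsed-time check on a running job, not a liveness check. A sketch of such a guard; the names, limit, and exception are hypothetical, not the actual datasets-server implementation:

```python
# Hypothetical sketch of a maximum-job-duration guard, the kind of
# check behind an error like JobManagerExceededMaximumDurationError.
# The limit and names are made up, not the datasets-server code.
import time
from typing import Optional

MAX_JOB_DURATION_SECONDS = 20 * 60  # hypothetical 20-minute limit


class ExceededMaximumDurationError(Exception):
    pass


def check_duration(started_at: float, now: Optional[float] = None) -> None:
    """Raise if the job has been running longer than the allowed limit."""
    now = time.time() if now is None else now
    if now - started_at > MAX_JOB_DURATION_SECONDS:
        raise ExceededMaximumDurationError(
            f"job ran longer than {MAX_JOB_DURATION_SECONDS}s"
        )


# A job that started 30 minutes ago trips the guard:
try:
    check_duration(started_at=0.0, now=30 * 60)
except ExceededMaximumDurationError as e:
    print("killed:", e)
```

A zombie detector, by contrast, would look for queue entries marked "started" whose worker heartbeat has stopped.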
> The dataset is made of parquet files - let me implement the parquet copy to refs/convert/parquet and we'll be good :)
Yes!!!!
Done! Thanks a lot @lhoestq for the improvements you made to support big datasets like this one!
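For anyone who wants to fetch the converted files directly: the Hub serves files from any revision via its `resolve` endpoint, and the `refs/convert/parquet` branch name just needs percent-encoding in the URL. A stdlib-only sketch; the shard path below is hypothetical, the real layout under the branch may differ:

```python
# Sketch: building the "resolve" URL for a file on the
# refs/convert/parquet branch, where the datasets server stores its
# parquet export. The shard path here is a hypothetical example.
from urllib.parse import quote


def parquet_file_url(repo_id: str, filename: str,
                     revision: str = "refs/convert/parquet") -> str:
    # Branch names containing "/" must be percent-encoded in the URL.
    return (f"https://huggingface.co/datasets/{repo_id}"
            f"/resolve/{quote(revision, safe='')}/{filename}")


url = parquet_file_url("bigcode/the-stack", "default/train/0000.parquet")
print(url)
```

Note that for a gated dataset like this one, actually downloading from that URL still requires an authenticated request.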
https://huggingface.co/datasets/bigcode/the-stack/viewer/bigcode--the-stack/train?p=1234567
Yay! great job team!