Trouble with streaming

#5
by andersonbcdefg - opened

Hey, when I try the following with this dataset, it hangs for a really long time. The same call works fine with similarly large datasets like The Pile, RefinedWeb, etc.

```python
import datasets

slim = datasets.load_dataset("cerebras/SlimPajama-627B", split="train", streaming=True)
```

Any reason streaming isn't supported? Can this be fixed?

@andersonbcdefg I recommend using git lfs to download the dataset while we look into what is wrong with streaming.
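A minimal sketch of the git lfs route, assuming `git` and `git-lfs` are installed; the URL follows the standard Hugging Face dataset-repo layout, and the `--include` path is just an example pattern, not a guarantee of the repo's exact directory names:

```shell
# One-time setup of the LFS hooks on this machine
git lfs install

# Clone LFS pointer files only, so the initial clone is fast
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/cerebras/SlimPajama-627B
cd SlimPajama-627B

# Fetch the actual data files; the full dataset is very large,
# so restrict the pull to a subset with an --include pattern
git lfs pull --include="train/chunk1/*"
```

Pulling with `--include` lets you grab one chunk at a time instead of the whole 627B-token corpus in a single download.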

Hi ! I'm Quentin from HF

We're working on fixing this soon (the current implementation is inefficient for datasets with lots of files).
I'll keep you informed :)

@lhoestq any update? it seems like currently it has to download the metadata for every file in the dataset, even when streaming.

We'll do a new release of datasets today or tomorrow. It brings the loading time down to ~2 min (the time to list all 100k+ files in the dataset).

Is it this PR? https://github.com/huggingface/datasets/pull/6493

If so, `ds = load_dataset("cerebras/SlimPajama-627B", streaming=True, trust_remote_code=False)` is still really slow, as it still attempts to load the metadata for every file in each chunk.

You also need the latest version of huggingface_hub to get the speed-up:

pip install -U huggingface_hub
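To confirm the upgrade actually took effect in your environment, you can compare the installed version against a threshold at runtime. This is a generic sketch; the `at_least` helper is hypothetical, and the `0.19.0` threshold is only an illustrative placeholder, not the confirmed release that shipped the speed-up:

```python
from importlib.metadata import version  # stdlib way to read an installed package's version


def at_least(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically (ignores pre-release tags)."""
    def key(v: str) -> tuple:
        return tuple(int(part) for part in v.split(".") if part.isdigit())
    return key(installed) >= key(required)


# Intended use (requires huggingface_hub to be installed):
#   at_least(version("huggingface_hub"), "0.19.0")
print(at_least("0.25.1", "0.19.0"))  # → True
```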

Still takes 10 minutes to load the streaming dataset with huggingface_hub version 0.25.1.
