fail to get data from curl

#3
by Jiahaoszu - opened

Hey, thank you for your great work.

I tried the download code and got strange content. I ran:
curl -X GET "https://datasets-server.huggingface.co/splits?dataset=ctheodoris%2FGenecorpus-30M"

then I got:
{"splits":[{"dataset":"ctheodoris/Genecorpus-30M","config":"ctheodoris--Genecorpus-30M","split":"train"}],"pending":[],"failed":[]}

Is this right?
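For what it's worth, the response above is valid JSON from the datasets-server /splits endpoint: one "train" split and empty "pending"/"failed" lists means the split listing succeeded. A minimal sketch parsing that exact response in Python (the string is copied verbatim from the output above, purely for illustration):

```python
import json

# The /splits response quoted above, pasted verbatim
response_text = (
    '{"splits":[{"dataset":"ctheodoris/Genecorpus-30M",'
    '"config":"ctheodoris--Genecorpus-30M","split":"train"}],'
    '"pending":[],"failed":[]}'
)

data = json.loads(response_text)
splits = [s["split"] for s in data["splits"]]
print(splits)          # ['train']
print(data["failed"])  # []
```

An empty "failed" list is the key thing to check; it indicates nothing went wrong on the server side.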

Dear authors, thank you for your great work.
I am trying to download the dataset with dataset = load_dataset("ctheodoris/Genecorpus-30M"), but I run into the following issue:

[screenshot of the load_dataset error]
Could you also help solve this problem?

Thank you for your interest in Genecorpus-30M. You can use wget to download the dataset components. You can get the link by right-clicking the down arrow next to the dataset file size in the repository. For example:

wget "https://huggingface.co/datasets/ctheodoris/Genecorpus-30M/resolve/main/genecorpus_30M_2048.dataset/dataset.arrow"
wget "https://huggingface.co/datasets/ctheodoris/Genecorpus-30M/resolve/main/genecorpus_30M_2048.dataset/dataset_info.json"
wget "https://huggingface.co/datasets/ctheodoris/Genecorpus-30M/resolve/main/genecorpus_30M_2048.dataset/state.json"

These three files compose the .dataset directory (genecorpus_30M_2048.dataset).
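Since the three URLs above differ only in the file name, they can also be generated in a short loop instead of typed out by hand. A minimal Python sketch (the base path and file list are taken from the wget commands above) that just prints the URLs, which you can then pass to wget:

```python
# Build the download URLs for the three files that make up
# the genecorpus_30M_2048.dataset directory.
base = ("https://huggingface.co/datasets/ctheodoris/Genecorpus-30M"
        "/resolve/main/genecorpus_30M_2048.dataset")
files = ["dataset.arrow", "dataset_info.json", "state.json"]
urls = [f"{base}/{name}" for name in files]
for url in urls:
    print(url)
```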

You can then load the dataset from disk as follows:

from datasets import load_from_disk
genecorpus = load_from_disk("/path/to/genecorpus_30M_2048.dataset")

ctheodoris changed discussion status to closed

Hello,

I have tried to access the data via the code you provided above: genecorpus=load_from_disk("/path/to/genecorpus_30M_2048.dataset")
However, I have waited roughly 2 hours and the code is still running. Does it take longer than 2 hours to load the data, or do you think it is an issue with the data I have downloaded?

Thank you

Thank you for your question! The first time you load the data it takes a long time, but afterwards cached files allow the data to be accessed much more quickly. Because it is quite a large dataset, the first load could take 15-30 minutes, but 2 hours is much longer than it has taken for us in the past, though this could be affected by the resources you are running it on.

Thank you for the very quick response. Do you think it may be better to redownload the dataset?

You could, though if the download were corrupted it would more likely cause an error when loading rather than just silently take a long time. In parallel, I would leave it loading in case it finishes, because then future loads will be faster.

I have tried using the python3 interpreter instead of a Jupyter notebook, and the code worked. Thank you again for your kind and quick response.

@ctheodoris what do you think about adding this to the readme? I got the same error using hf load_dataset method and it would be nice to specify how to download it. Thanks!