Error Downloading ARC-Easy and ARC-Challenge: NonMatchingSplitsSizesError
I am trying to download the test splits for both ARC-Easy and ARC-Challenge using the load_dataset function. However, I am running into the following error:
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=349760, num_examples=1119, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=968760, num_examples=3370, shard_lengths=None, dataset_name='parquet')}, {'expected': SplitInfo(name='test', num_bytes=375511, num_examples=1172, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='test', num_bytes=1033025, num_examples=3548, shard_lengths=None, dataset_name='parquet')}, {'expected': SplitInfo(name='validation', num_bytes=96660, num_examples=299, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=254054, num_examples=869, shard_lengths=None, dataset_name='parquet')}]
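For reference, a minimal sketch of the calls I mean (assuming default arguments; my actual script may differ slightly):
import datasets

# Minimal sketch of the failing calls; both configs raise the error above.
arc_easy = datasets.load_dataset("ai2_arc", "ARC-Easy", split="test")
arc_challenge = datasets.load_dataset("ai2_arc", "ARC-Challenge", split="test")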
This was working fine earlier this month, so I suspect a recent change introduced this error. Let me know if I can be of any more help.
Hi @Bigwolfden.
I cannot reproduce your issue:
ds = load_dataset("ai2_arc", "ARC-Challenge")
ds
DatasetDict({
    train: Dataset({
        features: ['id', 'question', 'choices', 'answerKey'],
        num_rows: 1119
    })
    test: Dataset({
        features: ['id', 'question', 'choices', 'answerKey'],
        num_rows: 1172
    })
    validation: Dataset({
        features: ['id', 'question', 'choices', 'answerKey'],
        num_rows: 299
    })
})
Maybe you could try to refresh your cache by passing:
ds = load_dataset("ai2_arc", "ARC-Challenge", download_mode="force_redownload")
@albertvillanova, I am also seeing the same error. It seems to be caused by the difference between the expected and recorded split sizes:
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=349760, num_examples=1119, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='train', num_bytes=968760, num_examples=3370, shard_lengths=None, dataset_name='parquet')}, {'expected': SplitInfo(name='test', num_bytes=375511, num_examples=1172, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='test', num_bytes=1033025, num_examples=3548, shard_lengths=None, dataset_name='parquet')}, {'expected': SplitInfo(name='validation', num_bytes=96660, num_examples=299, shard_lengths=None, dataset_name=None), 'recorded': SplitInfo(name='validation', num_bytes=254054, num_examples=869, shard_lengths=None, dataset_name='parquet')}]
I am using datasets version 2.11.0, and I can reproduce the error even with the "force_redownload" option.
Is there a way to fall back to the older version of the dataset?
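In the meantime, a possible stopgap I am considering (a sketch only; it assumes datasets >= 2.9.1, where the verification_mode argument exists, and the revision hash below is a placeholder, not a real commit):
import datasets

# Skip the split-size verification entirely (datasets >= 2.9.1):
ds = datasets.load_dataset("ai2_arc", "ARC-Challenge", verification_mode="no_checks")

# Or pin the Hub dataset repo to an older git revision;
# "<old-commit-sha>" is a placeholder for a known-good commit.
ds = datasets.load_dataset("ai2_arc", "ARC-Challenge", revision="<old-commit-sha>")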
Same issue here, with datasets version 2.12.0.
Hi @gokulr-cb and @zihengg.
You need to update your datasets library:
pip install -U datasets
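After upgrading, a quick sanity check (a minimal sketch) is to confirm the installed version and reload the dataset:
import datasets

# Confirm the upgrade took effect, then reload.
print(datasets.__version__)
ds = datasets.load_dataset("ai2_arc", "ARC-Challenge")
print(ds)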
For security reasons, we are disabling all datasets that contain a Python loading script.