Dataset Viewer issue
Not sure if there's some mistake in organizing the data. The dataset viewer is not working: it is able to display the validation and test splits, but it fails for the train split with the error below.
Error details:
Error code: StreamingRowsError
Exception: KeyError
Message: 'label'
Traceback: Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/split/first_rows_from_streaming.py", line 584, in compute_first_rows_response
    rows = get_rows(
  File "/src/services/worker/src/worker/job_runners/split/first_rows_from_streaming.py", line 179, in decorator
    return func(*args, **kwargs)
  File "/src/services/worker/src/worker/job_runners/split/first_rows_from_streaming.py", line 235, in get_rows
    rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 941, in __iter__
    yield _apply_feature_types_on_example(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 698, in _apply_feature_types_on_example
    encoded_example = features.encode_example(example)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1809, in encode_example
    return encode_nested_example(self, example)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1212, in encode_nested_example
    {
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1212, in <dictcomp>
    {
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 302, in zip_dict
    yield key, tuple(d[key] for d in dicts)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 302, in <genexpr>
    yield key, tuple(d[key] for d in dicts)
KeyError: 'label'
Would appreciate any help/suggestions!
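For what it's worth, a minimal streaming check like the sketch below (the repo id is a placeholder for my actual repository) should reveal which splits actually yield a "label" column:

from datasets import load_dataset

# Placeholder repo id; replace with the actual dataset repository.
repo_id = "username/dataset-name"

for split in ["train", "validation", "test"]:
    ds = load_dataset(repo_id, split=split, streaming=True)
    try:
        # Inspect the keys of the first example of each split; the failing
        # split is the one whose examples are missing the "label" key.
        example = next(iter(ds))
        print(split, sorted(example.keys()))
    except KeyError as err:
        print(split, "raises KeyError:", err)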
Thanks for reporting, @tanmaykm, I'm having a look at it...
Please note that you have created a dataset that must be loaded using the "imagefolder" builder:
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/path/to/your/repo/folder")
Datasets of this kind are not yet supported by the dataset viewer. We are working on supporting them: see https://github.com/huggingface/datasets/pull/5331
- Maybe @polinaeterna can give you some more details
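For context, the "imagefolder" builder infers the "label" column from class subdirectories (or from a metadata file). A rough sketch of the layout it expects, with example class names, looks like this:

# Expected imagefolder layout (class names below are just examples):
#   /path/to/your/repo/folder/
#     train/class_a/xxx.jpg
#     train/class_b/yyy.jpg
#     validation/class_a/zzz.jpg
#     test/class_b/www.jpg
# If the train images are not nested under class folders (or a metadata
# file for that split has no label column), imagefolder may not produce
# a "label" column for it, which would match the KeyError above.
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="/path/to/your/repo/folder")
print(ds["train"].features)  # should include a ClassLabel "label" feature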
In the meantime, you can push your dataset to a new repository on the Hub in Parquet format instead, by using the .push_to_hub
method, as described in our docs: https://huggingface.co/docs/datasets/image_dataset#upload-dataset-to-the-hub
ds = load_dataset("imagefolder", data_dir="/path/to/your/repo/folder")
ds.push_to_hub("tanmaykm/indian-dance-forms-parquet")
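Once the push finishes, the Parquet version should be loadable directly by its repo id (and the viewer should be able to render all of its splits), e.g.:

ds = load_dataset("tanmaykm/indian-dance-forms-parquet")
print(ds)  # should list the train, validation and test splits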