
# HR-Extreme Dataset

## Overview

HR-Extreme is a high-resolution dataset designed to evaluate how well state-of-the-art models predict extreme weather events. It covers 17 types of extreme weather events from 2020, derived from High-Resolution Rapid Refresh (HRRR) data, and is intended for weather-forecasting researchers working with both physics-based and deep learning methods. Code and documentation are available on [GitHub](https://github.com/HuskyNian/HR-Extreme).

## Dataset Structure

The dataset is divided into two main folders:

- `202001_202006`: data from January 2020 through June 2020.
- `202007_202012`: data from July 2020 through December 2020.

Each folder stores the data in the WebDataset format, following Hugging Face's recommendations. Every 10 `.npz` files are aggregated into a single `.tar` shard, named sequentially as `i.tar` (e.g., `0001.tar`).
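A shard in this layout is just a tar archive of `.npz` members, so it can be read with the standard library plus NumPy. The sketch below round-trips a toy shard to illustrate the format; the array name `inputs` and the member naming scheme are illustrative assumptions, not the dataset's documented schema.

```python
import io
import tarfile

import numpy as np

def write_shard(path, samples):
    """Pack a list of dicts of arrays into a WebDataset-style .tar shard,
    one .npz member per sample (mirrors the 10-files-per-tar layout)."""
    with tarfile.open(path, "w") as tar:
        for i, arrays in enumerate(samples):
            buf = io.BytesIO()
            np.savez(buf, **arrays)
            buf.seek(0)
            info = tarfile.TarInfo(name=f"{i:06d}.npz")
            info.size = len(buf.getbuffer())
            tar.addfile(info, buf)

def read_shard(path):
    """Yield (member_name, dict of arrays) for each .npz member in a shard."""
    with tarfile.open(path, "r") as tar:
        for member in tar.getmembers():
            if member.name.endswith(".npz"):
                data = np.load(io.BytesIO(tar.extractfile(member).read()))
                yield member.name, {k: data[k] for k in data.files}

# Round-trip a toy shard; "inputs" is a placeholder field name.
write_shard("toy.tar", [{"inputs": np.zeros((2, 3), dtype=np.float32)}])
for name, arrays in read_shard("toy.tar"):
    print(name, arrays["inputs"].shape)
```

The same `read_shard` loop works on a downloaded `0001.tar`, streaming samples one at a time without unpacking the archive to disk.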

## Usage

To construct the dataset, use the scripts provided in the GitHub repository. The main script, `make_datasetall.py`, takes a start and end date (`YYYYMMDD`) and generates an index file for that range:

```shell
python make_datasetall.py 20200101 20200630
```
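The two positional arguments are an inclusive `YYYYMMDD` date range. As a sanity check on a range before running the script, a small helper (hypothetical, not part of the repository) can enumerate the days it covers:

```python
from datetime import date, timedelta

def days_in_range(start: str, end: str):
    """Expand an inclusive YYYYMMDD start/end pair into a list of YYYYMMDD strings."""
    d0 = date(int(start[:4]), int(start[4:6]), int(start[6:]))
    d1 = date(int(end[:4]), int(end[4:6]), int(end[6:]))
    return [(d0 + timedelta(days=i)).strftime("%Y%m%d")
            for i in range((d1 - d0).days + 1)]

print(len(days_in_range("20200101", "20200630")))  # 182 days (2020 is a leap year)
```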