Modalities: Tabular, Text
Formats: parquet
Size: < 1K rows
Libraries: Datasets, Dask
License:
The dataset viewer is not available for this split: the split cannot be loaded in streaming mode to extract the first rows. The root cause is a schema mismatch across the split's parquet files: some files include a pubDate column and others do not, so their rows cannot be cast to a single shared schema.
Error code:   StreamingRowsError
Exception:    CastError
Message:      Couldn't cast
reference_datetime: string
site_id: string
datetime: timestamp[us, tz=UTC]
family: string
variable: string
observation: double
crps: double
logs: double
mean: double
median: double
sd: double
quantile97.5: double
quantile02.5: double
quantile90: double
quantile10: double
to
{'reference_datetime': Value(dtype='string', id=None), 'site_id': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[us, tz=UTC]', id=None), 'family': Value(dtype='string', id=None), 'variable': Value(dtype='string', id=None), 'pubDate': Value(dtype='string', id=None), 'observation': Value(dtype='float64', id=None), 'crps': Value(dtype='float64', id=None), 'logs': Value(dtype='float64', id=None), 'mean': Value(dtype='float64', id=None), 'median': Value(dtype='float64', id=None), 'sd': Value(dtype='float64', id=None), 'quantile97.5': Value(dtype='float64', id=None), 'quantile02.5': Value(dtype='float64', id=None), 'quantile90': Value(dtype='float64', id=None), 'quantile10': Value(dtype='float64', id=None)}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 263, in query
                  pa_table = pa.concat_tables(
                File "pyarrow/table.pxi", line 5245, in pyarrow.lib.concat_tables
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 5 was different: 
              reference_datetime: string
              site_id: string
              datetime: timestamp[us, tz=UTC]
              family: string
              variable: string
              pubDate: string
              observation: double
              crps: double
              logs: double
              mean: double
              median: double
              sd: double
              quantile97.5: double
              quantile02.5: double
              quantile90: double
              quantile10: double
              vs
              reference_datetime: string
              site_id: string
              datetime: timestamp[us, tz=UTC]
              family: string
              variable: string
              observation: double
              crps: double
              logs: double
              mean: double
              median: double
              sd: double
              quantile97.5: double
              quantile02.5: double
              quantile90: double
              quantile10: double
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 101, in get_rows_content
                  pa_table = rows_index.query(offset=0, length=rows_max_number)
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 412, in query
                  return self.parquet_index.query(offset=offset, length=length)
                File "/src/libs/libcommon/src/libcommon/parquet_utils.py", line 270, in query
                  raise SchemaMismatchError("Parquet files have different schema.", err)
              libcommon.parquet_utils.SchemaMismatchError: ('Parquet files have different schema.', ArrowInvalid('Schema at index 5 was different: \nreference_datetime: string\nsite_id: string\ndatetime: timestamp[us, tz=UTC]\nfamily: string\nvariable: string\npubDate: string\nobservation: double\ncrps: double\nlogs: double\nmean: double\nmedian: double\nsd: double\nquantile97.5: double\nquantile02.5: double\nquantile90: double\nquantile10: double\nvs\nreference_datetime: string\nsite_id: string\ndatetime: timestamp[us, tz=UTC]\nfamily: string\nvariable: string\nobservation: double\ncrps: double\nlogs: double\nmean: double\nmedian: double\nsd: double\nquantile97.5: double\nquantile02.5: double\nquantile90: double\nquantile10: double'))
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 322, in compute
                  compute_first_rows_from_parquet_response(
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 113, in compute_first_rows_from_parquet_response
                  return create_first_rows_response(
                File "/src/libs/libcommon/src/libcommon/viewer_utils/rows.py", line 134, in create_first_rows_response
                  rows_content = get_rows_content(rows_max_number)
                File "/src/services/worker/src/worker/job_runners/split/first_rows.py", line 108, in get_rows_content
                  raise SplitParquetSchemaMismatchError(
              libcommon.exceptions.SplitParquetSchemaMismatchError: Split parquet files being processed have different schemas. Ensure all files have identical column names.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/utils.py", line 126, in get_rows_or_raise
                  return get_rows(
                File "/src/services/worker/src/worker/utils.py", line 64, in decorator
                  return func(*args, **kwargs)
                File "/src/services/worker/src/worker/utils.py", line 103, in get_rows
                  rows_plus_one = list(itertools.islice(ds, rows_max_number + 1))
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 1388, in __iter__
                  for key, example in ex_iterable:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 282, in __iter__
                  for key, pa_table in self.generate_tables_fn(**self.kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 94, in _generate_tables
                  yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 74, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2194, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              reference_datetime: string
              site_id: string
              datetime: timestamp[us, tz=UTC]
              family: string
              variable: string
              observation: double
              crps: double
              logs: double
              mean: double
              median: double
              sd: double
              quantile97.5: double
              quantile02.5: double
              quantile90: double
              quantile10: double
              to
              {'reference_datetime': Value(dtype='string', id=None), 'site_id': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[us, tz=UTC]', id=None), 'family': Value(dtype='string', id=None), 'variable': Value(dtype='string', id=None), 'pubDate': Value(dtype='string', id=None), 'observation': Value(dtype='float64', id=None), 'crps': Value(dtype='float64', id=None), 'logs': Value(dtype='float64', id=None), 'mean': Value(dtype='float64', id=None), 'median': Value(dtype='float64', id=None), 'sd': Value(dtype='float64', id=None), 'quantile97.5': Value(dtype='float64', id=None), 'quantile02.5': Value(dtype='float64', id=None), 'quantile90': Value(dtype='float64', id=None), 'quantile10': Value(dtype='float64', id=None)}
              because column names don't match
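
To reproduce the diagnosis locally, one can compare the schema of each parquet file and group files by their column names. A minimal sketch, assuming a local copy of the files (the data directory is illustrative):

```python
import pyarrow.parquet as pq
from pathlib import Path

# Group files by their column names to spot the odd ones out.
by_schema = {}
for path in Path("data").glob("**/*.parquet"):
    cols = tuple(pq.read_schema(path).names)  # reads only the footer, so this is cheap
    by_schema.setdefault(cols, []).append(path)

for cols, paths in by_schema.items():
    print(f"{len(paths)} file(s) with columns: {list(cols)}")
```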

Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
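
For this particular failure, the fix is to give every parquet file in the split identical columns before re-uploading. A minimal sketch, assuming a local copy of the files and that a null-filled pubDate is acceptable in the files that lack it (the data directory is illustrative, not part of the dataset):

```python
import pyarrow as pa
import pyarrow.parquet as pq
from pathlib import Path

for path in Path("data").glob("**/*.parquet"):  # illustrative location
    table = pq.read_table(path)
    if "pubDate" not in table.column_names:
        # Insert a null-filled string column after `variable`, matching the
        # position pubDate occupies in the files that already carry it.
        idx = table.schema.get_field_index("variable") + 1
        table = table.add_column(idx, "pubDate", pa.nulls(len(table), pa.string()))
        pq.write_table(table, path)
```

Rewriting the minority of files in place keeps the majority schema untouched, so downstream readers that already expect pubDate keep working.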

Snapshot of the Ecological Forecasting Initiative NEON Forecasting Challenge

Includes probabilistic forecasts, observations, and skill scores (CRPS and log score) for all submitted forecasts across the five challenge themes.
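
Until the schemas are harmonized, the data can still be loaded by reading each parquet file individually and concatenating with pandas, which aligns columns by name and fills the missing pubDate values with NaN. A minimal sketch; the repository id below is a placeholder, not the real one:

```python
import pandas as pd
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
# "<user>/<dataset>" is a placeholder -- substitute this dataset's actual repo id.
files = fs.glob("datasets/<user>/<dataset>/**/*.parquet")
frames = [pd.read_parquet(fs.open(path)) for path in files]
# pd.concat aligns columns by name, so files without pubDate get NaN there.
df = pd.concat(frames, ignore_index=True)
```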

Downloads last month: 568