---
license: apache-2.0
task_categories:
  - time-series-forecasting
tags:
  - timeseries
  - forecasting
  - benchmark
  - gifteval
size_categories:
  - 1M<n<10M
---

# GIFT-Eval Pre-training Datasets

A pretraining dataset aligned with GIFT-Eval, comprising 71 univariate and 17 multivariate datasets that span seven domains and 13 frequencies, totaling 4.5 million time series and 230 billion data points. Notably, this collection has no leakage with the GIFT-Eval train/test split, so it can be used to pretrain foundation models that can then be fairly evaluated on GIFT-Eval.
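The no-leakage property above can be illustrated with a minimal sketch (not the official pipeline): a pretraining slice of a series is leakage-free with respect to evaluation when it ends before the held-out test window begins. The `has_leakage` helper and the split boundaries below are hypothetical, for illustration only.

```python
import numpy as np

def has_leakage(pretrain_end: int, test_start: int) -> bool:
    """A pretraining slice leaks into evaluation if it extends past
    the start of the held-out test window (illustrative check only)."""
    return pretrain_end > test_start

series = np.arange(100)         # one toy univariate series of 100 points
test_start = 80                 # last 20 points reserved for benchmark-style testing
pretrain = series[:test_start]  # chronological split: pretraining data ends at test_start

print(has_leakage(len(pretrain), test_start))  # False -> safe to pretrain on this slice
```

A chronological cut like this, applied per series, is the standard way to guarantee that a pretraining corpus cannot see any benchmark test window.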

📄 Paper

🖥️ Code

📔 Blog Post

🏎️ Leaderboard

## Citation

If you find this benchmark useful, please consider citing:

@article{aksu2024giftevalbenchmarkgeneraltime,
      title={GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation},
      author={Taha Aksu and Gerald Woo and Juncheng Liu and Xu Liu and Chenghao Liu and Silvio Savarese and Caiming Xiong and Doyen Sahoo},
      journal={arXiv preprint arXiv:2410.10393},
      year={2024},
}