Datasets: doyensahoo committed • Commit 5d4e7e4 • Parent(s): 21224de

Upload README.md with huggingface_hub

README.md ADDED
@@ -0,0 +1,37 @@
---
license: apache-2.0
task_categories:
- time-series-forecasting
tags:
- timeseries
- forecasting
- benchmark
- gifteval
size_categories:
- 1M<n<10M
---

# GIFT-Eval Pre-training Datasets

A pretraining corpus aligned with [GIFT-Eval](https://huggingface.co/datasets/Salesforce/GiftEval): 71 univariate and 17 multivariate datasets spanning seven domains and 13 frequencies, totaling 4.5 million time series and 230 billion data points. Notably, this collection has no leakage with respect to the GIFT-Eval train/test split, so it can be used to pretrain foundation models that can then be fairly evaluated on GIFT-Eval.
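
For orientation, below is a minimal sketch of loading a pre-training subset with the Hugging Face `datasets` library. The repository id and the per-record field names in the comments are assumptions (based on common GluonTS-style time series cards), not a confirmed spec for this repo; check the Hub file listing for the actual layout.

```python
# Minimal sketch, not an official loader. The repo id below is an assumption;
# adjust it to this dataset's actual path on the Hub.
from datasets import load_dataset

# Stream to avoid downloading the full multi-billion-point corpus at once.
ds = load_dataset(
    "Salesforce/GiftEvalPretrain",  # hypothetical repo id for illustration
    split="train",
    streaming=True,
)

first = next(iter(ds))
# Typical GluonTS-style fields are "item_id", "start", "freq", "target",
# but the exact schema may differ per subset.
print(first.keys())
```

From each series, a pretraining pipeline would then cut fixed-length context/target windows with whatever sampler it uses.
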

[Paper](https://arxiv.org/abs/2410.10393)

[Code](https://github.com/SalesforceAIResearch/gift-eval)

[Blog Post]()

[Leaderboard](https://huggingface.co/spaces/Salesforce/GIFT-Eval)
## Citation

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

If you find this benchmark useful, please consider citing:

```
@article{aksu2024giftevalbenchmarkgeneraltime,
  title   = {GIFT-Eval: A Benchmark For General Time Series Forecasting Model Evaluation},
  author  = {Taha Aksu and Gerald Woo and Juncheng Liu and Xu Liu and Chenghao Liu and Silvio Savarese and Caiming Xiong and Doyen Sahoo},
  journal = {arXiv preprint arXiv:2410.10393},
  year    = {2024},
}
```