  - time-series
---

# Granite-TimeSeries-TTM-R2 Model Card

<p align="center" width="100%">
<img src="ttm_image.webp" width="600">
</p>

TinyTimeMixers (TTMs) are compact pre-trained models for Multivariate Time-Series Forecasting, open-sourced by IBM Research.
**With model sizes starting from 1M params, TTM introduces the notion of the first-ever “tiny” pre-trained models for Time-Series Forecasting.**

TTM is accepted in NeurIPS 2024.

**TTM-R2 comprises TTM variants pre-trained on larger pretraining datasets (~700M samples).** We have another set of TTM models, released under `TTM-R1` and trained on ~250M samples, which can be accessed from [here](https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1). In general, `TTM-R2` models perform better than `TTM-R1` models because they are trained on a larger pretraining dataset. However, the choice of R1 vs. R2 depends on your target data distribution, so we recommend trying both variants and picking the best one for your data.
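The R1-vs-R2 choice comes down to an empirical comparison on your own held-out data. A minimal sketch of that selection step, using placeholder arrays in place of real model outputs (`y_val`, `forecast_r1`, `forecast_r2`, and the `mse` helper are illustrative, not part of any TTM API):

```python
import numpy as np

def mse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean squared error over all series and time steps."""
    return float(np.mean((y_true - y_pred) ** 2))

# Placeholder hold-out data and forecasts; in practice these would come from
# running the TTM-R1 and TTM-R2 checkpoints on your own validation split.
y_val = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
forecast_r1 = np.array([[1.1, 2.2, 2.9], [4.3, 4.8, 6.1]])
forecast_r2 = np.array([[1.0, 2.1, 3.0], [4.1, 5.0, 5.9]])

# Score each variant and keep the one with the lower validation error.
scores = {"TTM-R1": mse(y_val, forecast_r1), "TTM-R2": mse(y_val, forecast_r2)}
best = min(scores, key=scores.get)
```

With these placeholder numbers `best` comes out as `"TTM-R2"`, but on your data either variant may win, which is exactly why the comparison is worth running.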

TTM outperforms several popular benchmarks demanding billions of parameters in zero-shot and few-shot forecasting. TTMs are lightweight