Update README.md
README.md (changed)
## Benchmark Highlights:

- TTM outperforms pre-trained GPT4TS (NeurIPS 23) by 7-12% in few-shot forecasting.
- TTM outperforms pre-trained LLMTime (NeurIPS 23) by 24% in zero-shot forecasting.
- TTM outperforms pre-trained SimMTM (NeurIPS 23) by 17% in few-shot forecasting.
- TTM drastically reduces the compute needs compared to LLM-TS pre-training methods, with a 14X cut in learnable parameters, 106X fewer total parameters, and substantial reductions in fine-tuning time (65X), inference time (54X), and memory usage (27X).
- Zero-shot results of TTM often surpass the few-shot results of many SOTA approaches, including PatchTST (ICLR 23), PatchTSMixer (KDD 23), TimesNet (ICLR 23), DLinear (AAAI 23) and FEDformer (ICML 22).
- TTM (1024-96, released in this model card) also outperforms pre-trained MOIRAI on FL = 96 by ...
- TTM quick fine-tuning also outperforms the hard statistical baselines (statistical ensemble and S-Naive) on the M4-hourly dataset, which pre-trained TS models find hard to outperform.

## Model Description

… only 3-6 hours using 6 A100 GPUs, as opposed to several days or weeks in traditional approaches.

## Model Releases (along with the branch name where the models are stored):

- 512-96: Given the last 512 time-points (i.e. context length), this model can forecast up to the next 96 time-points (i.e. forecast length). Recommended for hourly and minutely forecasts (e.g. resolutions of 5 min, 10 min, 15 min, 1 hour) (branch name: main). See the loading sketch after this list.

- 1024-96: Given the last 1024 time-points (i.e. context length), this model can forecast up to the next 96 time-points (i.e. forecast length). Recommended for hourly and minutely forecasts (e.g. resolutions of 5 min, 10 min, 15 min, 1 hour) (branch name: 1024-96-v1).

- Stay tuned for more models!
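
Each release lives on its own branch, so the branch name doubles as the Hugging Face `revision` when loading a checkpoint. Below is a minimal loading-and-forecasting sketch, not taken from this README: the `tsfm_public` import path, the `TinyTimeMixerForPrediction` class, the placeholder repo id, the `past_values`/`prediction_outputs` names, and the tensor shapes are all assumptions.

```python
# Minimal sketch: load a TTM release by branch name and run a zero-shot
# forecast. Class name, import path, repo id, and I/O names are assumed,
# not confirmed by this README.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# The branch selects the release: "main" -> 512-96, "1024-96-v1" -> 1024-96.
model = TinyTimeMixerForPrediction.from_pretrained(
    "<hf-org>/<ttm-repo>",  # placeholder repo id
    revision="main",        # 512-96 model
)
model.eval()

# 512-96 model: feed the last 512 time-points per channel, read 96 back.
# Assumed input shape: (batch, context_length, num_channels).
context = torch.randn(1, 512, 1)  # stand-in for real history
with torch.no_grad():
    output = model(past_values=context)
forecast = output.prediction_outputs  # expected shape: (1, 96, 1)
print(forecast.shape)
```

Switching `revision` to `1024-96-v1` would load the longer-context model, in which case the history tensor needs 1024 time-points instead of 512.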