…can be easily fine-tuned for your target data. Refer to our [paper](https://arxi

## Benchmark Highlights:

- TTM outperforms pre-trained *GPT4TS (NeurIPS 23) by 7-12% in few-shot forecasting*.
- TTM outperforms pre-trained *LLMTime (NeurIPS 23) by 24% in zero-shot forecasting*.
- TTM outperforms pre-trained *SimMTM (NeurIPS 23) by 17% in few-shot forecasting*.
- Zero-shot results of TTM often surpass the *few-shot results of many SOTA approaches*, including PatchTST (ICLR 23), PatchTSMixer (KDD 23), TimesNet (ICLR 23), DLinear (AAAI 23), and FEDFormer (ICML 22).
- TTM (1024-96, released in this model card) also outperforms *pre-trained MOIRAI* on FL = 96 by ...
- TTM quick fine-tuning also outperforms the hard statistical baselines (Statistical ensemble and S-Naive) on the M4-hourly dataset, which pre-trained TS models find hard to beat.
- TTM takes only a *few seconds for zero-shot inference* and a *few minutes for fine-tuning* on a single-GPU machine.
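For context on the statistical baselines above, S-Naive (seasonal naive) simply repeats the last observed season of the context window. A minimal sketch, not from this model card; the choice of `season_length=24` for hourly data and the toy sine series are illustrative assumptions:

```python
import numpy as np

def seasonal_naive_forecast(context, horizon, season_length):
    """S-Naive baseline: repeat the last fully observed season of the
    context window out to the requested forecast horizon."""
    last_season = np.asarray(context)[-season_length:]
    reps = -(-horizon // season_length)  # ceiling division
    return np.tile(last_season, reps)[:horizon]

# Toy hourly-style series with daily seasonality, using the TTM 1024-96
# convention: context length 1024, forecast length 96.
t = np.arange(1024)
series = np.sin(2 * np.pi * t / 24)
forecast = seasonal_naive_forecast(series, horizon=96, season_length=24)
```

On a purely periodic series like this, the S-Naive forecast is exact; the point of the benchmark claim is that on real data such as M4-hourly this baseline is already hard for pre-trained TS models to beat.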
## Model Description