Update README.md
README.md
CHANGED
@@ -12,20 +12,19 @@ can be easily fine-tuned for your target data. Refer to our [paper](https://arxi

**Note that zero-shot, fine-tuning, and inference tasks using TTM can easily be executed on a single-GPU machine, or even on a laptop!**
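To illustrate how lightweight a zero-shot run is, here is a minimal sketch. It assumes the `tsfm_public` (granite-tsfm) toolkit is installed; the `TinyTimeMixerForPrediction` entry point, the `ibm/TTM` repo id, and the `prediction_outputs` field are assumptions to be checked against the usage snippet in this card, not confirmed API.

```python
# Minimal zero-shot forecasting sketch.
# Assumptions (not confirmed by this card): the granite-tsfm toolkit
# (`tsfm_public`) is installed, it exposes `TinyTimeMixerForPrediction`,
# the output carries a `prediction_outputs` tensor, and "ibm/TTM" is a
# placeholder for the checkpoint id released in this card.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

CONTEXT_LENGTH = 1024  # TTM 1024-96 consumes 1024 past time points
NUM_CHANNELS = 3       # number of variates in your multivariate series

model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM")  # placeholder id
model.eval()

# past_values: (batch, context_length, num_channels). Replace the random
# tensor with a scaled history window from your own data.
past_values = torch.randn(1, CONTEXT_LENGTH, NUM_CHANNELS)

with torch.no_grad():
    output = model(past_values=past_values)

# Forecast tensor of shape (batch, forecast_length, num_channels), 96 steps here.
forecast = output.prediction_outputs  # attribute name assumed
print(forecast.shape)
```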

## Benchmark Highlights:

- TTM outperforms pre-trained GPT4TS (NeurIPS 23) by 7-12% in few-shot forecasting.
- TTM outperforms pre-trained LLMTime (NeurIPS 23) by 24% in zero-shot forecasting.
- TTM outperforms pre-trained SimMTM (NeurIPS 23) by 17% in few-shot forecasting.
- These gains come with ...X fewer learnable parameters, 106X fewer total parameters, and substantial reductions in fine-tuning (65X), inference time (54X), and memory usage (27X).
- Zero-shot results of TTM often surpass the few-shot results of many SOTA approaches, including PatchTST (ICLR 23), PatchTSMixer (KDD 23), TimesNet (ICLR 23), DLinear (AAAI 23), and FEDFormer (ICML 22).
- TTM (1024-96, released in this model card) also outperforms pre-trained MOIRAI at forecast length (FL) = 96 by ...
- TTM quick fine-tuning also outperforms the hard statistical baselines (Statistical ensemble and S-Naive) on the M4-hourly dataset, which pre-trained TS models have found hard to outperform.
- TTM takes only a few seconds for zero-shot inference and a few minutes for fine-tuning on a single-GPU machine.


## Model Description
@@ -61,8 +60,8 @@ TTM-1 currently supports 2 modes:

- Finetuned forecasting: Finetune the pre-trained model with your target data to further improve the forecast.

  **Since TTM models are extremely small and fast, it is very easy in practice to finetune the model with your available target data in a few minutes and get more accurate forecasts.** A rough sketch of such a run is shown below.
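The following is a hedged sketch of a quick, head-only fine-tuning run using the Hugging Face `Trainer`. The `backbone` attribute used for freezing, the `past_values`/`future_values` field names, and the loss behavior are assumptions based on common usage of this model family; the synthetic dataset is only a stand-in for your own preprocessed windows.

```python
# Hedged sketch of a quick, head-only fine-tuning run with the Hugging Face
# Trainer. Assumptions (verify against the official notebooks): the model
# accepts `past_values`/`future_values` and returns a loss when targets are
# given, and the pre-trained weights live under a `backbone` attribute.
import torch
from torch.utils.data import Dataset
from transformers import Trainer, TrainingArguments
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

CONTEXT_LENGTH, FORECAST_LENGTH, NUM_CHANNELS = 1024, 96, 3


class ToyWindows(Dataset):
    """Stand-in dataset of random windows; swap in your preprocessed target data."""

    def __len__(self):
        return 256

    def __getitem__(self, idx):
        return {
            "past_values": torch.randn(CONTEXT_LENGTH, NUM_CHANNELS),
            "future_values": torch.randn(FORECAST_LENGTH, NUM_CHANNELS),
        }


model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM")  # placeholder id

# Freeze the pre-trained backbone (attribute name assumed) so only the small
# decoder/head is updated -- this is what keeps fine-tuning a matter of minutes.
for param in model.backbone.parameters():
    param.requires_grad = False

args = TrainingArguments(
    output_dir="ttm_finetuned",
    num_train_epochs=10,
    per_device_train_batch_size=64,
    learning_rate=1e-3,
    report_to="none",
)

trainer = Trainer(model=model, args=args, train_dataset=ToyWindows())
trainer.train()
```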

The current release supports multivariate forecasting via both channel independence and channel-mixing approaches. Decoder Channel-Mixing can be enabled during fine-tuning to capture strong channel-correlation patterns across channels.
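If your channels are strongly correlated, decoder channel-mixing is presumably switched on when loading the model for fine-tuning. The `decoder_mode` keyword and its `"mix_channel"` value below are assumptions about the toolkit's config surface, not confirmed API; verify them against the official fine-tuning notebooks.

```python
# Hedged sketch: turning on decoder channel-mixing when loading TTM for
# fine-tuning. The `decoder_mode` keyword and the "mix_channel" value are
# assumptions about the toolkit's config surface, not confirmed API.
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm/TTM",                   # placeholder checkpoint id
    decoder_mode="mix_channel",  # assumed switch for cross-channel decoding
)
# The backbone remains channel-independent; only the small decoder would learn
# cross-channel interactions during fine-tuning on correlated target data.
```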