vijaye12 committed
Commit c54c504
1 Parent(s): b890706

Update README.md

Files changed (1)
  1. README.md +12 -8
README.md CHANGED
@@ -15,16 +15,21 @@ can be easily fine-tuned for your target data. Refer to our [paper](https://arxi

  ## Benchmark Highlights:

- - TTM outperforms pre-trained *GPT4TS (NeurIPS 23) by 7-12% in few-shot forecasting*.
- - TTM outperforms pre-trained *LLMTime (NeurIPS 23) by 24% in zero-shot forecasting*.
- - TTM outperforms pre-trained *SimMTM (NeurIPS 23) by 17% in few-shot forecasting*.
- - Zero-shot results of TTM often surpass the *few-shot results of many SOTA approaches* including
+ - TTM (with less than 1 million parameters) outperforms the following popular pre-trained SOTA models, which demand several hundred million to billions of parameters:
+   - *GPT4TS (NeurIPS 23) by 12% in few-shot (5%) forecasting*.
+   - *LLMTime (NeurIPS 23) by 24% in zero-shot forecasting*.
+   - *SimMTM (NeurIPS 23) by 17% in few-shot forecasting*.
+   - *Time-LLM (ICLR 24) by 8% in few-shot (5%) forecasting*.
+   - *UniTime (WWW 24) by 27% in zero-shot forecasting*.
+ - Zero-shot results of TTM surpass the *few-shot results of many popular SOTA approaches* including
  PatchTST (ICLR 23), PatchTSMixer (KDD 23), TimesNet (ICLR 23), DLinear (AAAI 23) and FEDFormer (ICML 22).
- - TTM (1024-96, released in this model card) also outperforms *pre-trained MOIRAI* on FL = 96 by ...
+ - TTM (1024-96, released in this model card with 1M parameters) outperforms pre-trained MOIRAI (Small, 14M parameters) by 10%, MOIRAI (Base, 91M parameters) by 4% and
+   MOIRAI (Large, 311M parameters) by 3% on forecast length 96.
  - TTM quick fine-tuning also outperforms the hard statistical baselines (Statistical ensemble and S-Naive) in
  M4-hourly dataset which pretrained TS models are finding hard to outperform.
  - TTM takes only a *few seconds for zeroshot/inference* and a *few minutes for finetuning* in 1 GPU machine, as
- opposed to long timing-requirements and heavy computing infra needs of other pretrained models.
+ opposed to long timing-requirements and heavy computing infra needs of other existing pretrained models.


  ## Model Description
@@ -74,8 +79,7 @@ Stay tuned for these extended features.
  1. Users have to standard scale their data before feeding it to the model (Refer to TSP, our data processing utility for data scaling.)
  2. Enabling any upsampling or prepending zeros to virtually increase the context length is not recommended and will
  impact the model performance.
-
-
+
  ### Model Sources [optional]

  <!-- Provide the basic links for the model. -->
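
Note on point 1 of the usage notes above ("Users have to standard scale their data before feeding it to the model"): the README points to TSP, the project's own data-processing utility, for this step. As a minimal sketch of what that scaling amounts to, the snippet below uses scikit-learn's `StandardScaler` as a stand-in for TSP; the channel count, array shapes, and the placeholder forecast are illustrative assumptions, not the card's actual API.

```python
# Minimal sketch of the "standard scale your data first" note above.
# scikit-learn's StandardScaler stands in for TSP (the data-processing
# utility the README refers to); shapes and channel count are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler

CONTEXT_LEN = 1024   # context length of the TTM 1024-96 checkpoint
FORECAST_LEN = 96    # forecast length of the same checkpoint

# toy multivariate history: (time steps, channels)
history = np.random.randn(CONTEXT_LEN, 3).astype("float32")

# fit the scaler on the history that will be fed to the model
scaler = StandardScaler()
scaled_history = scaler.fit_transform(history)

# ... run TTM zero-shot inference on scaled_history here ...
# the model's output is in the scaled space; a placeholder stands in for it
scaled_forecast = np.zeros((FORECAST_LEN, 3), dtype="float32")

# invert the scaling so the forecast is reported in the original units
forecast = scaler.inverse_transform(scaled_forecast)
print(forecast.shape)  # (96, 3)
```

The key point is that the same scaler fitted on the model's input is reused to invert the model's output; for the actual wiring, follow the TSP utility and the notebooks the model card links to.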