vijaye12 committed
Commit: 1655761 (1 parent: 48a8cef)

Update README.md

Files changed (1): README.md (+3 -3)
@@ -15,11 +15,11 @@ can be easily fine-tuned for your target data. Refer to our [paper](https://arxi
 
 ## Benchmark Highlights:
 
-- TTM (with less than 1 Million parameters) outperforms the following popular Pre-trained SOTAs demanding several hundred Million to Billions of parameters
-  - *GPT4TS (NeurIPS 23) by 12% in few-shot (5%) forecasting.*
+- TTM (with less than 1 Million parameters) outperforms the following popular Pre-trained SOTAs demanding several hundred Million to Billions of parameters:
+  - *GPT4TS (NeurIPS 23) by 7-12% in few-shot forecasting.*
   - *LLMTime (NeurIPS 23) by 24% in zero-shot forecasting*.
   - *SimMTM (NeurIPS 23) by 17% in few-shot forecasting*.
-  - *Time-LLM (ICLR 24) by 8% in few-shot (5%) forecasting*
+  - *Time-LLM (ICLR 24) by 8% in few-shot(5%) forecasting*
   - *UniTime (WWW 24) by 27% in zero-shot forecasting.*
 - Zero-shot results of TTM surpass the *few-shot results of many popular SOTA approaches* including
   PatchTST (ICLR 23), PatchTSMixer (KDD 23), TimesNet (ICLR 23), DLinear (AAAI 23) and FEDFormer (ICML 22).