Update README.md
README.md
CHANGED
@@ -17,8 +17,7 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 
 
 **The current open-source version supports point forecasting use-cases, specifically ranging from minutely to hourly resolutions
-(Ex. 10 min, 15 min, 1 hour.)
-prepending zeros to virtually increase context length to 512 or 1024 is not allowed. Please contact us for these resolutions.**
+(e.g., 10 min, 15 min, 1 hour).**
 
 **Note that zeroshot, fine-tuning and inference tasks using TTM can easily be executed on a single GPU machine, or even on laptops!**
 
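To make the two constraints above concrete (minutely-to-hourly input resolution, and a real 512- or 1024-point history window rather than one faked by zero-padding), here is a minimal Python sketch. It assumes pandas and NumPy; the toy data, channel names, and the 15-minute resolution are illustrative and not part of this repo:

```python
import numpy as np
import pandas as pd

CONTEXT_LENGTH = 512  # TTM consumes a fixed 512- or 1024-point history window

# Toy stand-in for real data: 3 channels sampled every minute.
idx = pd.date_range("2024-01-01", periods=50_000, freq="min")
df = pd.DataFrame(np.random.randn(len(idx), 3), index=idx,
                  columns=["ch0", "ch1", "ch2"])

# Bring the data to a supported resolution (minutely to hourly), e.g. 15 min.
series = df.resample("15min").mean()

# Do NOT prepend zeros or upsample to reach the context length; a series
# that is too short at the chosen resolution is simply unsupported here.
if len(series) < CONTEXT_LENGTH:
    raise ValueError(f"need >= {CONTEXT_LENGTH} points, got {len(series)}")

window = series.iloc[-CONTEXT_LENGTH:]  # the history handed to the model
print(window.shape)  # (512, 3)
```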
@@ -36,6 +35,12 @@ Stay tuned for the release of the model weights for these newer variants.
 - Script for Finetuning with cross-channel correlation support - to be added soon
 
 
+## Recommended Use
+1. Users must externally standard-scale their data, independently for every channel, before feeding it to the model (refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling; a minimal scaling sketch follows this hunk).
+2. The current open-source version supports only minutely and hourly resolutions (e.g., 10 min, 15 min, 1 hour). Lower resolutions (say, weekly or monthly) are currently not supported in this version, as the model needs a minimum context length of 512 or 1024.
+3. Enabling any upsampling, or prepending zeros to virtually increase the context length for shorter-length datasets, is not recommended and will
+impact the model performance.
+
 ## Benchmark Highlights:
 
 - TTM (with less than 1 Million parameters) outperforms the following popular pre-trained SOTAs demanding several hundred million to billions of parameters ([paper](https://arxiv.org/pdf/2401.03955v5.pdf)):
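Item 1 of Recommended Use defers to the repo's TSP utility for scaling. As a rough, toolkit-free stand-in, the sketch below uses scikit-learn's `StandardScaler`, which fits a separate mean/std per column and therefore scales every channel independently; the array shape and train/test split are invented for illustration:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy multivariate series, shape (num_timesteps, num_channels).
data = np.random.randn(10_000, 3)
train, test = data[:8_000], data[8_000:]

# StandardScaler keeps a separate mean/std per column, i.e. per channel,
# matching the "scale independently for every channel" requirement.
scaler = StandardScaler().fit(train)   # statistics from training data only
train_scaled = scaler.transform(train)
test_scaled = scaler.transform(test)   # same statistics reused at test time

# Model outputs live in scaled units; map forecasts back afterwards with
# scaler.inverse_transform(...).
```

Fitting on the training split only, and reusing the same statistics at inference time, keeps test-set information out of the scaling.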
@@ -103,10 +108,7 @@ time-series variates, a critical capability lacking in existing counterparts.
 In addition, TTM also supports exogenous infusion and categorical data, which are not released as part of this version.
 Stay tuned for these extended features.
 
-
-1. Users have to externally standard scale their data independently for every channel before feeding it to the model (Refer to [TSP](https://github.com/IBM/tsfm/blob/main/tsfm_public/toolkit/time_series_preprocessor.py), our data processing utility for data scaling.)
-2. Enabling any upsampling or prepending zeros to virtually increase the context length for shorter-length datasets is not recommended and will
-impact the model performance.
+
 
 
 ### Model Sources
|