Update README.md
README.md
CHANGED
@@ -21,6 +21,16 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 **Note that zeroshot, fine-tuning and inference tasks using TTM can easily be executed on a single GPU machine or even on a laptop!**
 
 
+
+## How to Get Started with the Model
+
+- [colab](https://github.com/IBM/tsfm/blob/main/notebooks/tutorial/ttm_tutorial.ipynb)
+- [Getting Started Notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb)
+- [512-96 Benchmarks](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_512_96.ipynb)
+- [1024-96 Benchmarks](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_1024_96.ipynb)
+- Script for fine-tuning with cross-channel correlation support (to be added soon)
+
+
 ## Benchmark Highlights:
 
 - TTM (with less than 1 million parameters) outperforms the following popular pre-trained SOTA models, which demand several hundred million to billions of parameters [paper](https://arxiv.org/pdf/2401.03955.pdf):
@@ -37,7 +47,8 @@ fine-tuned for multi-variate forecasts with just 5% of the training data to be c
 M4-hourly dataset, which existing pre-trained TS models find difficult to outperform. [[notebook]](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_m4_hourly.ipynb)
 - TTM takes only a *few seconds for zeroshot/inference* and a *few minutes for finetuning* on a single GPU machine, as
 opposed to the long runtimes and heavy computing infrastructure required by other existing pre-trained models.
-
+
+
 
 ## Model Description
 
@@ -138,15 +149,6 @@ fewshot_output = finetune_forecast_trainer.evaluate(dset_test)
 ```
 
 
-
-## How to Get Started with the Model
-
-- [colab](https://github.com/IBM/tsfm/blob/main/notebooks/tutorial/ttm_tutorial.ipynb)
-- [Getting Started Notebook](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/ttm_getting_started.ipynb)
-- [512-96 Benchmarks](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_512_96.ipynb)
-- [1024-96 Benchmarks](https://github.com/IBM/tsfm/blob/main/notebooks/hfdemo/tinytimemixer/ttm_benchmarking_1024_96.ipynb)
-- Script for Finetuning with cross-channel correlation support - to be added soon
-
 ## Training Data
 
 The TTM models were trained on a collection of datasets from the Monash Time Series Forecasting repository. The datasets used include:
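To make the zeroshot claim and the getting-started links above concrete, here is a minimal evaluation sketch in the spirit of the linked notebooks. It assumes the `tsfm_public` package from the IBM/tsfm repo is installed and provides `TinyTimeMixerForPrediction`; the model id, revision, and `dset_test` dataset below are illustrative placeholders, not part of this commit.

```
# Illustrative sketch only: assumes the tsfm_public package from
# https://github.com/IBM/tsfm is installed and exposes TinyTimeMixerForPrediction,
# and that `dset_test` is a prepared forecasting dataset (see the notebooks above).
from transformers import Trainer, TrainingArguments
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# Load a pretrained TTM checkpoint as-is (no fine-tuning); model id and revision are placeholders.
zeroshot_model = TinyTimeMixerForPrediction.from_pretrained("ibm/TTM", revision="main")

# Zeroshot evaluation: run the frozen model over the test split; this fits on one GPU or a laptop CPU.
zeroshot_trainer = Trainer(
    model=zeroshot_model,
    args=TrainingArguments(output_dir="ttm_zeroshot", per_device_eval_batch_size=64),
)
zeroshot_output = zeroshot_trainer.evaluate(dset_test)
print(zeroshot_output)
```

Few-shot fine-tuning follows the same pattern and ends with the `fewshot_output = finetune_forecast_trainer.evaluate(dset_test)` call already shown in the README.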