# videomae-base-ipm_all_videos
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set (a usage sketch follows the list):
- Loss: 0.4713
- Accuracy: 0.8559
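
The checkpoint can be loaded for video classification through the `transformers` API. The snippet below is a minimal sketch: the repo id `your-username/videomae-base-ipm_all_videos` is a placeholder for wherever this checkpoint is hosted, and the random clip stands in for a real 16-frame video.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

# Placeholder repo id -- replace with the actual location of this checkpoint.
checkpoint = "your-username/videomae-base-ipm_all_videos"

processor = VideoMAEImageProcessor.from_pretrained(checkpoint)
model = VideoMAEForVideoClassification.from_pretrained(checkpoint)

# VideoMAE-base expects 16 RGB frames; a random clip is used purely for illustration.
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```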
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3600
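
The sketch below reconstructs the settings above as a `TrainingArguments` object. It is an approximation, not the original training script: `output_dir` and the evaluation schedule are assumptions (the 60-step eval cadence is inferred from the results table below).

```python
from transformers import TrainingArguments

# Hedged reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="videomae-base-ipm_all_videos",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=3600,
    evaluation_strategy="steps",  # assumed from the 60-step eval cadence
    eval_steps=60,                # assumed
)
```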
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
1.7831 | 0.02 | 60 | 1.8965 | 0.1186 |
1.7706 | 1.02 | 120 | 1.9115 | 0.1186 |
1.7497 | 2.02 | 180 | 1.8985 | 0.1356 |
1.5214 | 3.02 | 240 | 1.4807 | 0.3475 |
1.1458 | 4.02 | 300 | 1.7024 | 0.3559 |
1.1587 | 5.02 | 360 | 1.6771 | 0.2966 |
0.9256 | 6.02 | 420 | 1.6428 | 0.3814 |
1.265 | 7.02 | 480 | 1.5169 | 0.5 |
0.8271 | 8.02 | 540 | 1.0310 | 0.5847 |
0.6011 | 9.02 | 600 | 1.1739 | 0.5508 |
0.9542 | 10.02 | 660 | 1.3323 | 0.5424 |
1.1231 | 11.02 | 720 | 1.4279 | 0.4915 |
0.728 | 12.02 | 780 | 2.1913 | 0.4661 |
0.5991 | 13.02 | 840 | 1.1088 | 0.6271 |
1.0613 | 14.02 | 900 | 1.3781 | 0.5 |
0.9121 | 15.02 | 960 | 1.4224 | 0.5424 |
0.6083 | 16.02 | 1020 | 0.8779 | 0.6695 |
0.408 | 17.02 | 1080 | 0.8512 | 0.7119 |
0.3741 | 18.02 | 1140 | 0.8884 | 0.7034 |
0.8906 | 19.02 | 1200 | 1.1396 | 0.6017 |
0.568 | 20.02 | 1260 | 0.7380 | 0.6949 |
0.4135 | 21.02 | 1320 | 0.7966 | 0.6525 |
0.5492 | 22.02 | 1380 | 0.9815 | 0.6780 |
0.902 | 23.02 | 1440 | 0.9267 | 0.6441 |
0.6889 | 24.02 | 1500 | 1.4313 | 0.5763 |
0.788 | 25.02 | 1560 | 1.2156 | 0.5678 |
0.7324 | 26.02 | 1620 | 0.8015 | 0.6780 |
0.6733 | 27.02 | 1680 | 0.8682 | 0.6949 |
0.498 | 28.02 | 1740 | 0.8767 | 0.6949 |
0.5558 | 29.02 | 1800 | 0.9248 | 0.6780 |
0.5583 | 30.02 | 1860 | 1.1784 | 0.6356 |
0.3905 | 31.02 | 1920 | 1.0646 | 0.6864 |
0.3728 | 32.02 | 1980 | 0.8338 | 0.7797 |
0.5988 | 33.02 | 2040 | 0.8339 | 0.7542 |
0.3636 | 34.02 | 2100 | 0.7577 | 0.7627 |
0.505 | 35.02 | 2160 | 1.0310 | 0.6864 |
0.5344 | 36.02 | 2220 | 0.6345 | 0.7458 |
0.2814 | 37.02 | 2280 | 0.9954 | 0.7119 |
0.2187 | 38.02 | 2340 | 0.7515 | 0.7797 |
0.4876 | 39.02 | 2400 | 0.8392 | 0.7627 |
0.1148 | 40.02 | 2460 | 0.6182 | 0.8729 |
0.3139 | 41.02 | 2520 | 1.1651 | 0.6949 |
0.2638 | 42.02 | 2580 | 0.8299 | 0.7797 |
0.1989 | 43.02 | 2640 | 0.5943 | 0.8220 |
0.5473 | 44.02 | 2700 | 0.6514 | 0.8644 |
0.3921 | 45.02 | 2760 | 0.6708 | 0.8220 |
0.1756 | 46.02 | 2820 | 0.5431 | 0.8305 |
0.1089 | 47.02 | 2880 | 0.6040 | 0.8136 |
0.3616 | 48.02 | 2940 | 0.5281 | 0.8475 |
0.2752 | 49.02 | 3000 | 0.6430 | 0.8305 |
0.3847 | 50.02 | 3060 | 0.5640 | 0.8644 |
0.0909 | 51.02 | 3120 | 0.5178 | 0.8559 |
0.3426 | 52.02 | 3180 | 0.3770 | 0.8983 |
0.0516 | 53.02 | 3240 | 0.5365 | 0.8390 |
0.2133 | 54.02 | 3300 | 0.5919 | 0.8475 |
0.1382 | 55.02 | 3360 | 0.5112 | 0.8390 |
0.1803 | 56.02 | 3420 | 0.5173 | 0.8475 |
0.1352 | 57.02 | 3480 | 0.5207 | 0.8390 |
0.4445 | 58.02 | 3540 | 0.4763 | 0.8559 |
0.3249 | 59.02 | 3600 | 0.4713 | 0.8559 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3