ihsansatriawan committed on
Commit cd8e4c1
1 Parent(s): 50db3ac

End of training

Files changed (4):
  1. README.md +39 -14
  2. config.json +1 -1
  3. pytorch_model.bin +2 -2
  4. training_args.bin +2 -2
README.md CHANGED
@@ -1,5 +1,6 @@
  ---
  license: apache-2.0
+ base_model: google/vit-base-patch16-224-in21k
  tags:
  - generated_from_trainer
  datasets:
@@ -21,7 +22,7 @@ model-index:
  metrics:
  - name: Accuracy
  type: accuracy
- value: 0.1375
+ value: 0.16875
  ---
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,8 +32,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.0892
- - Accuracy: 0.1375
+ - Loss: 2.0822
+ - Accuracy: 0.1688
 
  ## Model description
 
@@ -51,29 +52,53 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 5e-05
+ - learning_rate: 0.02
  - train_batch_size: 16
  - eval_batch_size: 16
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 6
+ - num_epochs: 30
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | No log | 1.0 | 40 | 2.0892 | 0.1375 |
- | No log | 2.0 | 80 | 2.0891 | 0.1375 |
- | No log | 3.0 | 120 | 2.0887 | 0.1375 |
- | No log | 4.0 | 160 | 2.0887 | 0.1375 |
- | No log | 5.0 | 200 | 2.0885 | 0.1375 |
- | No log | 6.0 | 240 | 2.0885 | 0.1375 |
+ | No log | 1.0 | 40 | 2.1729 | 0.1187 |
+ | No log | 2.0 | 80 | 2.1526 | 0.0813 |
+ | No log | 3.0 | 120 | 2.1301 | 0.0813 |
+ | No log | 4.0 | 160 | 2.1663 | 0.1313 |
+ | No log | 5.0 | 200 | 2.1524 | 0.0813 |
+ | No log | 6.0 | 240 | 2.0822 | 0.1688 |
+ | No log | 7.0 | 280 | 2.1661 | 0.1187 |
+ | No log | 8.0 | 320 | 2.1294 | 0.1375 |
+ | No log | 9.0 | 360 | 2.0832 | 0.1562 |
+ | No log | 10.0 | 400 | 2.1144 | 0.1187 |
+ | No log | 11.0 | 440 | 2.1037 | 0.1187 |
+ | No log | 12.0 | 480 | 2.1001 | 0.1562 |
+ | 2.1281 | 13.0 | 520 | 2.1115 | 0.0813 |
+ | 2.1281 | 14.0 | 560 | 2.0788 | 0.1187 |
+ | 2.1281 | 15.0 | 600 | 2.1156 | 0.0813 |
+ | 2.1281 | 16.0 | 640 | 2.1254 | 0.0813 |
+ | 2.1281 | 17.0 | 680 | 2.0847 | 0.1688 |
+ | 2.1281 | 18.0 | 720 | 2.0966 | 0.0813 |
+ | 2.1281 | 19.0 | 760 | 2.1371 | 0.0813 |
+ | 2.1281 | 20.0 | 800 | 2.0953 | 0.0813 |
+ | 2.1281 | 21.0 | 840 | 2.0928 | 0.0875 |
+ | 2.1281 | 22.0 | 880 | 2.1005 | 0.0813 |
+ | 2.1281 | 23.0 | 920 | 2.0875 | 0.0875 |
+ | 2.1281 | 24.0 | 960 | 2.0953 | 0.0813 |
+ | 2.0868 | 25.0 | 1000 | 2.0931 | 0.0875 |
+ | 2.0868 | 26.0 | 1040 | 2.0941 | 0.0875 |
+ | 2.0868 | 27.0 | 1080 | 2.0949 | 0.0813 |
+ | 2.0868 | 28.0 | 1120 | 2.0938 | 0.0875 |
+ | 2.0868 | 29.0 | 1160 | 2.0940 | 0.0813 |
+ | 2.0868 | 30.0 | 1200 | 2.0938 | 0.0875 |
 
 
  ### Framework versions
 
- - Transformers 4.29.0
- - Pytorch 2.0.1
- - Datasets 2.14.4
+ - Transformers 4.33.2
+ - Pytorch 2.0.1+cu118
+ - Datasets 2.14.5
  - Tokenizers 0.13.3
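The step counts in the updated results table are internally consistent with the listed batch size. A quick arithmetic sketch (the 640-image train split is inferred from the table, not stated in the card):

```python
# Cross-check the results table against the hyperparameters.
# steps_per_epoch comes from the table (epoch 1.0 ends at step 40);
# the train-split size is inferred from it, not stated in the card.
train_batch_size = 16
steps_per_epoch = 40
num_epochs = 30

inferred_train_size = steps_per_epoch * train_batch_size
total_steps = steps_per_epoch * num_epochs

print(inferred_train_size)  # 640
print(total_steps)          # 1200, matching the final row of the table
```

The `No log` entries in the training-loss column are also consistent with the Trainer's default `logging_steps=500`: the first logged value (2.1281) can only appear once step 500 has passed, i.e. from epoch 13 (step 520) onward.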
config.json CHANGED
@@ -40,5 +40,5 @@
  "problem_type": "single_label_classification",
  "qkv_bias": true,
  "torch_dtype": "float32",
- "transformers_version": "4.29.0"
+ "transformers_version": "4.33.2"
  }
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:d8e293061bce199dcb54c8f375e3b7d537126027b91b697c98d372b1bec6dcd9
- size 343284397
+ oid sha256:b984bfb41b464109ba59bab46ae5f3c7bcb825174ef3dba072463fda18c912d6
+ size 343287149
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dc0a44155f987c803155000196eceb457fda671a0ddc452aca02089464007b66
- size 3963
+ oid sha256:327de8134c460e481b1b119c436145e07d48e878832aac2c2bf06fe2480f0a43
+ size 4091
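The `pytorch_model.bin` and `training_args.bin` entries above are Git LFS pointer files rather than the binaries themselves: three `key value` lines per the LFS pointer spec. A minimal parsing sketch (the `parse_lfs_pointer` helper is illustrative, not part of any tooling shown on this page):

```python
# Parse a Git LFS pointer file of the kind shown in the diffs above.
# Each line is "<key> <value>"; the oid value is "<hash-algo>:<hex-digest>".
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:327de8134c460e481b1b119c436145e07d48e878832aac2c2bf06fe2480f0a43\n"
    "size 4091\n"
)

info = parse_lfs_pointer(pointer)
print(info["oid"].split(":")[0])  # sha256
print(int(info["size"]))          # 4091
```

This is why the diff for each binary touches exactly two lines (`oid` and `size`): only the pointer changes in Git history, while the new blob is uploaded to LFS storage.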