wahidww committed
Commit 7405d19
1 parent: bb525cd

Model save

README.md CHANGED
@@ -4,7 +4,7 @@ base_model: microsoft/swin-tiny-patch4-window7-224
 tags:
 - generated_from_trainer
 datasets:
-- imagefolder
+- image_folder
 metrics:
 - accuracy
 model-index:
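
Note: the dataset id above changes from `imagefolder` to `image_folder`; both refer to loading images from a local folder rather than a published dataset. A minimal loading sketch with the `datasets` library, whose loader is registered as `"imagefolder"` (the `data/` directory layout is an assumption, since the actual data path is not part of this commit):

```python
from datasets import load_dataset

# A minimal sketch, assuming images are sorted into class-named subfolders
# (e.g. data/train/<label>/*.jpg); "data" is a hypothetical path.
dataset = load_dataset("imagefolder", data_dir="data")
print(dataset["train"].features["label"].names)  # labels inferred from folder names
```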
@@ -14,15 +14,15 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: imagefolder
-      type: imagefolder
+      name: image_folder
+      type: image_folder
       config: default
       split: train
       args: default
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.8653366583541147
+      value: 0.9983948635634029
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
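
Note: the reported accuracy jumps from 0.8653 to 0.9984 with the new training run. For reference, a sketch of how such an accuracy metric is typically wired into the Trainer via `compute_metrics`, using the `evaluate` library (this wiring is an assumption; the training script is not included in this commit):

```python
import numpy as np
import evaluate

# Sketch of the Trainer-convention metric function: eval_pred is a
# (logits, labels) pair, and accuracy is the fraction of argmax matches.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```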
@@ -30,10 +30,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2
 
-This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
+This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3944
-- Accuracy: 0.8653
+- Loss: 0.0101
+- Accuracy: 0.9984
 
 ## Model description
 
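Note: a sketch of how the base checkpoint named above would be loaded for fine-tuning; `num_labels` and the replaced classification head are assumptions, since the card does not list the eye-tracking classes:

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification

checkpoint = "microsoft/swin-tiny-patch4-window7-224"
processor = AutoImageProcessor.from_pretrained(checkpoint)
# num_labels and ignore_mismatched_sizes are assumptions: the card does not
# list the classes, and the 1000-class ImageNet head must be replaced.
model = AutoModelForImageClassification.from_pretrained(
    checkpoint,
    num_labels=2,  # hypothetical class count
    ignore_mismatched_sizes=True,
)
```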
@@ -53,40 +53,30 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 32
-- eval_batch_size: 32
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 128
+- total_train_batch_size: 256
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 15
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.3648        | 0.99  | 59   | 0.3998          | 0.8653   |
-| 0.3789        | 2.0   | 119  | 0.4005          | 0.8653   |
-| 0.3572        | 2.99  | 178  | 0.4006          | 0.8653   |
-| 0.3842        | 4.0   | 238  | 0.3905          | 0.8653   |
-| 0.356         | 4.99  | 297  | 0.3894          | 0.8653   |
-| 0.3564        | 6.0   | 357  | 0.3936          | 0.8653   |
-| 0.3668        | 6.99  | 416  | 0.3934          | 0.8653   |
-| 0.3538        | 8.0   | 476  | 0.3882          | 0.8653   |
-| 0.353         | 8.99  | 535  | 0.3870          | 0.8653   |
-| 0.3481        | 10.0  | 595  | 0.3867          | 0.8653   |
-| 0.3315        | 10.99 | 654  | 0.3949          | 0.8653   |
-| 0.3456        | 12.0  | 714  | 0.3919          | 0.8678   |
-| 0.3329        | 12.99 | 773  | 0.3905          | 0.8653   |
-| 0.3409        | 14.0  | 833  | 0.3930          | 0.8653   |
-| 0.313         | 14.87 | 885  | 0.3944          | 0.8653   |
+| 0.5175        | 0.99  | 46   | 0.2164          | 0.9069   |
+| 0.1932        | 1.99  | 92   | 0.0470          | 0.9920   |
+| 0.1321        | 2.98  | 138  | 0.0329          | 0.9920   |
+| 0.0924        | 4.0   | 185  | 0.0158          | 0.9968   |
+| 0.0725        | 4.97  | 230  | 0.0101          | 0.9984   |
 
 
 ### Framework versions
 
-- Transformers 4.35.2
-- Pytorch 2.1.0+cu121
-- Datasets 2.16.1
-- Tokenizers 0.15.0
+- Transformers 4.37.0
+- Pytorch 2.1.2
+- Datasets 2.1.0
+- Tokenizers 0.15.1
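
Note: the listed hyperparameters map directly onto `TrainingArguments`. A sketch under the assumption of a single training device, so that 64 per-device examples × 4 accumulation steps gives the total train batch size of 256; `output_dir` is hypothetical, and evaluation/saving options are omitted:

```python
from transformers import TrainingArguments

# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
# so they are not set explicitly here.
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2",
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,  # 64 * 4 = 256 total train batch size
    num_train_epochs=5,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
)
```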
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9c924a00f241f7468086f8ebeb51ba02b2acb4305ffd2c52223a59ef91000181
+oid sha256:9938044ed402947dc9233e13a3642d399178c7496f4a2d0fc3346c4b842d5e24
 size 110348984
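
Note: both sides of this change are Git LFS pointer files, not the weights themselves; only the sha256 of the ~110 MB `model.safetensors` payload changed. A sketch of verifying a download against the new pointer (the repo id is inferred from the commit author and model name, and is an assumption):

```python
import hashlib

from huggingface_hub import hf_hub_download

# Download the weights and compare their sha256 with the LFS pointer's oid.
path = hf_hub_download(
    repo_id="wahidww/swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2",
    filename="model.safetensors",
)
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
print(digest == "9938044ed402947dc9233e13a3642d399178c7496f4a2d0fc3346c4b842d5e24")
```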
runs/Feb23_07-09-09_b3d631a6a648/events.out.tfevents.1708672149.b3d631a6a648.34.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fb0ef676fd853935bce4f7cf67f29faffd18c146ddb65350182730dbfc397970
-size 7654
+oid sha256:ea60839da7e0b8869709210e9b9357f34cb5cc4cca7a29d5bc660b7355f7006a
+size 8645
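
Note: the updated file is the TensorBoard event log for this training run. A sketch of inspecting it with TensorBoard's event reader (the exact tag names are assumptions):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Load the run directory that contains the event file changed above.
acc = EventAccumulator("runs/Feb23_07-09-09_b3d631a6a648")
acc.Reload()
print(acc.Tags())  # available tags, e.g. scalars such as train/loss, eval/accuracy
```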