wahidww committed
Commit
396f1e4
1 Parent(s): 412da6e

Model save

Files changed (1): README.md +22 -28
README.md CHANGED
@@ -1,10 +1,11 @@
 ---
+library_name: transformers
 license: apache-2.0
 base_model: microsoft/swin-tiny-patch4-window7-224
 tags:
 - generated_from_trainer
 datasets:
-- image_folder
+- imagefolder
 metrics:
 - accuracy
 model-index:
@@ -14,15 +15,15 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: image_folder
-      type: image_folder
+      name: imagefolder
+      type: imagefolder
       config: default
       split: train
       args: default
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.598512173128945
+      value: 0.7490636704119851
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -30,10 +31,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2
 
-This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset.
+This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6732
-- Accuracy: 0.5985
+- Loss: 0.7276
+- Accuracy: 0.7491
 
 ## Model description
 
@@ -53,35 +54,28 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 5e-05
-- train_batch_size: 64
-- eval_batch_size: 64
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 256
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- total_train_batch_size: 128
+- optimizer: adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10
+- num_epochs: 3
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.6726        | 1.0   | 329  | 0.6758          | 0.5985   |
-| 0.6773        | 2.0   | 658  | 0.6738          | 0.5985   |
-| 0.6701        | 3.0   | 987  | 0.6736          | 0.5985   |
-| 0.6734        | 4.0   | 1317 | 0.6735          | 0.5985   |
-| 0.671         | 5.0   | 1646 | 0.6738          | 0.5985   |
-| 0.6725        | 6.0   | 1975 | 0.6740          | 0.5985   |
-| 0.6702        | 7.0   | 2304 | 0.6737          | 0.5985   |
-| 0.6708        | 8.0   | 2634 | 0.6733          | 0.5983   |
-| 0.6732        | 9.0   | 2963 | 0.6735          | 0.5985   |
-| 0.671         | 9.99  | 3290 | 0.6732          | 0.5985   |
+| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
+|:-------------:|:------:|:----:|:---------------:|:--------:|
+| 1.2929        | 0.9811 | 39   | 1.0328          | 0.6404   |
+| 0.9974        | 1.9874 | 79   | 0.7795          | 0.7416   |
+| 0.9114        | 2.9434 | 117  | 0.7276          | 0.7491   |
 
 
 ### Framework versions
 
-- Transformers 4.37.0
-- Pytorch 2.1.2
-- Datasets 2.1.0
-- Tokenizers 0.15.1
+- Transformers 4.46.2
+- Pytorch 2.5.1+cu121
+- Datasets 3.1.0
+- Tokenizers 0.20.3
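
The hyperparameters in the updated card map directly onto a `TrainingArguments` configuration. The sketch below is a hypothetical reconstruction, not code from the commit: the `output_dir`, the per-epoch evaluation/save strategies, and the best-model selection are assumptions inferred from the results table.

```python
# Hypothetical reconstruction of the training configuration implied by the
# hyperparameters in the updated card. output_dir, eval/save strategy, and
# metric_for_best_model are assumptions, not shown in the commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,   # 32 * 4 = total train batch size of 128
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    optim="adamw_torch",             # AdamW with betas=(0.9, 0.999), eps=1e-08
    eval_strategy="epoch",           # assumption: per-epoch eval, matching the results table
    save_strategy="epoch",
    metric_for_best_model="accuracy",
    load_best_model_at_end=True,
)
```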
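
For reference, a minimal inference sketch for the fine-tuned checkpoint; the Hub repo id is an assumption inferred from the commit author and model name and may differ from the actual repository.

```python
# Minimal inference sketch. The repo id is an assumption based on the commit
# author (wahidww) and the model name; adjust it to the actual namespace.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="wahidww/swin-tiny-patch4-window7-224-finetuned-mobile-eye-tracking-dataset-v2",
)

predictions = classifier("example_frame.jpg")  # hypothetical eye-tracking frame
print(predictions)  # [{'label': ..., 'score': ...}, ...]
```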