---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_deit_tiny_sgd_00001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.4056761268781302
---

smids_3x_deit_tiny_sgd_00001_fold1

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0996
  • Accuracy: 0.4057
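
The snippet below is a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as hkivancoral/smids_3x_deit_tiny_sgd_00001_fold1 and that example.jpg is a local image; adjust both to your setup.

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hkivancoral/smids_3x_deit_tiny_sgd_00001_fold1"  # assumed Hub id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```

The predicted label name comes from the id2label mapping stored in the model config.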

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
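
The sketch below mirrors these hyperparameters with the transformers Trainer API; it is not the exact training script. Dataset loading, image preprocessing, the data collator, and the accuracy metric are omitted, and num_labels=3 is an assumption inferred from the converged loss of roughly ln(3) ≈ 1.10.

```python
from transformers import AutoModelForImageClassification, TrainingArguments

model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-tiny-patch16-224",
    num_labels=3,                  # assumption: set to the actual number of classes
    ignore_mismatched_sizes=True,  # replace the 1000-class ImageNet head
)

args = TrainingArguments(
    output_dir="smids_3x_deit_tiny_sgd_00001_fold1",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",
    remove_unused_columns=False,   # keep the image column for a custom collator
)

# Trainer(model=model, args=args, train_dataset=..., eval_dataset=..., ...)
# is then used as in the standard transformers image-classification example.
```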

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.3881        | 1.0   | 226   | 1.3111          | 0.3489   |
| 1.3128        | 2.0   | 452   | 1.2903          | 0.3523   |
| 1.3013        | 3.0   | 678   | 1.2711          | 0.3589   |
| 1.3217        | 4.0   | 904   | 1.2541          | 0.3656   |
| 1.2917        | 5.0   | 1130  | 1.2386          | 0.3639   |
| 1.3196        | 6.0   | 1356  | 1.2247          | 0.3656   |
| 1.2618        | 7.0   | 1582  | 1.2122          | 0.3673   |
| 1.2868        | 8.0   | 1808  | 1.2013          | 0.3689   |
| 1.2007        | 9.0   | 2034  | 1.1914          | 0.3790   |
| 1.1905        | 10.0  | 2260  | 1.1825          | 0.3856   |
| 1.2678        | 11.0  | 2486  | 1.1746          | 0.3823   |
| 1.1575        | 12.0  | 2712  | 1.1675          | 0.3856   |
| 1.1907        | 13.0  | 2938  | 1.1613          | 0.3840   |
| 1.2093        | 14.0  | 3164  | 1.1556          | 0.3840   |
| 1.2019        | 15.0  | 3390  | 1.1505          | 0.3756   |
| 1.1269        | 16.0  | 3616  | 1.1458          | 0.3756   |
| 1.2046        | 17.0  | 3842  | 1.1416          | 0.3790   |
| 1.1582        | 18.0  | 4068  | 1.1378          | 0.3740   |
| 1.1486        | 19.0  | 4294  | 1.1344          | 0.3806   |
| 1.1865        | 20.0  | 4520  | 1.1312          | 0.3773   |
| 1.1413        | 21.0  | 4746  | 1.1283          | 0.3773   |
| 1.1132        | 22.0  | 4972  | 1.1257          | 0.3856   |
| 1.1589        | 23.0  | 5198  | 1.1233          | 0.3873   |
| 1.1721        | 24.0  | 5424  | 1.1210          | 0.3856   |
| 1.1316        | 25.0  | 5650  | 1.1189          | 0.3856   |
| 1.1482        | 26.0  | 5876  | 1.1170          | 0.3957   |
| 1.126         | 27.0  | 6102  | 1.1153          | 0.4007   |
| 1.0926        | 28.0  | 6328  | 1.1137          | 0.4040   |
| 1.1041        | 29.0  | 6554  | 1.1121          | 0.4023   |
| 1.206         | 30.0  | 6780  | 1.1107          | 0.3973   |
| 1.1379        | 31.0  | 7006  | 1.1094          | 0.3940   |
| 1.1454        | 32.0  | 7232  | 1.1082          | 0.3990   |
| 1.1347        | 33.0  | 7458  | 1.1071          | 0.4040   |
| 1.0924        | 34.0  | 7684  | 1.1061          | 0.4057   |
| 1.0887        | 35.0  | 7910  | 1.1052          | 0.4057   |
| 1.1281        | 36.0  | 8136  | 1.1043          | 0.4057   |
| 1.1197        | 37.0  | 8362  | 1.1035          | 0.4057   |
| 1.0883        | 38.0  | 8588  | 1.1028          | 0.4090   |
| 1.1185        | 39.0  | 8814  | 1.1022          | 0.4090   |
| 1.1206        | 40.0  | 9040  | 1.1017          | 0.4090   |
| 1.1449        | 41.0  | 9266  | 1.1012          | 0.4073   |
| 1.0923        | 42.0  | 9492  | 1.1008          | 0.4057   |
| 1.1262        | 43.0  | 9718  | 1.1004          | 0.4073   |
| 1.1447        | 44.0  | 9944  | 1.1002          | 0.4040   |
| 1.11          | 45.0  | 10170 | 1.0999          | 0.4040   |
| 1.1348        | 46.0  | 10396 | 1.0998          | 0.4057   |
| 1.1134        | 47.0  | 10622 | 1.0997          | 0.4057   |
| 1.119         | 48.0  | 10848 | 1.0996          | 0.4057   |
| 1.1325        | 49.0  | 11074 | 1.0996          | 0.4057   |
| 1.1613        | 50.0  | 11300 | 1.0996          | 0.4057   |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.1+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2
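
A quick way to check that a local environment roughly matches these versions (assuming the packages are installed under their usual module names):

```python
import transformers, torch, datasets, tokenizers

for pkg in (transformers, torch, datasets, tokenizers):
    # Print each package name with its installed version.
    print(f"{pkg.__name__}: {pkg.__version__}")
```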