---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_3x_deit_base_sgd_00001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.43333333333333335
---

smids_3x_deit_base_sgd_00001_fold4

This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0826
  • Accuracy: 0.4333
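
For illustration only, here is a minimal inference sketch using the transformers image-classification pipeline. The `<namespace>` placeholder in the repository id and the image file name are assumptions, not details documented in this card:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; replace <namespace> with the repository
# owner, or pass a local directory containing the exported model files.
classifier = pipeline(
    "image-classification",
    model="<namespace>/smids_3x_deit_base_sgd_00001_fold4",
)

# Classify a single image (the file name is a placeholder).
predictions = classifier("example.png")
print(predictions)  # list of {"label": ..., "score": ...} dicts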

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
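
The card does not describe the data, but the metadata declares an imagefolder-type dataset. As a sketch only, this is how a class-per-subdirectory image dataset is typically loaded with the datasets library; the directory path is a placeholder and not the actual data used for this model:

```python
from datasets import load_dataset

# Expects a layout such as data/train/<class_name>/<image>.png;
# "path/to/data" is a placeholder, not the dataset behind this checkpoint.
dataset = load_dataset("imagefolder", data_dir="path/to/data")
print(dataset)               # DatasetDict with the discovered splits
print(dataset["train"][0])   # {"image": <PIL.Image.Image>, "label": <int>}
```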

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
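
As a rough illustration of the settings listed above, a minimal TrainingArguments sketch; this is a reconstruction under assumptions, not the actual training script, and output_dir and evaluation_strategy are placeholders/assumptions:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder and
# evaluation_strategy="epoch" is assumed from the per-epoch validation table.
training_args = TrainingArguments(
    output_dir="smids_3x_deit_base_sgd_00001_fold4",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",
)
# The card above reports Adam with betas=(0.9, 0.999) and epsilon=1e-08,
# which corresponds to the Trainer's default optimizer settings.
```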

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.1201        | 1.0   | 225   | 1.1112          | 0.3333   |
| 1.1056        | 2.0   | 450   | 1.1099          | 0.34     |
| 1.0987        | 3.0   | 675   | 1.1086          | 0.3433   |
| 1.1099        | 4.0   | 900   | 1.1074          | 0.355    |
| 1.0994        | 5.0   | 1125  | 1.1062          | 0.3517   |
| 1.106         | 6.0   | 1350  | 1.1051          | 0.3583   |
| 1.1031        | 7.0   | 1575  | 1.1040          | 0.3633   |
| 1.1065        | 8.0   | 1800  | 1.1029          | 0.37     |
| 1.0902        | 9.0   | 2025  | 1.1018          | 0.3683   |
| 1.0803        | 10.0  | 2250  | 1.1008          | 0.3717   |
| 1.0894        | 11.0  | 2475  | 1.0998          | 0.375    |
| 1.095         | 12.0  | 2700  | 1.0989          | 0.3817   |
| 1.0882        | 13.0  | 2925  | 1.0979          | 0.3867   |
| 1.0908        | 14.0  | 3150  | 1.0971          | 0.39     |
| 1.1022        | 15.0  | 3375  | 1.0962          | 0.3917   |
| 1.0922        | 16.0  | 3600  | 1.0954          | 0.395    |
| 1.0943        | 17.0  | 3825  | 1.0946          | 0.3967   |
| 1.0851        | 18.0  | 4050  | 1.0938          | 0.4017   |
| 1.0874        | 19.0  | 4275  | 1.0931          | 0.405    |
| 1.0966        | 20.0  | 4500  | 1.0924          | 0.4083   |
| 1.0868        | 21.0  | 4725  | 1.0917          | 0.4083   |
| 1.0765        | 22.0  | 4950  | 1.0910          | 0.4083   |
| 1.0918        | 23.0  | 5175  | 1.0904          | 0.41     |
| 1.0777        | 24.0  | 5400  | 1.0898          | 0.4183   |
| 1.0939        | 25.0  | 5625  | 1.0892          | 0.42     |
| 1.0798        | 26.0  | 5850  | 1.0886          | 0.4217   |
| 1.0858        | 27.0  | 6075  | 1.0881          | 0.425    |
| 1.061         | 28.0  | 6300  | 1.0876          | 0.4233   |
| 1.083         | 29.0  | 6525  | 1.0871          | 0.425    |
| 1.0868        | 30.0  | 6750  | 1.0867          | 0.425    |
| 1.0886        | 31.0  | 6975  | 1.0862          | 0.4267   |
| 1.0841        | 32.0  | 7200  | 1.0858          | 0.4267   |
| 1.0853        | 33.0  | 7425  | 1.0855          | 0.4283   |
| 1.0704        | 34.0  | 7650  | 1.0851          | 0.4283   |
| 1.0702        | 35.0  | 7875  | 1.0848          | 0.4267   |
| 1.0848        | 36.0  | 8100  | 1.0845          | 0.4283   |
| 1.0671        | 37.0  | 8325  | 1.0842          | 0.4283   |
| 1.0578        | 38.0  | 8550  | 1.0840          | 0.43     |
| 1.0817        | 39.0  | 8775  | 1.0837          | 0.43     |
| 1.0866        | 40.0  | 9000  | 1.0835          | 0.4317   |
| 1.083         | 41.0  | 9225  | 1.0833          | 0.4333   |
| 1.0747        | 42.0  | 9450  | 1.0832          | 0.4333   |
| 1.0816        | 43.0  | 9675  | 1.0830          | 0.4333   |
| 1.0657        | 44.0  | 9900  | 1.0829          | 0.4333   |
| 1.0619        | 45.0  | 10125 | 1.0828          | 0.4333   |
| 1.067         | 46.0  | 10350 | 1.0827          | 0.4333   |
| 1.0593        | 47.0  | 10575 | 1.0827          | 0.4333   |
| 1.0587        | 48.0  | 10800 | 1.0826          | 0.4333   |
| 1.0675        | 49.0  | 11025 | 1.0826          | 0.4333   |
| 1.0632        | 50.0  | 11250 | 1.0826          | 0.4333   |

Framework versions

  • Transformers 4.32.1
  • Pytorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2