---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_tiny_rms_001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6097560975609756
---

# hushem_1x_deit_tiny_rms_001_fold5

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 1.1358
- Accuracy: 0.6098
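
The card does not include usage code; below is a minimal inference sketch, assuming the checkpoint is published under the hypothetical repository id `hkivancoral/hushem_1x_deit_tiny_rms_001_fold5` (substitute a local path or the actual id) and that `example.jpg` is an image to classify:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Hypothetical repository id / local path of this fine-tuned checkpoint.
model_id = "hkivancoral/hushem_1x_deit_tiny_rms_001_fold5"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Preprocess a single RGB image and run a forward pass.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```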

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
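
While the exact data is not documented here, the metadata lists an `imagefolder` dataset, which the `datasets` library typically loads from a directory with one subfolder per class. A minimal sketch with a hypothetical `data/` directory:

```python
from datasets import load_dataset

# Hypothetical layout: data/<class_name>/<image>.jpg
dataset = load_dataset("imagefolder", data_dir="data")

print(dataset)                    # splits inferred from the folder layout
print(dataset["train"].features)  # an "image" column plus a "label" ClassLabel
```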

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
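
As a rough guide, these settings correspond to the following `transformers.TrainingArguments`; the output directory is a placeholder, and the surrounding `Trainer` setup (model, datasets, metrics) is not shown here:

```python
from transformers import TrainingArguments

# Placeholder output directory; the other values mirror the list above.
training_args = TrainingArguments(
    output_dir="hushem_1x_deit_tiny_rms_001_fold5",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```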

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 4.7231          | 0.2683   |
| 4.2141        | 2.0   | 12   | 1.8531          | 0.2683   |
| 4.2141        | 3.0   | 18   | 1.6449          | 0.2439   |
| 1.9845        | 4.0   | 24   | 1.4265          | 0.2439   |
| 1.5807        | 5.0   | 30   | 2.0165          | 0.2439   |
| 1.5807        | 6.0   | 36   | 1.5975          | 0.2683   |
| 1.5979        | 7.0   | 42   | 1.4305          | 0.3171   |
| 1.5979        | 8.0   | 48   | 1.4587          | 0.2683   |
| 1.4992        | 9.0   | 54   | 1.2917          | 0.3171   |
| 1.4954        | 10.0  | 60   | 1.2462          | 0.4390   |
| 1.4954        | 11.0  | 66   | 1.2479          | 0.2683   |
| 1.415         | 12.0  | 72   | 1.1246          | 0.5122   |
| 1.415         | 13.0  | 78   | 1.1689          | 0.4878   |
| 1.374         | 14.0  | 84   | 1.3767          | 0.2927   |
| 1.3675        | 15.0  | 90   | 1.1692          | 0.4146   |
| 1.3675        | 16.0  | 96   | 1.6528          | 0.2927   |
| 1.319         | 17.0  | 102  | 1.3151          | 0.3659   |
| 1.319         | 18.0  | 108  | 1.1475          | 0.4146   |
| 1.3335        | 19.0  | 114  | 1.1506          | 0.3415   |
| 1.2819        | 20.0  | 120  | 1.2300          | 0.3902   |
| 1.2819        | 21.0  | 126  | 1.1641          | 0.4146   |
| 1.2507        | 22.0  | 132  | 1.4148          | 0.3659   |
| 1.2507        | 23.0  | 138  | 1.3061          | 0.3415   |
| 1.2134        | 24.0  | 144  | 1.2367          | 0.3415   |
| 1.2611        | 25.0  | 150  | 1.2383          | 0.4878   |
| 1.2611        | 26.0  | 156  | 1.0375          | 0.4878   |
| 1.2053        | 27.0  | 162  | 1.1983          | 0.4878   |
| 1.2053        | 28.0  | 168  | 1.1898          | 0.4146   |
| 1.1593        | 29.0  | 174  | 1.1479          | 0.4878   |
| 1.2426        | 30.0  | 180  | 1.1382          | 0.5610   |
| 1.2426        | 31.0  | 186  | 1.0558          | 0.5610   |
| 1.1866        | 32.0  | 192  | 1.1895          | 0.4390   |
| 1.1866        | 33.0  | 198  | 1.2172          | 0.4146   |
| 1.1453        | 34.0  | 204  | 1.3773          | 0.4146   |
| 1.1026        | 35.0  | 210  | 1.1168          | 0.5122   |
| 1.1026        | 36.0  | 216  | 1.1184          | 0.5610   |
| 1.131         | 37.0  | 222  | 1.1344          | 0.5366   |
| 1.131         | 38.0  | 228  | 1.0932          | 0.5122   |
| 1.1098        | 39.0  | 234  | 1.1070          | 0.6098   |
| 1.0797        | 40.0  | 240  | 1.1237          | 0.5854   |
| 1.0797        | 41.0  | 246  | 1.1366          | 0.6098   |
| 1.0648        | 42.0  | 252  | 1.1358          | 0.6098   |
| 1.0648        | 43.0  | 258  | 1.1358          | 0.6098   |
| 1.0281        | 44.0  | 264  | 1.1358          | 0.6098   |
| 1.0542        | 45.0  | 270  | 1.1358          | 0.6098   |
| 1.0542        | 46.0  | 276  | 1.1358          | 0.6098   |
| 1.0409        | 47.0  | 282  | 1.1358          | 0.6098   |
| 1.0409        | 48.0  | 288  | 1.1358          | 0.6098   |
| 1.0504        | 49.0  | 294  | 1.1358          | 0.6098   |
| 1.0111        | 50.0  | 300  | 1.1358          | 0.6098   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1