---
base_model: MBZUAI/swiftformer-xs
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
  - precision
  - recall
model-index:
  - name: swiftformer-xs
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: validation
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.57
          - name: Precision
            type: precision
            value: 0.59945
          - name: Recall
            type: recall
            value: 0.57
---

# swiftformer-xs

This model is a fine-tuned version of [MBZUAI/swiftformer-xs](https://huggingface.co/MBZUAI/swiftformer-xs) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.6833
- Accuracy: 0.57
- Precision: 0.5995
- Recall: 0.57
- F1 Score: 0.5828
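
For quick inference, the checkpoint can be loaded with the `image-classification` pipeline. This is a minimal sketch: the repository id and the image path are assumptions, so substitute the actual checkpoint location and input.

```python
from transformers import pipeline
from PIL import Image

# Repository id is an assumption based on this card; point it at the actual checkpoint.
classifier = pipeline("image-classification", model="HorcruxNo13/swiftformer-xs")

image = Image.open("example.jpg")  # hypothetical input image
print(classifier(image))           # list of {'label': ..., 'score': ...} dicts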

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
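
The underlying data is not documented here. For reference, an `imagefolder` dataset is typically loaded as in the sketch below; the directory path and layout (one sub-folder per class) are assumptions.

```python
from datasets import load_dataset

# "path/to/images" is a placeholder; `imagefolder` expects one sub-folder per class label.
dataset = load_dataset("imagefolder", data_dir="path/to/images")
print(dataset)              # DatasetDict with the detected splits
print(dataset["train"][0])  # {'image': <PIL.Image ...>, 'label': 0}
```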

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
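
A `TrainingArguments` sketch matching the list above, assuming a standard `Trainer` setup: `output_dir`, `evaluation_strategy`, and `remove_unused_columns` are assumptions not recorded in this card, and the Adam betas/epsilon listed are the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="swiftformer-xs",      # assumption; not recorded in this card
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,    # 64 * 4 = 256 total train batch size
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",      # assumption, consistent with the per-epoch rows below
    remove_unused_columns=False,      # assumption; common for image inputs with Trainer
)
```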

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:--------:|
| No log        | 1.0   | 4    | 0.6713          | 0.6292   | 0.6454    | 0.6292 | 0.6365   |
| No log        | 2.0   | 8    | 0.7142          | 0.475    | 0.6155    | 0.475  | 0.5020   |
| No log        | 3.0   | 12   | 0.7298          | 0.425    | 0.6026    | 0.425  | 0.4435   |
| No log        | 4.0   | 16   | 0.7389          | 0.4792   | 0.6408    | 0.4792 | 0.5023   |
| No log        | 5.0   | 20   | 0.7427          | 0.4792   | 0.6408    | 0.4792 | 0.5023   |
| No log        | 6.0   | 24   | 0.7235          | 0.5083   | 0.6424    | 0.5083 | 0.5348   |
| No log        | 7.0   | 28   | 0.6893          | 0.5875   | 0.6687    | 0.5875 | 0.6107   |
| 0.6981        | 8.0   | 32   | 0.6816          | 0.6042   | 0.6847    | 0.6042 | 0.6264   |
| 0.6981        | 9.0   | 36   | 0.6866          | 0.6042   | 0.6888    | 0.6042 | 0.6266   |
| 0.6981        | 10.0  | 40   | 0.7005          | 0.575    | 0.6751    | 0.575  | 0.5996   |
| 0.6981        | 11.0  | 44   | 0.7127          | 0.525    | 0.6554    | 0.525  | 0.5510   |
| 0.6981        | 12.0  | 48   | 0.7098          | 0.5333   | 0.6595    | 0.5333 | 0.5593   |
| 0.6981        | 13.0  | 52   | 0.7126          | 0.5208   | 0.6579    | 0.5208 | 0.5463   |
| 0.6981        | 14.0  | 56   | 0.7114          | 0.5292   | 0.6575    | 0.5292 | 0.5551   |
| 0.6656        | 15.0  | 60   | 0.6908          | 0.5667   | 0.6712    | 0.5667 | 0.5917   |
| 0.6656        | 16.0  | 64   | 0.6804          | 0.5833   | 0.6749    | 0.5833 | 0.6073   |
| 0.6656        | 17.0  | 68   | 0.6806          | 0.5958   | 0.6808    | 0.5958 | 0.6188   |
| 0.6656        | 18.0  | 72   | 0.6884          | 0.5583   | 0.6629    | 0.5583 | 0.5838   |
| 0.6656        | 19.0  | 76   | 0.6821          | 0.5708   | 0.6647    | 0.5708 | 0.5955   |
| 0.6656        | 20.0  | 80   | 0.6663          | 0.6042   | 0.6806    | 0.6042 | 0.6261   |
| 0.6656        | 21.0  | 84   | 0.6717          | 0.6      | 0.6787    | 0.6    | 0.6223   |
| 0.6656        | 22.0  | 88   | 0.6682          | 0.6083   | 0.6826    | 0.6083 | 0.6299   |
| 0.6443        | 23.0  | 92   | 0.6683          | 0.6167   | 0.6946    | 0.6167 | 0.6381   |
| 0.6443        | 24.0  | 96   | 0.6733          | 0.6      | 0.6911    | 0.6    | 0.6230   |
| 0.6443        | 25.0  | 100  | 0.6647          | 0.6083   | 0.6866    | 0.6083 | 0.6302   |
| 0.6443        | 26.0  | 104  | 0.6729          | 0.6083   | 0.6907    | 0.6083 | 0.6305   |
| 0.6443        | 27.0  | 108  | 0.6740          | 0.6042   | 0.6930    | 0.6042 | 0.6268   |
| 0.6443        | 28.0  | 112  | 0.6809          | 0.5917   | 0.6916    | 0.5917 | 0.6153   |
| 0.6443        | 29.0  | 116  | 0.6778          | 0.6042   | 0.7017    | 0.6042 | 0.6270   |
| 0.6313        | 30.0  | 120  | 0.6794          | 0.5958   | 0.6935    | 0.5958 | 0.6192   |
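
The precision/recall/F1 columns track accuracy closely (recall equals accuracy in every row), which is consistent with weighted averaging. A hypothetical `compute_metrics` function along those lines, using scikit-learn, is sketched below; the weighted averaging is an assumption, not something stated in this card.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Hypothetical reconstruction; "weighted" averaging is an assumption,
    # suggested by recall matching accuracy in the table above.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
    }
```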

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3