---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-beans-demo-v5
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: validation
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.46791907514450864
---
# vit-base-beans-demo-v5

This model is a fine-tuned version of [google/vit-large-patch16-224-in21k](https://huggingface.co/google/vit-large-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 2.6708
- Accuracy: 0.4679
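
The snippet below is a minimal inference sketch using the `transformers` pipeline API; the repo id `your-username/vit-base-beans-demo-v5` and the image path are placeholders, not values taken from this card.

```python
from transformers import pipeline

# Placeholder repo id; point this at the actual Hub repo or the local
# output directory containing the fine-tuned checkpoint.
classifier = pipeline("image-classification", model="your-username/vit-base-beans-demo-v5")

# Placeholder path to any input image.
for prediction in classifier("path/to/image.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```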
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
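
Expressed as `TrainingArguments`, the configuration above corresponds roughly to the sketch below. This is an illustrative reconstruction, not the original training script: the output directory and the 100-step evaluation/logging cadence (inferred from the results table) are assumptions, and `fp16=True` stands in for "Native AMP".

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-beans-demo-v5",  # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                     # "Native AMP" mixed-precision training
    evaluation_strategy="steps",   # assumption: matches the per-100-step eval log below
    eval_steps=100,
    logging_steps=100,
)
```

No explicit optimizer argument is needed here: the listed Adam settings (betas=(0.9, 0.999), epsilon=1e-08) match the Trainer's defaults.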
### Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy |
---|---|---|---|---|
5.9636 | 0.06 | 100 | 5.7983 | 0.1 |
5.8053 | 0.11 | 200 | 5.8683 | 0.1110 |
5.9476 | 0.17 | 300 | 5.9242 | 0.1006 |
5.6866 | 0.23 | 400 | 5.6640 | 0.1110 |
5.5886 | 0.29 | 500 | 5.6032 | 0.1153 |
5.4108 | 0.34 | 600 | 5.5314 | 0.1179 |
5.4427 | 0.4 | 700 | 5.4592 | 0.1188 |
5.1333 | 0.46 | 800 | 5.3569 | 0.1272 |
5.2427 | 0.52 | 900 | 5.2451 | 0.1318 |
5.2185 | 0.57 | 1000 | 5.1948 | 0.1355 |
4.777 | 0.63 | 1100 | 5.1379 | 0.1361 |
5.2378 | 0.69 | 1200 | 5.1043 | 0.1347 |
5.2246 | 0.74 | 1300 | 5.0783 | 0.1419 |
4.9846 | 0.8 | 1400 | 5.0425 | 0.1390 |
5.2708 | 0.86 | 1500 | 5.0202 | 0.1387 |
4.9169 | 0.92 | 1600 | 4.9382 | 0.1526 |
4.8091 | 0.97 | 1700 | 4.8691 | 0.1497 |
4.8795 | 1.03 | 1800 | 4.8124 | 0.1546 |
4.6634 | 1.09 | 1900 | 4.7816 | 0.1601 |
4.4967 | 1.15 | 2000 | 4.7105 | 0.1618 |
4.8389 | 1.2 | 2100 | 4.7104 | 0.1671 |
4.5872 | 1.26 | 2200 | 4.6636 | 0.1607 |
4.7063 | 1.32 | 2300 | 4.6506 | 0.1584 |
4.5526 | 1.38 | 2400 | 4.5932 | 0.1743 |
4.4984 | 1.43 | 2500 | 4.5266 | 0.1792 |
4.2266 | 1.49 | 2600 | 4.4860 | 0.1850 |
4.5827 | 1.55 | 2700 | 4.4237 | 0.1844 |
3.9383 | 1.6 | 2800 | 4.3919 | 0.1887 |
4.5361 | 1.66 | 2900 | 4.3408 | 0.1971 |
4.5067 | 1.72 | 3000 | 4.2708 | 0.1965 |
4.3133 | 1.78 | 3100 | 4.2283 | 0.1997 |
4.4104 | 1.83 | 3200 | 4.1830 | 0.2061 |
3.965 | 1.89 | 3300 | 4.1360 | 0.2133 |
4.3425 | 1.95 | 3400 | 4.0754 | 0.2237 |
3.9526 | 2.01 | 3500 | 4.0885 | 0.2188 |
3.9037 | 2.06 | 3600 | 3.9629 | 0.2396 |
3.6883 | 2.12 | 3700 | 4.0130 | 0.2289 |
3.8445 | 2.18 | 3800 | 3.9220 | 0.2540 |
3.6093 | 2.23 | 3900 | 3.9453 | 0.2353 |
3.7109 | 2.29 | 4000 | 3.8822 | 0.2402 |
3.588 | 2.35 | 4100 | 3.7765 | 0.2679 |
3.4878 | 2.41 | 4200 | 3.7138 | 0.2821 |
3.8276 | 2.46 | 4300 | 3.7137 | 0.2694 |
3.7288 | 2.52 | 4400 | 3.6505 | 0.2821 |
3.4948 | 2.58 | 4500 | 3.6280 | 0.2835 |
3.3436 | 2.64 | 4600 | 3.5212 | 0.3145 |
3.3389 | 2.69 | 4700 | 3.5006 | 0.3208 |
3.4803 | 2.75 | 4800 | 3.4130 | 0.3361 |
3.3953 | 2.81 | 4900 | 3.3506 | 0.3370 |
3.3648 | 2.87 | 5000 | 3.3132 | 0.3462 |
3.1838 | 2.92 | 5100 | 3.2632 | 0.3543 |
3.1927 | 2.98 | 5200 | 3.2335 | 0.3613 |
2.8337 | 3.04 | 5300 | 3.1633 | 0.3760 |
2.6126 | 3.09 | 5400 | 3.1287 | 0.3803 |
2.7718 | 3.15 | 5500 | 3.0715 | 0.3876 |
2.7694 | 3.21 | 5600 | 3.0283 | 0.4040 |
2.7131 | 3.27 | 5700 | 2.9859 | 0.4040 |
2.6204 | 3.32 | 5800 | 2.9461 | 0.4078 |
2.4889 | 3.38 | 5900 | 2.9413 | 0.4081 |
2.5283 | 3.44 | 6000 | 2.9001 | 0.4147 |
2.6986 | 3.5 | 6100 | 2.8428 | 0.4335 |
2.8514 | 3.55 | 6200 | 2.8352 | 0.4399 |
2.2355 | 3.61 | 6300 | 2.7825 | 0.4462 |
2.4485 | 3.67 | 6400 | 2.7580 | 0.4535 |
2.3359 | 3.72 | 6500 | 2.7330 | 0.4549 |
2.5904 | 3.78 | 6600 | 2.7096 | 0.4613 |
2.5366 | 3.84 | 6700 | 2.6906 | 0.4642 |
2.3954 | 3.9 | 6800 | 2.6797 | 0.4691 |
2.3722 | 3.95 | 6900 | 2.6708 | 0.4679 |
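
The accuracy column above is the kind of value a `compute_metrics` callback returns to the `Trainer` at each evaluation step. A minimal sketch using the `evaluate` library follows; its use here is an assumption, not something documented by this card.

```python
import numpy as np
import evaluate

accuracy_metric = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy_metric.compute(predictions=predictions, references=labels)
```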
### Framework versions
- Transformers 4.28.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.13.3