# cvt-21-finetuned-brs2

This model is a fine-tuned version of [microsoft/cvt-21](https://huggingface.co/microsoft/cvt-21) on the imagefolder dataset. It achieves the following results on the evaluation set (an inference sketch follows the list):

- Loss: 0.6947
- Accuracy: 0.6604
- F1: 0.6087
- Precision (PPV): 0.5385
- Recall (sensitivity): 0.7
- Specificity: 0.6364
- NPV: 0.7778
- AUC: 0.6682
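
The card does not include a usage snippet. The sketch below shows one plausible way to run inference with this checkpoint under the framework versions listed at the bottom of the card; the repository id and image path are placeholders, and the binary label names come from whatever `id2label` mapping was saved with the fine-tuned config.

```python
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification

# Hypothetical checkpoint location and image path; substitute your own.
checkpoint = "cvt-21-finetuned-brs2"

feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForImageClassification.from_pretrained(checkpoint)
model.eval()

image = Image.open("example.jpg").convert("RGB")
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# id2label comes from the fine-tuned config; prints the predicted class name.
print(model.config.id2label[logits.argmax(-1).item()])
```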

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
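
For reproducibility, these values map onto 🤗 `TrainingArguments` roughly as follows. This is a minimal sketch, not the original training script; the output directory is a placeholder, and the Adam betas/epsilon listed above are already the Trainer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="cvt-21-finetuned-brs2",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,  # total train batch size: 1 x 4 = 4
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```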

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision (PPV) | Recall (sensitivity) | Specificity | NPV | AUC |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.8177 | 1.89 | 100 | 0.7113 | 0.5283 | 0.5098 | 0.4194 | 0.65 | 0.4545 | 0.6818 | 0.5523 |
| 0.736 | 3.77 | 200 | 0.7178 | 0.5283 | 0.3902 | 0.3810 | 0.4 | 0.6061 | 0.625 | 0.5030 |
| 0.5978 | 5.66 | 300 | 0.6889 | 0.6038 | 0.5532 | 0.4815 | 0.65 | 0.5758 | 0.7308 | 0.6129 |
| 0.5576 | 7.55 | 400 | 0.7349 | 0.4717 | 0.5484 | 0.4048 | 0.85 | 0.2424 | 0.7273 | 0.5462 |
| 0.5219 | 9.43 | 500 | 0.6522 | 0.6038 | 0.4 | 0.4667 | 0.35 | 0.7576 | 0.6579 | 0.5538 |
| 0.5326 | 11.32 | 600 | 0.6665 | 0.6226 | 0.5238 | 0.5 | 0.55 | 0.6667 | 0.7097 | 0.6083 |
| 0.4381 | 13.21 | 700 | 0.7685 | 0.4717 | 0.5333 | 0.4 | 0.8 | 0.2727 | 0.6923 | 0.5364 |
| 0.5598 | 15.09 | 800 | 0.7212 | 0.5283 | 0.1935 | 0.2727 | 0.15 | 0.7576 | 0.5952 | 0.4538 |
| 0.6887 | 16.98 | 900 | 0.6985 | 0.6604 | 0.64 | 0.5333 | 0.8 | 0.5758 | 0.8261 | 0.6879 |
| 0.7594 | 18.87 | 1000 | 0.7040 | 0.5472 | 0.4286 | 0.4091 | 0.45 | 0.6061 | 0.6452 | 0.5280 |
| 0.2177 | 20.75 | 1100 | 0.8056 | 0.4528 | 0.5397 | 0.3953 | 0.85 | 0.2121 | 0.7 | 0.5311 |
| 0.4893 | 22.64 | 1200 | 0.8821 | 0.3396 | 0.3860 | 0.2973 | 0.55 | 0.2121 | 0.4375 | 0.3811 |
| 0.5994 | 24.53 | 1300 | 0.8059 | 0.5660 | 0.5660 | 0.4545 | 0.75 | 0.4545 | 0.75 | 0.6023 |
| 0.5179 | 26.42 | 1400 | 0.6750 | 0.6038 | 0.4615 | 0.4737 | 0.45 | 0.6970 | 0.6765 | 0.5735 |
| 0.198 | 28.3 | 1500 | 0.7448 | 0.3962 | 0.3333 | 0.2857 | 0.4 | 0.3939 | 0.52 | 0.3970 |
| 0.6536 | 30.19 | 1600 | 0.7555 | 0.5094 | 0.4583 | 0.3929 | 0.55 | 0.4848 | 0.64 | 0.5174 |
| 0.7558 | 32.08 | 1700 | 0.6664 | 0.5849 | 0.4762 | 0.4545 | 0.5 | 0.6364 | 0.6774 | 0.5682 |
| 0.4915 | 33.96 | 1800 | 0.9213 | 0.3962 | 0.5152 | 0.3696 | 0.85 | 0.1212 | 0.5714 | 0.4856 |
| 0.3661 | 35.85 | 1900 | 0.9202 | 0.4528 | 0.4912 | 0.3784 | 0.7 | 0.3030 | 0.625 | 0.5015 |
| 0.4838 | 37.74 | 2000 | 0.9297 | 0.4528 | 0.5085 | 0.3846 | 0.75 | 0.2727 | 0.6429 | 0.5114 |
| 0.8461 | 39.62 | 2100 | 0.9464 | 0.4717 | 0.5758 | 0.4130 | 0.95 | 0.1818 | 0.8571 | 0.5659 |
| 0.6937 | 41.51 | 2200 | 0.7129 | 0.5094 | 0.48 | 0.4 | 0.6 | 0.4545 | 0.6522 | 0.5273 |
| 0.6302 | 43.4 | 2300 | 0.6866 | 0.5849 | 0.6071 | 0.4722 | 0.85 | 0.4242 | 0.8235 | 0.6371 |
| 0.0793 | 45.28 | 2400 | 0.7791 | 0.5094 | 0.5517 | 0.4211 | 0.8 | 0.3333 | 0.7333 | 0.5667 |
| 0.464 | 47.17 | 2500 | 0.8116 | 0.4340 | 0.4444 | 0.3529 | 0.6 | 0.3333 | 0.5789 | 0.4667 |
| 0.6131 | 49.06 | 2600 | 0.5970 | 0.6226 | 0.5455 | 0.5 | 0.6 | 0.6364 | 0.7241 | 0.6182 |
| 0.6937 | 50.94 | 2700 | 0.8201 | 0.4340 | 0.4 | 0.3333 | 0.5 | 0.3939 | 0.5652 | 0.4470 |
| 0.6552 | 52.83 | 2800 | 0.7168 | 0.5660 | 0.5306 | 0.4483 | 0.65 | 0.5152 | 0.7083 | 0.5826 |
| 0.7749 | 54.72 | 2900 | 0.6875 | 0.5849 | 0.5217 | 0.4615 | 0.6 | 0.5758 | 0.7037 | 0.5879 |
| 0.9482 | 56.6 | 3000 | 0.6392 | 0.6226 | 0.6296 | 0.5 | 0.85 | 0.4848 | 0.8421 | 0.6674 |
| 0.2467 | 58.49 | 3100 | 0.6281 | 0.6038 | 0.5333 | 0.48 | 0.6 | 0.6061 | 0.7143 | 0.6030 |
| 0.2903 | 60.38 | 3200 | 0.7383 | 0.5472 | 0.5556 | 0.4412 | 0.75 | 0.4242 | 0.7368 | 0.5871 |
| 0.5859 | 62.26 | 3300 | 0.7191 | 0.6226 | 0.5652 | 0.5 | 0.65 | 0.6061 | 0.7407 | 0.6280 |
| 0.3815 | 64.15 | 3400 | 0.7469 | 0.5283 | 0.4444 | 0.4 | 0.5 | 0.5455 | 0.6429 | 0.5227 |
| 0.531 | 66.04 | 3500 | 0.7566 | 0.6226 | 0.5652 | 0.5 | 0.65 | 0.6061 | 0.7407 | 0.6280 |
| 0.3892 | 67.92 | 3600 | 0.8168 | 0.5660 | 0.5490 | 0.4516 | 0.7 | 0.4848 | 0.7273 | 0.5924 |
| 0.6487 | 69.81 | 3700 | 0.9077 | 0.4340 | 0.4643 | 0.3611 | 0.65 | 0.3030 | 0.5882 | 0.4765 |
| 0.5525 | 71.7 | 3800 | 0.6961 | 0.6038 | 0.5116 | 0.4783 | 0.55 | 0.6364 | 0.7 | 0.5932 |
| 0.3137 | 73.58 | 3900 | 1.0817 | 0.3774 | 0.4590 | 0.3415 | 0.7 | 0.1818 | 0.5 | 0.4409 |
| 0.3526 | 75.47 | 4000 | 0.7684 | 0.5472 | 0.5862 | 0.4474 | 0.85 | 0.3636 | 0.8 | 0.6068 |
| 0.5938 | 77.36 | 4100 | 0.8786 | 0.4340 | 0.4828 | 0.3684 | 0.7 | 0.2727 | 0.6 | 0.4864 |
| 0.2431 | 79.25 | 4200 | 0.8925 | 0.4151 | 0.4746 | 0.3590 | 0.7 | 0.2424 | 0.5714 | 0.4712 |
| 0.1021 | 81.13 | 4300 | 1.0740 | 0.4528 | 0.4727 | 0.3714 | 0.65 | 0.3333 | 0.6111 | 0.4917 |
| 0.3429 | 83.02 | 4400 | 0.7723 | 0.4906 | 0.5091 | 0.4 | 0.7 | 0.3636 | 0.6667 | 0.5318 |
| 0.3836 | 84.91 | 4500 | 0.7247 | 0.5472 | 0.5556 | 0.4412 | 0.75 | 0.4242 | 0.7368 | 0.5871 |
| 0.4099 | 86.79 | 4600 | 0.8508 | 0.4340 | 0.4828 | 0.3684 | 0.7 | 0.2727 | 0.6 | 0.4864 |
| 0.8264 | 88.68 | 4700 | 0.7682 | 0.5849 | 0.5769 | 0.4688 | 0.75 | 0.4848 | 0.7619 | 0.6174 |
| 0.1928 | 90.57 | 4800 | 0.8738 | 0.4906 | 0.5574 | 0.4146 | 0.85 | 0.2727 | 0.75 | 0.5614 |
| 0.3422 | 92.45 | 4900 | 0.8810 | 0.5660 | 0.5965 | 0.4595 | 0.85 | 0.3939 | 0.8125 | 0.6220 |
| 0.5524 | 94.34 | 5000 | 1.0801 | 0.3774 | 0.4923 | 0.3556 | 0.8 | 0.1212 | 0.5 | 0.4606 |
| 0.464 | 96.23 | 5100 | 0.9417 | 0.5283 | 0.5902 | 0.4390 | 0.9 | 0.3030 | 0.8333 | 0.6015 |
| 0.7182 | 98.11 | 5200 | 1.0335 | 0.4151 | 0.4746 | 0.3590 | 0.7 | 0.2424 | 0.5714 | 0.4712 |
| 0.604 | 100.0 | 5300 | 0.6947 | 0.6604 | 0.6087 | 0.5385 | 0.7 | 0.6364 | 0.7778 | 0.6682 |
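
The metric code itself is not included in the card. The sketch below shows one way such a metric set could be computed, assuming scikit-learn, binary labels with class 1 as positive, and the 🤗 Trainer `compute_metrics` convention; the original implementation may differ.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

def compute_metrics(eval_pred):
    """Sketch of binary-classification metrics matching the columns above."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)

    # Confusion-matrix counts, treating label 1 as the positive class.
    tn, fp, fn, tp = confusion_matrix(labels, preds, labels=[0, 1]).ravel()

    precision = tp / (tp + fp)  # PPV
    recall = tp / (tp + fn)     # sensitivity
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)

    # Numerically stable softmax; AUC uses the positive-class probability.
    shifted = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = shifted / shifted.sum(axis=-1, keepdims=True)

    return {
        "accuracy": float((preds == labels).mean()),
        "f1": 2 * precision * recall / (precision + recall),
        "precision (ppv)": precision,
        "recall (sensitivity)": recall,
        "specificity": specificity,
        "npv": npv,
        "auc": roc_auc_score(labels, probs[:, 1]),
    }
```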

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1