# microsoft-swinv2-base-patch4-window16-256-batch32-lr5e-05-standford-dogs
This model is a fine-tuned version of microsoft/swinv2-base-patch4-window16-256 on the stanford-dogs dataset.

It achieves the following results on the evaluation set:
- Loss: 0.1856
- Accuracy: 0.9468
- F1: 0.9450
- Precision: 0.9480
- Recall: 0.9453
## Model description
More information needed
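Since the card does not yet document usage, here is a minimal inference sketch. The repository id comes from this card's model tree; the local file name `dog.jpg` is a placeholder assumption.

```python
# Minimal inference sketch for this checkpoint.
# Assumption: "dog.jpg" is a placeholder path, not a file shipped with the model.
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

REPO_ID = "amaye15/microsoft-swinv2-base-patch4-window16-256-batch32-lr5e-05-standford-dogs"

def predict(image_path: str) -> str:
    """Return the predicted breed label for a single image."""
    processor = AutoImageProcessor.from_pretrained(REPO_ID)
    model = AutoModelForImageClassification.from_pretrained(REPO_ID)
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[logits.argmax(-1).item()]

# predict("dog.jpg")  # downloads the checkpoint on first call
```

The first call to `predict` fetches the processor and weights from the Hub, so it needs network access; subsequent calls reuse the local cache.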
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
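For reference, the hyperparameters above can be collected into a plain mapping whose keys follow `transformers.TrainingArguments` naming (the mapping to those field names is an assumption; the exact training script is not part of this card):

```python
# Sketch: the hyperparameters above, keyed by transformers.TrainingArguments
# field names (an assumption -- the original training script is not shown).
training_config = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 32,
    "per_device_eval_batch_size": 32,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "linear",
    "max_steps": 1000,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
}

# The "total_train_batch_size: 128" reported above is derived, not set directly:
effective_batch_size = (training_config["per_device_train_batch_size"]
                        * training_config["gradient_accumulation_steps"])
print(effective_batch_size)  # 128
```

This is why the card lists both a train_batch_size of 32 and a total_train_batch_size of 128: gradient accumulation over 4 steps multiplies the per-device batch size.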
## Training results
Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
---|---|---|---|---|---|---|---|
4.7437 | 0.0777 | 10 | 4.6395 | 0.0862 | 0.0519 | 0.0499 | 0.0821 |
4.5551 | 0.1553 | 20 | 4.3696 | 0.1713 | 0.1162 | 0.1608 | 0.1573 |
4.2151 | 0.2330 | 30 | 3.8252 | 0.3188 | 0.2681 | 0.4133 | 0.3021 |
3.5619 | 0.3107 | 40 | 2.8929 | 0.6368 | 0.5785 | 0.6552 | 0.6211 |
2.6253 | 0.3883 | 50 | 1.8693 | 0.7850 | 0.7538 | 0.7906 | 0.7733 |
1.8818 | 0.4660 | 60 | 1.1203 | 0.8542 | 0.8406 | 0.8667 | 0.8468 |
1.3652 | 0.5437 | 70 | 0.7330 | 0.8880 | 0.8780 | 0.9039 | 0.8850 |
1.0456 | 0.6214 | 80 | 0.5269 | 0.9084 | 0.9015 | 0.9101 | 0.9050 |
0.9039 | 0.6990 | 90 | 0.4139 | 0.9181 | 0.9093 | 0.9213 | 0.9150 |
0.7965 | 0.7767 | 100 | 0.3441 | 0.9249 | 0.9181 | 0.9315 | 0.9221 |
0.7053 | 0.8544 | 110 | 0.3184 | 0.9225 | 0.9163 | 0.9320 | 0.9208 |
0.6907 | 0.9320 | 120 | 0.2870 | 0.9283 | 0.9261 | 0.9324 | 0.9270 |
0.6293 | 1.0097 | 130 | 0.2760 | 0.9276 | 0.9245 | 0.9329 | 0.9260 |
0.5564 | 1.0874 | 140 | 0.2517 | 0.9339 | 0.9308 | 0.9362 | 0.9320 |
0.5902 | 1.1650 | 150 | 0.2500 | 0.9351 | 0.9308 | 0.9371 | 0.9328 |
0.5269 | 1.2427 | 160 | 0.2429 | 0.9334 | 0.9307 | 0.9370 | 0.9317 |
0.5148 | 1.3204 | 170 | 0.2358 | 0.9393 | 0.9368 | 0.9407 | 0.9373 |
0.4998 | 1.3981 | 180 | 0.2451 | 0.9310 | 0.9270 | 0.9357 | 0.9283 |
0.4797 | 1.4757 | 190 | 0.2425 | 0.9325 | 0.9287 | 0.9377 | 0.9315 |
0.4933 | 1.5534 | 200 | 0.2360 | 0.9281 | 0.9257 | 0.9333 | 0.9266 |
0.4414 | 1.6311 | 210 | 0.2201 | 0.9371 | 0.9343 | 0.9398 | 0.9351 |
0.4401 | 1.7087 | 220 | 0.2248 | 0.9346 | 0.9327 | 0.9375 | 0.9337 |
0.4023 | 1.7864 | 230 | 0.2199 | 0.9344 | 0.9282 | 0.9381 | 0.9317 |
0.4723 | 1.8641 | 240 | 0.2071 | 0.9419 | 0.9389 | 0.9437 | 0.9401 |
0.4593 | 1.9417 | 250 | 0.2123 | 0.9402 | 0.9371 | 0.9421 | 0.9382 |
0.4544 | 2.0194 | 260 | 0.2191 | 0.9385 | 0.9347 | 0.9396 | 0.9371 |
0.3871 | 2.0971 | 270 | 0.2158 | 0.9395 | 0.9372 | 0.9401 | 0.9378 |
0.4162 | 2.1748 | 280 | 0.2073 | 0.9385 | 0.9353 | 0.9396 | 0.9364 |
0.3774 | 2.2524 | 290 | 0.1981 | 0.9397 | 0.9387 | 0.9422 | 0.9387 |
0.3895 | 2.3301 | 300 | 0.2008 | 0.9400 | 0.9361 | 0.9395 | 0.9376 |
0.3804 | 2.4078 | 310 | 0.2018 | 0.9431 | 0.9396 | 0.9443 | 0.9412 |
0.3783 | 2.4854 | 320 | 0.2038 | 0.9422 | 0.9384 | 0.9439 | 0.9403 |
0.4376 | 2.5631 | 330 | 0.1968 | 0.9419 | 0.9404 | 0.9459 | 0.9414 |
0.3696 | 2.6408 | 340 | 0.2011 | 0.9441 | 0.9422 | 0.9464 | 0.9430 |
0.3954 | 2.7184 | 350 | 0.1997 | 0.9417 | 0.9379 | 0.9430 | 0.9399 |
0.3651 | 2.7961 | 360 | 0.1952 | 0.9434 | 0.9392 | 0.9407 | 0.9415 |
0.3646 | 2.8738 | 370 | 0.2045 | 0.9429 | 0.9391 | 0.9459 | 0.9413 |
0.3532 | 2.9515 | 380 | 0.1991 | 0.9427 | 0.9394 | 0.9455 | 0.9413 |
0.342 | 3.0291 | 390 | 0.1958 | 0.9410 | 0.9399 | 0.9441 | 0.9404 |
0.3706 | 3.1068 | 400 | 0.2010 | 0.9419 | 0.9401 | 0.9442 | 0.9406 |
0.3031 | 3.1845 | 410 | 0.2013 | 0.9424 | 0.9407 | 0.9449 | 0.9410 |
0.3345 | 3.2621 | 420 | 0.2022 | 0.9414 | 0.9399 | 0.9438 | 0.9406 |
0.3356 | 3.3398 | 430 | 0.1927 | 0.9470 | 0.9451 | 0.9500 | 0.9453 |
0.3538 | 3.4175 | 440 | 0.1927 | 0.9446 | 0.9422 | 0.9472 | 0.9430 |
0.3505 | 3.4951 | 450 | 0.1909 | 0.9480 | 0.9461 | 0.9498 | 0.9466 |
0.3398 | 3.5728 | 460 | 0.1917 | 0.9453 | 0.9419 | 0.9475 | 0.9436 |
0.3303 | 3.6505 | 470 | 0.1895 | 0.9483 | 0.9453 | 0.9506 | 0.9464 |
0.3685 | 3.7282 | 480 | 0.1883 | 0.9458 | 0.9442 | 0.9468 | 0.9445 |
0.3125 | 3.8058 | 490 | 0.1926 | 0.9441 | 0.9422 | 0.9462 | 0.9426 |
0.3857 | 3.8835 | 500 | 0.1911 | 0.9446 | 0.9426 | 0.9473 | 0.9430 |
0.3407 | 3.9612 | 510 | 0.1825 | 0.9470 | 0.9454 | 0.9486 | 0.9459 |
0.3545 | 4.0388 | 520 | 0.1919 | 0.9444 | 0.9428 | 0.9448 | 0.9432 |
0.306 | 4.1165 | 530 | 0.1901 | 0.9466 | 0.9437 | 0.9471 | 0.9450 |
0.2511 | 4.1942 | 540 | 0.2026 | 0.9431 | 0.9388 | 0.9448 | 0.9410 |
0.3233 | 4.2718 | 550 | 0.1950 | 0.9453 | 0.9433 | 0.9470 | 0.9438 |
0.2793 | 4.3495 | 560 | 0.1973 | 0.9453 | 0.9437 | 0.9466 | 0.9444 |
0.3035 | 4.4272 | 570 | 0.1944 | 0.9470 | 0.9454 | 0.9491 | 0.9459 |
0.2776 | 4.5049 | 580 | 0.2030 | 0.9412 | 0.9393 | 0.9445 | 0.9398 |
0.3204 | 4.5825 | 590 | 0.1959 | 0.9441 | 0.9417 | 0.9468 | 0.9428 |
0.2868 | 4.6602 | 600 | 0.1959 | 0.9429 | 0.9413 | 0.9437 | 0.9414 |
0.3325 | 4.7379 | 610 | 0.1991 | 0.9414 | 0.9389 | 0.9435 | 0.9401 |
0.3255 | 4.8155 | 620 | 0.1894 | 0.9441 | 0.9425 | 0.9448 | 0.9431 |
0.2744 | 4.8932 | 630 | 0.1915 | 0.9434 | 0.9411 | 0.9434 | 0.9421 |
0.2945 | 4.9709 | 640 | 0.1932 | 0.9453 | 0.9415 | 0.9468 | 0.9436 |
0.253 | 5.0485 | 650 | 0.1928 | 0.9448 | 0.9423 | 0.9465 | 0.9435 |
0.2614 | 5.1262 | 660 | 0.1942 | 0.9451 | 0.9441 | 0.9478 | 0.9444 |
0.2699 | 5.2039 | 670 | 0.1924 | 0.9468 | 0.9433 | 0.9479 | 0.9451 |
0.2839 | 5.2816 | 680 | 0.1894 | 0.9461 | 0.9442 | 0.9475 | 0.9447 |
0.2353 | 5.3592 | 690 | 0.1947 | 0.9427 | 0.9407 | 0.9435 | 0.9410 |
0.2627 | 5.4369 | 700 | 0.1964 | 0.9419 | 0.9405 | 0.9440 | 0.9409 |
0.2592 | 5.5146 | 710 | 0.1893 | 0.9456 | 0.9440 | 0.9468 | 0.9441 |
0.2634 | 5.5922 | 720 | 0.1918 | 0.9458 | 0.9431 | 0.9473 | 0.9443 |
0.294 | 5.6699 | 730 | 0.1922 | 0.9446 | 0.9417 | 0.9457 | 0.9428 |
0.2565 | 5.7476 | 740 | 0.1907 | 0.9456 | 0.9432 | 0.9469 | 0.9439 |
0.2657 | 5.8252 | 750 | 0.1902 | 0.9453 | 0.9415 | 0.9464 | 0.9434 |
0.2945 | 5.9029 | 760 | 0.1872 | 0.9453 | 0.9427 | 0.9457 | 0.9439 |
0.2758 | 5.9806 | 770 | 0.1855 | 0.9444 | 0.9432 | 0.9460 | 0.9430 |
0.226 | 6.0583 | 780 | 0.1867 | 0.9470 | 0.9456 | 0.9488 | 0.9457 |
0.2105 | 6.1359 | 790 | 0.1866 | 0.9470 | 0.9446 | 0.9482 | 0.9451 |
0.2524 | 6.2136 | 800 | 0.1891 | 0.9456 | 0.9441 | 0.9470 | 0.9441 |
0.2987 | 6.2913 | 810 | 0.1879 | 0.9463 | 0.9442 | 0.9472 | 0.9447 |
0.2393 | 6.3689 | 820 | 0.1876 | 0.9456 | 0.9439 | 0.9467 | 0.9442 |
0.2779 | 6.4466 | 830 | 0.1870 | 0.9473 | 0.9460 | 0.9486 | 0.9463 |
0.3117 | 6.5243 | 840 | 0.1866 | 0.9470 | 0.9450 | 0.9483 | 0.9455 |
0.2574 | 6.6019 | 850 | 0.1853 | 0.9468 | 0.9449 | 0.9481 | 0.9454 |
0.2307 | 6.6796 | 860 | 0.1886 | 0.9463 | 0.9441 | 0.9475 | 0.9447 |
0.2771 | 6.7573 | 870 | 0.1878 | 0.9456 | 0.9437 | 0.9464 | 0.9440 |
0.2575 | 6.8350 | 880 | 0.1868 | 0.9458 | 0.9440 | 0.9465 | 0.9443 |
0.2422 | 6.9126 | 890 | 0.1857 | 0.9463 | 0.9447 | 0.9466 | 0.9448 |
0.2564 | 6.9903 | 900 | 0.1861 | 0.9451 | 0.9434 | 0.9458 | 0.9437 |
0.222 | 7.0680 | 910 | 0.1866 | 0.9461 | 0.9442 | 0.9471 | 0.9445 |
0.2467 | 7.1456 | 920 | 0.1862 | 0.9456 | 0.9438 | 0.9464 | 0.9441 |
0.2412 | 7.2233 | 930 | 0.1860 | 0.9463 | 0.9449 | 0.9474 | 0.9451 |
0.2518 | 7.3010 | 940 | 0.1857 | 0.9458 | 0.9442 | 0.9466 | 0.9445 |
0.2811 | 7.3786 | 950 | 0.1857 | 0.9463 | 0.9446 | 0.9472 | 0.9448 |
0.2255 | 7.4563 | 960 | 0.1856 | 0.9468 | 0.9451 | 0.9477 | 0.9453 |
0.2425 | 7.5340 | 970 | 0.1857 | 0.9466 | 0.9449 | 0.9478 | 0.9451 |
0.2352 | 7.6117 | 980 | 0.1856 | 0.9468 | 0.9450 | 0.9480 | 0.9453 |
0.2328 | 7.6893 | 990 | 0.1855 | 0.9468 | 0.9450 | 0.9480 | 0.9453 |
0.2353 | 7.7670 | 1000 | 0.1856 | 0.9468 | 0.9450 | 0.9480 | 0.9453 |
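The accuracy, F1, precision, and recall columns above are the kind of values produced by a `compute_metrics` callback passed to the `Trainer`. A plausible sketch follows; the choice of macro averaging and the `sklearn` backend are assumptions, since the card does not state how the metrics were computed:

```python
# Hypothetical compute_metrics hook (macro averaging is an assumption).
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Map (logits, labels) to the four metrics reported in the table."""
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="macro", zero_division=0)
    return {"accuracy": accuracy_score(labels, preds),
            "f1": f1, "precision": precision, "recall": recall}

# Tiny demo: 3 classes, 4 samples, one misclassification.
logits = np.array([[2.0, 0.1, 0.1],
                   [0.1, 2.0, 0.1],
                   [0.1, 0.1, 2.0],
                   [2.0, 0.1, 0.1]])
labels = np.array([0, 1, 2, 1])
m = compute_metrics((logits, labels))
print(round(m["accuracy"], 2))  # 0.75
```

Note that with macro averaging, accuracy and recall can differ even on balanced-looking data, which is consistent with the small gaps between the columns in the table.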
## Framework versions
- Transformers 4.40.2
- PyTorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.19.1