amiqinayat committed f25c87a (parent: ed1392a): Model save

README.md CHANGED
```diff
@@ -22,7 +22,7 @@ model-index:
     metrics:
     - name: Accuracy
       type: accuracy
-      value: 0.
+      value: 0.8084291187739464
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +32,23 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.
-- 
+- Loss: 0.5493
+- Crack: {'precision': 0.5735294117647058, 'recall': 0.6964285714285714, 'f1-score': 0.6290322580645161, 'support': 56}
+- Environment - ground: {'precision': 0.9722222222222222, 'recall': 0.9722222222222222, 'f1-score': 0.9722222222222222, 'support': 36}
+- Environment - other: {'precision': 0.8913043478260869, 'recall': 0.9318181818181818, 'f1-score': 0.9111111111111111, 'support': 44}
+- Environment - sky: {'precision': 0.8974358974358975, 'recall': 0.9722222222222222, 'f1-score': 0.9333333333333333, 'support': 36}
+- Environment - vegetation: {'precision': 0.9622641509433962, 'recall': 0.9622641509433962, 'f1-score': 0.9622641509433962, 'support': 53}
+- Joint defect: {'precision': 0.6470588235294118, 'recall': 0.7857142857142857, 'f1-score': 0.7096774193548386, 'support': 28}
+- Loss of section: {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3}
+- Spalling: {'precision': 0.5161290322580645, 'recall': 0.41025641025641024, 'f1-score': 0.4571428571428572, 'support': 39}
+- Vegetation: {'precision': 0.8387096774193549, 'recall': 0.8813559322033898, 'f1-score': 0.859504132231405, 'support': 59}
+- Wall - grafitti: {'precision': 0.96, 'recall': 0.96, 'f1-score': 0.96, 'support': 25}
+- Wall - normal: {'precision': 0.7575757575757576, 'recall': 0.625, 'f1-score': 0.6849315068493151, 'support': 40}
+- Wall - other: {'precision': 0.8888888888888888, 'recall': 0.8333333333333334, 'f1-score': 0.8602150537634408, 'support': 48}
+- Wall - stain: {'precision': 0.84, 'recall': 0.7636363636363637, 'f1-score': 0.8000000000000002, 'support': 55}
+- Accuracy: 0.8084
+- Macro avg: {'precision': 0.7496244776818297, 'recall': 0.753403974906029, 'f1-score': 0.7491872342320336, 'support': 522}
+- Weighted avg: {'precision': 0.805637888745576, 'recall': 0.8084291187739464, 'f1-score': 0.8046217646882747, 'support': 522}
 
 
 ## Model description
 
@@ -65,16 +80,16 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 0.
-| 0.
-| 0.
+| Training Loss | Epoch | Step | Validation Loss | Crack | Environment - ground | Environment - other | Environment - sky | Environment - vegetation | Joint defect | Loss of section | Spalling | Vegetation | Wall - grafitti | Wall - normal | Wall - other | Wall - stain | Accuracy | Macro avg | Weighted avg |
+|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+| 0.8877 | 0.99 | 146 | 0.8076 | {'precision': 0.5294117647058824, 'recall': 0.6428571428571429, 'f1-score': 0.5806451612903226, 'support': 56} | {'precision': 0.9459459459459459, 'recall': 0.9722222222222222, 'f1-score': 0.9589041095890412, 'support': 36} | {'precision': 0.7916666666666666, 'recall': 0.8636363636363636, 'f1-score': 0.8260869565217391, 'support': 44} | {'precision': 0.8780487804878049, 'recall': 1.0, 'f1-score': 0.9350649350649352, 'support': 36} | {'precision': 0.9807692307692307, 'recall': 0.9622641509433962, 'f1-score': 0.9714285714285713, 'support': 53} | {'precision': 0.7037037037037037, 'recall': 0.6785714285714286, 'f1-score': 0.6909090909090909, 'support': 28} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3} | {'precision': 0.5588235294117647, 'recall': 0.48717948717948717, 'f1-score': 0.5205479452054794, 'support': 39} | {'precision': 0.6625, 'recall': 0.8983050847457628, 'f1-score': 0.762589928057554, 'support': 59} | {'precision': 0.7666666666666667, 'recall': 0.92, 'f1-score': 0.8363636363636363, 'support': 25} | {'precision': 0.9411764705882353, 'recall': 0.4, 'f1-score': 0.5614035087719298, 'support': 40} | {'precision': 0.8780487804878049, 'recall': 0.75, 'f1-score': 0.8089887640449439, 'support': 48} | {'precision': 0.7659574468085106, 'recall': 0.6545454545454545, 'f1-score': 0.7058823529411765, 'support': 55} | 0.7625 | {'precision': 0.7232860758647859, 'recall': 0.7099677949770199, 'f1-score': 0.7045242277068015, 'support': 522} | {'precision': 0.7735594241725829, 'recall': 0.7624521072796935, 'f1-score': 0.7551578669008161, 'support': 522} |
+| 0.8113 | 2.0 | 293 | 0.6101 | {'precision': 0.5555555555555556, 'recall': 0.7142857142857143, 'f1-score': 0.6250000000000001, 'support': 56} | {'precision': 0.9714285714285714, 'recall': 0.9444444444444444, 'f1-score': 0.9577464788732395, 'support': 36} | {'precision': 0.8888888888888888, 'recall': 0.9090909090909091, 'f1-score': 0.8988764044943819, 'support': 44} | {'precision': 0.8974358974358975, 'recall': 0.9722222222222222, 'f1-score': 0.9333333333333333, 'support': 36} | {'precision': 0.9622641509433962, 'recall': 0.9622641509433962, 'f1-score': 0.9622641509433962, 'support': 53} | {'precision': 0.7857142857142857, 'recall': 0.7857142857142857, 'f1-score': 0.7857142857142857, 'support': 28} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3} | {'precision': 0.4117647058823529, 'recall': 0.5384615384615384, 'f1-score': 0.4666666666666667, 'support': 39} | {'precision': 0.8153846153846154, 'recall': 0.8983050847457628, 'f1-score': 0.8548387096774194, 'support': 59} | {'precision': 0.7741935483870968, 'recall': 0.96, 'f1-score': 0.8571428571428571, 'support': 25} | {'precision': 0.75, 'recall': 0.45, 'f1-score': 0.5625000000000001, 'support': 40} | {'precision': 0.9024390243902439, 'recall': 0.7708333333333334, 'f1-score': 0.8314606741573034, 'support': 48} | {'precision': 0.8157894736842105, 'recall': 0.5636363636363636, 'f1-score': 0.6666666666666666, 'support': 55} | 0.7778 | {'precision': 0.7331429782842396, 'recall': 0.7284044651444593, 'f1-score': 0.7232469405899653, 'support': 522} | {'precision': 0.7896708656541912, 'recall': 0.7777777777777778, 'f1-score': 0.7754219719596664, 'support': 522} |
+| 0.6069 | 2.98 | 438 | 0.5493 | {'precision': 0.5735294117647058, 'recall': 0.6964285714285714, 'f1-score': 0.6290322580645161, 'support': 56} | {'precision': 0.9722222222222222, 'recall': 0.9722222222222222, 'f1-score': 0.9722222222222222, 'support': 36} | {'precision': 0.8913043478260869, 'recall': 0.9318181818181818, 'f1-score': 0.9111111111111111, 'support': 44} | {'precision': 0.8974358974358975, 'recall': 0.9722222222222222, 'f1-score': 0.9333333333333333, 'support': 36} | {'precision': 0.9622641509433962, 'recall': 0.9622641509433962, 'f1-score': 0.9622641509433962, 'support': 53} | {'precision': 0.6470588235294118, 'recall': 0.7857142857142857, 'f1-score': 0.7096774193548386, 'support': 28} | {'precision': 0.0, 'recall': 0.0, 'f1-score': 0.0, 'support': 3} | {'precision': 0.5161290322580645, 'recall': 0.41025641025641024, 'f1-score': 0.4571428571428572, 'support': 39} | {'precision': 0.8387096774193549, 'recall': 0.8813559322033898, 'f1-score': 0.859504132231405, 'support': 59} | {'precision': 0.96, 'recall': 0.96, 'f1-score': 0.96, 'support': 25} | {'precision': 0.7575757575757576, 'recall': 0.625, 'f1-score': 0.6849315068493151, 'support': 40} | {'precision': 0.8888888888888888, 'recall': 0.8333333333333334, 'f1-score': 0.8602150537634408, 'support': 48} | {'precision': 0.84, 'recall': 0.7636363636363637, 'f1-score': 0.8000000000000002, 'support': 55} | 0.8084 | {'precision': 0.7496244776818297, 'recall': 0.753403974906029, 'f1-score': 0.7491872342320336, 'support': 522} | {'precision': 0.805637888745576, 'recall': 0.8084291187739464, 'f1-score': 0.8046217646882747, 'support': 522} |
 
 
 ### Framework versions
 
-- Transformers 4.
+- Transformers 4.33.1
 - Pytorch 2.0.1+cu118
-- Datasets 2.14.
+- Datasets 2.14.5
 - Tokenizers 0.13.3
```
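The per-class dicts above have the shape produced by scikit-learn's `classification_report(..., output_dict=True)`. As a sanity check on the aggregate rows, here is a minimal pure-Python sketch of how "Macro avg" (unweighted mean over classes) and "Weighted avg" (support-weighted mean) are derived; the class names mirror the card, but the numbers are toy values, not the card's:

```python
def macro_avg(per_class, key):
    """Unweighted mean of a metric over classes."""
    return sum(m[key] for m in per_class.values()) / len(per_class)

def weighted_avg(per_class, key):
    """Support-weighted mean: classes with more eval samples count more."""
    total = sum(m["support"] for m in per_class.values())
    return sum(m[key] * m["support"] for m in per_class.values()) / total

# Toy example with two of the card's classes (illustrative values only).
per_class = {
    "Crack":    {"precision": 0.5, "recall": 0.6, "f1-score": 0.545, "support": 50},
    "Spalling": {"precision": 0.9, "recall": 0.8, "f1-score": 0.847, "support": 150},
}

print(macro_avg(per_class, "precision"))     # (0.5 + 0.9) / 2 = 0.7
print(weighted_avg(per_class, "precision"))  # (0.5*50 + 0.9*150) / 200 = 0.8
```

Note that for single-label classification over the whole eval set, the weighted-avg recall equals the overall accuracy, which is consistent with the card (both are 0.8084291187739464).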