---
license: other
base_model: nvidia/mit-b5
tags:
- generated_from_trainer
model-index:
- name: FINAL_ecc_segformer
  results: []
---
# FINAL_ecc_segformer

This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.0749
- Mean Iou: 0.1968
- Mean Accuracy: 0.3939
- Overall Accuracy: 0.3939
- Accuracy Background: nan
- Accuracy Crack: 0.3939
- Iou Background: 0.0
- Iou Crack: 0.3936
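
The card does not include usage code, so the snippet below is a minimal inference sketch. It assumes the checkpoint is available under the placeholder id `FINAL_ecc_segformer` and that label index 1 corresponds to the crack class; both assumptions should be checked against the actual repository.

```python
# Minimal inference sketch. Assumptions: "FINAL_ecc_segformer" is a placeholder
# repo id/path, and label index 1 is the crack class.
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

checkpoint = "FINAL_ecc_segformer"  # hypothetical repo id or local path
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

image = Image.open("surface.jpg").convert("RGB")  # any RGB image of a surface
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, num_labels, H/4, W/4)

# SegFormer predicts at 1/4 of the input resolution; upsample before the argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
crack_mask = upsampled.argmax(dim=1)[0]  # 0 = background, 1 = crack (assumed)
```
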
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mapped onto `TrainingArguments` in the sketch after the list):

- learning_rate: 6e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- training_steps: 10000
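
For reference, a hedged sketch of how these settings map onto `transformers.TrainingArguments` is shown below; the output directory and evaluation strategy are assumptions rather than values taken from the card, and the listed Adam betas/epsilon already match the Trainer's default optimizer settings.

```python
# Sketch only: maps the hyperparameters above onto TrainingArguments.
# output_dir and evaluation_strategy are assumptions, not taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="FINAL_ecc_segformer",
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=1337,
    lr_scheduler_type="polynomial",  # polynomial decay schedule
    max_steps=10_000,                # training_steps: 10000
    evaluation_strategy="epoch",     # the results table reports one row per epoch
)
# These arguments would then be passed to transformers.Trainer together with a
# SegFormer model and the (unspecified) crack-segmentation dataset.
```
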
### Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Background | Accuracy Crack | Iou Background | Iou Crack |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.0534 | 1.0 | 548 | 0.0614 | 0.1368 | 0.2750 | 0.2750 | nan | 0.2750 | 0.0 | 0.2736 |
| 0.058 | 2.0 | 1096 | 0.1018 | 0.2093 | 0.4238 | 0.4238 | nan | 0.4238 | 0.0 | 0.4186 |
| 0.0482 | 3.0 | 1644 | 0.0508 | 0.1791 | 0.4315 | 0.4315 | nan | 0.4315 | 0.0 | 0.3582 |
| 0.0338 | 4.0 | 2192 | 0.0569 | 0.1849 | 0.3716 | 0.3716 | nan | 0.3716 | 0.0 | 0.3698 |
| 0.0395 | 5.0 | 2740 | 0.0597 | 0.1745 | 0.3506 | 0.3506 | nan | 0.3506 | 0.0 | 0.3490 |
| 0.0372 | 6.0 | 3288 | 0.0509 | 0.2298 | 0.4635 | 0.4635 | nan | 0.4635 | 0.0 | 0.4597 |
| 0.0402 | 7.0 | 3836 | 0.0620 | 0.1751 | 0.3507 | 0.3507 | nan | 0.3507 | 0.0 | 0.3503 |
| 0.038 | 8.0 | 4384 | 0.0681 | 0.1905 | 0.3815 | 0.3815 | nan | 0.3815 | 0.0 | 0.3810 |
| 0.0393 | 9.0 | 4932 | 0.0685 | 0.2213 | 0.4433 | 0.4433 | nan | 0.4433 | 0.0 | 0.4425 |
| 0.0376 | 10.0 | 5480 | 0.0590 | 0.1962 | 0.3929 | 0.3929 | nan | 0.3929 | 0.0 | 0.3924 |
| 0.0381 | 11.0 | 6028 | 0.0626 | 0.1891 | 0.3801 | 0.3801 | nan | 0.3801 | 0.0 | 0.3783 |
| 0.034 | 12.0 | 6576 | 0.0623 | 0.2061 | 0.4162 | 0.4162 | nan | 0.4162 | 0.0 | 0.4122 |
| 0.0301 | 13.0 | 7124 | 0.0831 | 0.1832 | 0.3669 | 0.3669 | nan | 0.3669 | 0.0 | 0.3664 |
| 0.034 | 14.0 | 7672 | 0.0636 | 0.2059 | 0.4119 | 0.4119 | nan | 0.4119 | 0.0 | 0.4118 |
| 0.0303 | 15.0 | 8220 | 0.0705 | 0.1931 | 0.3864 | 0.3864 | nan | 0.3864 | 0.0 | 0.3862 |
| 0.0338 | 16.0 | 8768 | 0.0685 | 0.2101 | 0.4206 | 0.4206 | nan | 0.4206 | 0.0 | 0.4202 |
| 0.0229 | 17.0 | 9316 | 0.0706 | 0.2099 | 0.4204 | 0.4204 | nan | 0.4204 | 0.0 | 0.4197 |
| 0.0337 | 18.0 | 9864 | 0.0742 | 0.1982 | 0.3968 | 0.3968 | nan | 0.3968 | 0.0 | 0.3965 |
| 0.0257 | 18.25 | 10000 | 0.0749 | 0.1968 | 0.3939 | 0.3939 | nan | 0.3939 | 0.0 | 0.3936 |

### Framework versions

- Transformers 4.34.1
- Pytorch 2.1.0+cpu
- Datasets 2.14.6
- Tokenizers 0.14.1
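
A quick way to check that a local environment matches these pins is sketched below; the expected version strings in the comments are simply the ones listed above.

```python
# Sanity-check the local environment against the versions listed above.
import datasets
import tokenizers
import torch
import transformers

print("transformers", transformers.__version__)  # expected: 4.34.1
print("torch", torch.__version__)                # expected: 2.1.0+cpu
print("datasets", datasets.__version__)          # expected: 2.14.6
print("tokenizers", tokenizers.__version__)      # expected: 0.14.1
```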