segformer-b0-scene-parse-150

This model is a fine-tuned version of nvidia/mit-b0 on the scene_parse_150 dataset (a usage sketch follows the evaluation results below). It achieves the following results on the evaluation set:

  • Loss: 4.7353
  • Mean Iou: 0.0111
  • Mean Accuracy: 0.0697
  • Overall Accuracy: 0.2528
  • Per Category Iou: [0.017874398009988864, 0.05282654787145342, 0.6358665398023602, 0.11651097689775745, 0.2861381543323793, 0.013614930459246345, 0.0, 0.000756546442687747, 0.0, 0.03785590778097983, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0004929751047572097, 0.0, 0.14081967337580004, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.007816691740397463, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0]
  • Per Category Accuracy: [0.018518965253821597, 0.07998052334493516, 0.8444809535877515, 0.25298488770142774, 0.35968689660920417, 0.019071300911381726, 0.0, 0.0007569496474004959, 0.0, 0.04806566437169219, nan, nan, 0.0, nan, nan, 0.0, 0.0004929924642580463, 0.0, 0.8067638103523271, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.009736536911696294, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan]
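Since usage is not documented yet, here is a minimal inference sketch. It assumes the checkpoint is published under this repository's id (Electrotubbie/segformer-b0-scene-parse-150) and that the image processor configuration was pushed alongside the weights; if not, the processor can be loaded from nvidia/mit-b0 instead. The image URL is only a placeholder.

```python
import requests
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

checkpoint = "Electrotubbie/segformer-b0-scene-parse-150"  # this repository (assumed id)
processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

# Any RGB image works; this URL is just a placeholder example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape (1, num_labels, H/4, W/4)

# Upsample the logits to the input resolution and take a per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation = upsampled.argmax(dim=1)[0]  # (H, W) map of predicted class indices
```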

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
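The exact data preparation is not documented. As a rough sketch under assumptions (the split slice and train/validation proportions below are illustrative, not the authors' actual setup, and the dataset id is assumed to resolve on the Hub as scene_parse_150):

```python
from datasets import load_dataset

# Illustrative slice; the amount of data used for fine-tuning is not documented.
ds = load_dataset("scene_parse_150", split="train[:1000]")
ds = ds.train_test_split(test_size=0.2, seed=42)
train_ds, eval_ds = ds["train"], ds["test"]

# Each example pairs an RGB image with a per-pixel annotation map.
print(train_ds[0]["image"], train_ds[0]["annotation"])
```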

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 6e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 2
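These settings map roughly onto the TrainingArguments sketch below; the output directory and the evaluation/saving strategies are assumptions, since only the values listed above are documented. Adam's betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b0-scene-parse-150",  # assumed name
    learning_rate=6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    eval_strategy="epoch",  # assumed; the card reports one evaluation per epoch
    save_strategy="epoch",  # assumed
)
```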

Training results

| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------------:|:----------------:|:----------------:|:---------------------:|
| 4.9596 | 1.0 | 20 | 4.9061 | 0.0079 | 0.0491 | 0.2048 | [0.008141550600753182, 0.023334901539081927, 0.6072442486539403, 0.07246742753257247, 0.1463094452851175, 0.0037985268476675087, 0.0, 0.0002857871117736566, 0.0, 0.014472586767434339, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.1081675562024907, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.00879925321804068, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0] | [0.0082141220177068, 0.03130691962272998, 0.7589027448855642, 0.14377556984219755, 0.15714206337079276, 0.004740073043748543, 0.0, 0.0002857871117736566, 0.0, 0.01991124537492389, nan, nan, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.5818181818181818, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.01173495128455455, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
| 4.8127 | 2.0 | 40 | 4.7353 | 0.0111 | 0.0697 | 0.2528 | [0.017874398009988864, 0.05282654787145342, 0.6358665398023602, 0.11651097689775745, 0.2861381543323793, 0.013614930459246345, 0.0, 0.000756546442687747, 0.0, 0.03785590778097983, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0004929751047572097, 0.0, 0.14081967337580004, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.007816691740397463, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, nan, 0.0, nan, nan, nan, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0, nan, nan, 0.0, 0.0, nan, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, nan, 0.0, 0.0, 0.0, nan, 0.0, nan, 0.0] | [0.018518965253821597, 0.07998052334493516, 0.8444809535877515, 0.25298488770142774, 0.35968689660920417, 0.019071300911381726, 0.0, 0.0007569496474004959, 0.0, 0.04806566437169219, nan, nan, 0.0, nan, nan, 0.0, 0.0004929924642580463, 0.0, 0.8067638103523271, 0.0, 0.0, 0.0, 0.0, nan, nan, nan, nan, nan, nan, 0.009736536911696294, nan, nan, 0.0, 0.0, nan, nan, 0.0, nan, 0.0, 0.0, 0.0, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.0, nan, 0.0, nan, 0.0, nan, 0.0, nan, nan, nan, nan, nan, nan, nan, nan, nan] |
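The metric names above (mean IoU, mean/overall accuracy, per-category IoU and accuracy) correspond to the output of the evaluate library's mean_iou metric. The snippet below is a hedged sketch of how such numbers are typically computed, not the authors' exact evaluation code; the ignore_index and reduce_labels settings are assumptions.

```python
import numpy as np
import evaluate

metric = evaluate.load("mean_iou")

# `predictions` and `references` are lists of (H, W) integer label maps;
# the dummy arrays here stand in for real model outputs and ground-truth masks.
predictions = [np.zeros((64, 64), dtype=np.int64)]
references = [np.zeros((64, 64), dtype=np.int64)]

results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=150,       # scene_parse_150 defines 150 semantic classes
    ignore_index=255,     # assumption: label value used for unannotated pixels
    reduce_labels=False,  # assumption: depends on how labels were preprocessed
)
print(results["mean_iou"], results["mean_accuracy"], results["overall_accuracy"])
```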

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1