
arabert_cross_development_task1_fold2

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02 on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how the Qwk and Mse metrics can be computed follows the list):

  • Loss: 0.8675
  • Qwk: 0.0354
  • Mse: 0.8675
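
For reference, Qwk denotes Cohen's quadratic weighted kappa and Mse is the mean squared error. The card does not include the evaluation code, so the following is only a minimal sketch of how these two metrics can be computed with scikit-learn; rounding continuous predictions to integer scores before computing kappa is an assumption, not the author's method.

```python
# Minimal sketch of the reported Qwk and Mse metrics (not the author's
# actual evaluation code). Rounding continuous predictions to integer
# scores before computing kappa is an assumption.
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = [3, 1, 2, 4, 2]            # hypothetical gold scores
y_pred = [2.7, 1.2, 2.1, 3.4, 2.0]  # hypothetical model outputs

mse = mean_squared_error(y_true, y_pred)
qwk = cohen_kappa_score(y_true, [round(p) for p in y_pred], weights="quadratic")
print(f"Mse: {mse:.4f}  Qwk: {qwk:.4f}")
```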

Model description

More information needed
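
Pending a fuller description, the checkpoint can be loaded with the standard Transformers API. A minimal sketch follows; the single-output scoring (regression) head is an assumption inferred from the reported Qwk/Mse metrics, since the card does not state the task explicitly.

```python
# Minimal loading sketch. The scoring (regression) head is an assumption
# based on the reported Qwk/Mse metrics; the card does not name the task.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "salbatarni/arabert_cross_development_task1_fold2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("نص عربي للتقييم", return_tensors="pt")  # "Arabic text to score"
with torch.no_grad():
    print(model(**inputs).logits)
```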

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a Trainer setup sketch matching them follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
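
Below is a minimal sketch of a Trainer setup matching these hyperparameters. The two-example dataset and the num_labels=1 head are hypothetical placeholders; this is not the author's training pipeline.

```python
# Sketch of a Trainer configuration matching the listed hyperparameters.
# The two-example dataset and the num_labels=1 head are hypothetical
# placeholders; the actual data pipeline is not described in this card.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "aubmindlab/bert-base-arabertv02"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

def tok(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

ds = Dataset.from_dict({"text": ["نص أول", "نص ثان"], "label": [1.0, 3.0]}).map(tok, batched=True)

args = TrainingArguments(
    output_dir="arabert_cross_development_task1_fold2",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8,  # card's Adam settings
)
Trainer(model=model, args=args, train_dataset=ds, eval_dataset=ds).train()
```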

Training results

| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| No log | 0.1176 | 2 | 3.8664 | -0.0149 | 3.8664 |
| No log | 0.2353 | 4 | 1.4090 | 0.0140 | 1.4090 |
| No log | 0.3529 | 6 | 0.8889 | 0.0893 | 0.8889 |
| No log | 0.4706 | 8 | 0.9277 | -0.1096 | 0.9277 |
| No log | 0.5882 | 10 | 0.9033 | -0.0849 | 0.9033 |
| No log | 0.7059 | 12 | 0.8647 | -0.0289 | 0.8647 |
| No log | 0.8235 | 14 | 0.8524 | -0.0029 | 0.8524 |
| No log | 0.9412 | 16 | 0.8579 | 0.0335 | 0.8579 |
| No log | 1.0588 | 18 | 0.8736 | 0.1138 | 0.8736 |
| No log | 1.1765 | 20 | 0.8546 | 0.1138 | 0.8546 |
| No log | 1.2941 | 22 | 0.8346 | 0.0370 | 0.8346 |
| No log | 1.4118 | 24 | 0.8410 | -0.0132 | 0.8410 |
| No log | 1.5294 | 26 | 0.8430 | 0.0871 | 0.8430 |
| No log | 1.6471 | 28 | 0.8280 | 0.0054 | 0.8280 |
| No log | 1.7647 | 30 | 0.8019 | 0.1435 | 0.8019 |
| No log | 1.8824 | 32 | 0.8572 | 0.0264 | 0.8572 |
| No log | 2.0 | 34 | 0.8264 | 0.0249 | 0.8264 |
| No log | 2.1176 | 36 | 0.8491 | 0.0318 | 0.8491 |
| No log | 2.2353 | 38 | 0.9465 | 0.0 | 0.9465 |
| No log | 2.3529 | 40 | 0.8744 | 0.0076 | 0.8744 |
| No log | 2.4706 | 42 | 0.8837 | 0.0105 | 0.8837 |
| No log | 2.5882 | 44 | 0.9227 | -0.0155 | 0.9227 |
| No log | 2.7059 | 46 | 0.8707 | -0.0166 | 0.8707 |
| No log | 2.8235 | 48 | 0.9522 | -0.0246 | 0.9522 |
| No log | 2.9412 | 50 | 1.0279 | -0.0279 | 1.0279 |
| No log | 3.0588 | 52 | 0.9857 | 0.0028 | 0.9857 |
| No log | 3.1765 | 54 | 0.8984 | 0.0071 | 0.8984 |
| No log | 3.2941 | 56 | 0.8708 | 0.0356 | 0.8708 |
| No log | 3.4118 | 58 | 0.8691 | -0.0219 | 0.8691 |
| No log | 3.5294 | 60 | 0.8666 | 0.0030 | 0.8666 |
| No log | 3.6471 | 62 | 0.8682 | -0.0216 | 0.8682 |
| No log | 3.7647 | 64 | 0.8560 | 0.0005 | 0.8560 |
| No log | 3.8824 | 66 | 0.8587 | 0.0154 | 0.8587 |
| No log | 4.0 | 68 | 0.8679 | 0.0575 | 0.8679 |
| No log | 4.1176 | 70 | 0.8602 | 0.0363 | 0.8602 |
| No log | 4.2353 | 72 | 0.8571 | -0.0005 | 0.8571 |
| No log | 4.3529 | 74 | 0.8496 | 0.0170 | 0.8496 |
| No log | 4.4706 | 76 | 0.8550 | 0.0797 | 0.8550 |
| No log | 4.5882 | 78 | 0.8587 | 0.0797 | 0.8587 |
| No log | 4.7059 | 80 | 0.8682 | 0.0574 | 0.8682 |
| No log | 4.8235 | 82 | 0.8811 | 0.0577 | 0.8811 |
| No log | 4.9412 | 84 | 0.8867 | 0.0577 | 0.8867 |
| No log | 5.0588 | 86 | 0.8858 | 0.0576 | 0.8858 |
| No log | 5.1765 | 88 | 0.9409 | 0.0065 | 0.9409 |
| No log | 5.2941 | 90 | 0.9595 | 0.0065 | 0.9595 |
| No log | 5.4118 | 92 | 0.9100 | 0.0576 | 0.9100 |
| No log | 5.5294 | 94 | 0.9069 | 0.0376 | 0.9069 |
| No log | 5.6471 | 96 | 0.9321 | -0.0238 | 0.9321 |
| No log | 5.7647 | 98 | 0.9246 | -0.0087 | 0.9246 |
| No log | 5.8824 | 100 | 0.9102 | 0.0061 | 0.9102 |
| No log | 6.0 | 102 | 0.9079 | -0.0091 | 0.9079 |
| No log | 6.1176 | 104 | 0.8805 | 0.0555 | 0.8805 |
| No log | 6.2353 | 106 | 0.8765 | 0.0356 | 0.8765 |
| No log | 6.3529 | 108 | 0.8826 | 0.0571 | 0.8826 |
| No log | 6.4706 | 110 | 0.8859 | 0.0572 | 0.8859 |
| No log | 6.5882 | 112 | 0.8716 | 0.0360 | 0.8716 |
| No log | 6.7059 | 114 | 0.8677 | 0.0363 | 0.8677 |
| No log | 6.8235 | 116 | 0.8685 | 0.0565 | 0.8685 |
| No log | 6.9412 | 118 | 0.8800 | 0.0351 | 0.8800 |
| No log | 7.0588 | 120 | 0.9136 | -0.0161 | 0.9136 |
| No log | 7.1765 | 122 | 0.9336 | 0.0076 | 0.9336 |
| No log | 7.2941 | 124 | 0.9134 | 0.0336 | 0.9134 |
| No log | 7.4118 | 126 | 0.8790 | 0.0352 | 0.8790 |
| No log | 7.5294 | 128 | 0.8678 | 0.0560 | 0.8678 |
| No log | 7.6471 | 130 | 0.8728 | 0.0731 | 0.8728 |
| No log | 7.7647 | 132 | 0.8699 | 0.0363 | 0.8699 |
| No log | 7.8824 | 134 | 0.8801 | 0.0355 | 0.8801 |
| No log | 8.0 | 136 | 0.9267 | 0.0082 | 0.9267 |
| No log | 8.1176 | 138 | 0.9701 | 0.0048 | 0.9701 |
| No log | 8.2353 | 140 | 1.0044 | 0.0041 | 1.0044 |
| No log | 8.3529 | 142 | 0.9968 | 0.0041 | 0.9968 |
| No log | 8.4706 | 144 | 0.9462 | -0.0196 | 0.9462 |
| No log | 8.5882 | 146 | 0.8966 | 0.0119 | 0.8966 |
| No log | 8.7059 | 148 | 0.8623 | 0.0354 | 0.8623 |
| No log | 8.8235 | 150 | 0.8572 | 0.1338 | 0.8572 |
| No log | 8.9412 | 152 | 0.8595 | 0.0913 | 0.8595 |
| No log | 9.0588 | 154 | 0.8634 | 0.0886 | 0.8634 |
| No log | 9.1765 | 156 | 0.8631 | 0.0886 | 0.8631 |
| No log | 9.2941 | 158 | 0.8590 | 0.1258 | 0.8590 |
| No log | 9.4118 | 160 | 0.8561 | 0.1130 | 0.8561 |
| No log | 9.5294 | 162 | 0.8560 | 0.1347 | 0.8560 |
| No log | 9.6471 | 164 | 0.8584 | 0.0980 | 0.8584 |
| No log | 9.7647 | 166 | 0.8627 | 0.0985 | 0.8627 |
| No log | 9.8824 | 168 | 0.8662 | 0.0354 | 0.8662 |
| No log | 10.0 | 170 | 0.8675 | 0.0354 | 0.8675 |

Framework versions

  • Transformers 4.44.0
  • Pytorch 2.4.0
  • Datasets 2.21.0
  • Tokenizers 0.19.1