
# arabert_cross_development_task1_fold5

This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 0.3225
- Qwk: 0.7089
- Mse: 0.3218
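
The card does not include the evaluation code, but the metric names suggest quadratic weighted kappa (Qwk) and mean squared error (Mse) computed from a single-output regression head. A plausible sketch of a Trainer-style metrics function, with all names illustrative rather than taken from the original script:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

def compute_metrics(eval_pred):
    """Illustrative metrics function producing Qwk/Mse as reported above."""
    predictions, labels = eval_pred
    preds = np.asarray(predictions).squeeze()
    labels = np.asarray(labels).squeeze()
    return {
        "qwk": cohen_kappa_score(
            np.rint(labels).astype(int),  # gold scores, rounded to integers
            np.rint(preds).astype(int),   # model outputs, rounded to integers
            weights="quadratic",          # quadratic weighting -> QWK
        ),
        "mse": mean_squared_error(labels, preds),
    }
```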

## Model description

More information needed

## Intended uses & limitations

More information needed
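
In the absence of documented usage, here is a minimal inference sketch. It assumes the checkpoint loads as a sequence-classification model with a single regression output (suggested by the Qwk/Mse metrics) and that the repo id matches this card's title under the `salbatarni` namespace; the label scale is undocumented.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "salbatarni/arabert_cross_development_task1_fold5"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Score a sample Arabic sentence; interpretation of the output is undocumented.
inputs = tokenizer("مثال على نص عربي", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.squeeze().item())
```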

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
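
The original training script is not part of this card, but as a sketch the reported values map onto transformers `TrainingArguments` roughly as follows (`output_dir` is hypothetical):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="arabert_cross_development_task1_fold5",  # hypothetical
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    # Adam settings written out to mirror the list above
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```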

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Qwk    | Mse    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| No log        | 0.1333 | 2    | 1.6054          | 0.1118 | 1.6042 |
| No log        | 0.2667 | 4    | 0.7796          | 0.3420 | 0.7792 |
| No log        | 0.4    | 6    | 0.8606          | 0.4875 | 0.8600 |
| No log        | 0.5333 | 8    | 0.7185          | 0.6145 | 0.7176 |
| No log        | 0.6667 | 10   | 0.4960          | 0.5784 | 0.4951 |
| No log        | 0.8    | 12   | 0.4504          | 0.5814 | 0.4497 |
| No log        | 0.9333 | 14   | 0.4103          | 0.6104 | 0.4096 |
| No log        | 1.0667 | 16   | 0.3725          | 0.6808 | 0.3715 |
| No log        | 1.2    | 18   | 0.4101          | 0.8013 | 0.4091 |
| No log        | 1.3333 | 20   | 0.3292          | 0.7235 | 0.3286 |
| No log        | 1.4667 | 22   | 0.3212          | 0.6809 | 0.3206 |
| No log        | 1.6    | 24   | 0.3406          | 0.7512 | 0.3398 |
| No log        | 1.7333 | 26   | 0.3852          | 0.7534 | 0.3842 |
| No log        | 1.8667 | 28   | 0.3920          | 0.7341 | 0.3909 |
| No log        | 2.0    | 30   | 0.4486          | 0.7835 | 0.4475 |
| No log        | 2.1333 | 32   | 0.4015          | 0.7929 | 0.4005 |
| No log        | 2.2667 | 34   | 0.2966          | 0.7228 | 0.2959 |
| No log        | 2.4    | 36   | 0.3163          | 0.6675 | 0.3156 |
| No log        | 2.5333 | 38   | 0.2965          | 0.7270 | 0.2958 |
| No log        | 2.6667 | 40   | 0.3334          | 0.7868 | 0.3325 |
| No log        | 2.8    | 42   | 0.3892          | 0.7967 | 0.3882 |
| No log        | 2.9333 | 44   | 0.3635          | 0.7640 | 0.3626 |
| No log        | 3.0667 | 46   | 0.3415          | 0.7020 | 0.3408 |
| No log        | 3.2    | 48   | 0.3452          | 0.6985 | 0.3445 |
| No log        | 3.3333 | 50   | 0.3467          | 0.7485 | 0.3460 |
| No log        | 3.4667 | 52   | 0.3606          | 0.7778 | 0.3598 |
| No log        | 3.6    | 54   | 0.3419          | 0.7735 | 0.3412 |
| No log        | 3.7333 | 56   | 0.3217          | 0.7477 | 0.3210 |
| No log        | 3.8667 | 58   | 0.3254          | 0.6951 | 0.3248 |
| No log        | 4.0    | 60   | 0.3366          | 0.6811 | 0.3360 |
| No log        | 4.1333 | 62   | 0.3255          | 0.7328 | 0.3248 |
| No log        | 4.2667 | 64   | 0.3255          | 0.7574 | 0.3248 |
| No log        | 4.4    | 66   | 0.3264          | 0.7713 | 0.3257 |
| No log        | 4.5333 | 68   | 0.3260          | 0.7538 | 0.3253 |
| No log        | 4.6667 | 70   | 0.3303          | 0.7599 | 0.3295 |
| No log        | 4.8    | 72   | 0.3278          | 0.7285 | 0.3270 |
| No log        | 4.9333 | 74   | 0.3399          | 0.7039 | 0.3391 |
| No log        | 5.0667 | 76   | 0.3696          | 0.6751 | 0.3689 |
| No log        | 5.2    | 78   | 0.3565          | 0.6740 | 0.3558 |
| No log        | 5.3333 | 80   | 0.3177          | 0.7247 | 0.3171 |
| No log        | 5.4667 | 82   | 0.3107          | 0.7637 | 0.3100 |
| No log        | 5.6    | 84   | 0.3037          | 0.7643 | 0.3031 |
| No log        | 5.7333 | 86   | 0.2968          | 0.7380 | 0.2962 |
| No log        | 5.8667 | 88   | 0.3026          | 0.6895 | 0.3020 |
| No log        | 6.0    | 90   | 0.2948          | 0.7283 | 0.2942 |
| No log        | 6.1333 | 92   | 0.2968          | 0.7351 | 0.2962 |
| No log        | 6.2667 | 94   | 0.3054          | 0.6898 | 0.3048 |
| No log        | 6.4    | 96   | 0.3335          | 0.6564 | 0.3329 |
| No log        | 6.5333 | 98   | 0.3257          | 0.6723 | 0.3250 |
| No log        | 6.6667 | 100  | 0.3148          | 0.7398 | 0.3141 |
| No log        | 6.8    | 102  | 0.3244          | 0.7519 | 0.3237 |
| No log        | 6.9333 | 104  | 0.3201          | 0.7549 | 0.3194 |
| No log        | 7.0667 | 106  | 0.3197          | 0.7204 | 0.3190 |
| No log        | 7.2    | 108  | 0.3241          | 0.7042 | 0.3234 |
| No log        | 7.3333 | 110  | 0.3257          | 0.7232 | 0.3250 |
| No log        | 7.4667 | 112  | 0.3300          | 0.7399 | 0.3293 |
| No log        | 7.6    | 114  | 0.3300          | 0.7379 | 0.3292 |
| No log        | 7.7333 | 116  | 0.3299          | 0.7286 | 0.3292 |
| No log        | 7.8667 | 118  | 0.3273          | 0.7096 | 0.3266 |
| No log        | 8.0    | 120  | 0.3291          | 0.6958 | 0.3284 |
| No log        | 8.1333 | 122  | 0.3265          | 0.6826 | 0.3258 |
| No log        | 8.2667 | 124  | 0.3190          | 0.7007 | 0.3182 |
| No log        | 8.4    | 126  | 0.3131          | 0.7151 | 0.3123 |
| No log        | 8.5333 | 128  | 0.3135          | 0.7284 | 0.3127 |
| No log        | 8.6667 | 130  | 0.3156          | 0.7284 | 0.3148 |
| No log        | 8.8    | 132  | 0.3172          | 0.7253 | 0.3164 |
| No log        | 8.9333 | 134  | 0.3198          | 0.7157 | 0.3190 |
| No log        | 9.0667 | 136  | 0.3222          | 0.7034 | 0.3214 |
| No log        | 9.2    | 138  | 0.3213          | 0.7089 | 0.3205 |
| No log        | 9.3333 | 140  | 0.3194          | 0.7170 | 0.3186 |
| No log        | 9.4667 | 142  | 0.3189          | 0.7241 | 0.3181 |
| No log        | 9.6    | 144  | 0.3199          | 0.7239 | 0.3191 |
| No log        | 9.7333 | 146  | 0.3215          | 0.7103 | 0.3207 |
| No log        | 9.8667 | 148  | 0.3223          | 0.7089 | 0.3215 |
| No log        | 10.0   | 150  | 0.3225          | 0.7089 | 0.3218 |

Training loss shows "No log" because the Trainer's default `logging_steps` (500) exceeds the 150 optimization steps in this run, so no training-loss value was ever recorded.

### Framework versions

- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1