final-lr2e-5-bs16-fp16-2

This model is a fine-tuned version of clincolnoz/LessSexistBERT on the EDOS (Explainable Detection of Online Sexism) dataset (https://github.com/rewire-online/edos). It achieves the following results on the evaluation set:

  • Loss: 0.3458
  • F1 Macro: 0.8374
  • F1 Weighted: 0.8806
  • F1: 0.7535
  • Accuracy: 0.8808
  • Confusion Matrix: [[2794, 236], [241, 729]]
  • Confusion Matrix Norm: [[0.92211221, 0.07788779], [0.24845361, 0.75154639]]
  • Classification Report:

|              | precision | recall   | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0            | 0.920593  | 0.922112 | 0.921352 | 3030    |
| 1            | 0.755440  | 0.751546 | 0.753488 | 970     |
| accuracy     |           |          | 0.880750 | 4000    |
| macro avg    | 0.838017  | 0.836829 | 0.837420 | 4000    |
| weighted avg | 0.880544  | 0.880750 | 0.880645 | 4000    |
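
A minimal usage sketch for running inference with this checkpoint through the Transformers sequence-classification API. The Hub repository id (`clincolnoz/LessSexistBERT-edos`) and the label mapping (0 = not sexist, 1 = sexist) are assumptions based on the EDOS task rather than details confirmed by this card:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

model_id = "clincolnoz/LessSexistBERT-edos"  # assumed Hub repo id for this checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

text = "Example sentence to classify."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()  # assumed mapping: 0 = not sexist, 1 = sexist
print(pred)
```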

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 12345
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3.0
  • mixed_precision_training: Native AMP
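
For reference, a hedged sketch of how the settings above map onto Hugging Face `TrainingArguments`. The output directory is a placeholder and the dataset loading and `Trainer` call are omitted; this is a reconstruction of the listed hyperparameters, not the authors' actual training script:

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="final-lr2e-5-bs16-fp16-2",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=12345,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # Native AMP mixed-precision training
)
```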

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Weighted | F1     | Accuracy | Confusion Matrix          | Confusion Matrix Norm                                 |
|---------------|-------|------|-----------------|----------|-------------|--------|----------|---------------------------|-------------------------------------------------------|
| 0.3253        | 1.0   | 1000 | 0.3011          | 0.8256   | 0.8748      | 0.7301 | 0.8780   | [[2852, 178], [310, 660]] | [[0.94125413, 0.05874587], [0.31958763, 0.68041237]]  |
| 0.2439        | 2.0   | 2000 | 0.3122          | 0.8411   | 0.8848      | 0.7562 | 0.8865   | [[2842, 188], [266, 704]] | [[0.9379538, 0.0620462], [0.2742268, 0.7257732]]      |
| 0.1962        | 3.0   | 3000 | 0.3458          | 0.8374   | 0.8806      | 0.7535 | 0.8808   | [[2794, 236], [241, 729]] | [[0.92211221, 0.07788779], [0.24845361, 0.75154639]]  |

Classification report after epoch 1 (step 1000):

|              | precision | recall   | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0            | 0.901961  | 0.941254 | 0.921189 | 3030    |
| 1            | 0.787589  | 0.680412 | 0.730088 | 970     |
| accuracy     |           |          | 0.878000 | 4000    |
| macro avg    | 0.844775  | 0.810833 | 0.825639 | 4000    |
| weighted avg | 0.874226  | 0.878000 | 0.874847 | 4000    |

Classification report after epoch 2 (step 2000):

|              | precision | recall   | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0            | 0.914414  | 0.937954 | 0.926035 | 3030    |
| 1            | 0.789238  | 0.725773 | 0.756176 | 970     |
| accuracy     |           |          | 0.886500 | 4000    |
| macro avg    | 0.851826  | 0.831863 | 0.841105 | 4000    |
| weighted avg | 0.884059  | 0.886500 | 0.884844 | 4000    |

Classification report after epoch 3 (step 3000):

|              | precision | recall   | f1-score | support |
|--------------|-----------|----------|----------|---------|
| 0            | 0.920593  | 0.922112 | 0.921352 | 3030    |
| 1            | 0.755440  | 0.751546 | 0.753488 | 970     |
| accuracy     |           |          | 0.880750 | 4000    |
| macro avg    | 0.838017  | 0.836829 | 0.837420 | 4000    |
| weighted avg | 0.880544  | 0.880750 | 0.880645 | 4000    |
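
The metrics above appear to follow scikit-learn's conventions (binary F1 for the positive class, macro/weighted averages, and a row-normalized confusion matrix). A minimal sketch of how they can be computed from labels and predictions; the toy arrays below are placeholders for the actual evaluation split:

```python
from sklearn.metrics import (
    accuracy_score,
    classification_report,
    confusion_matrix,
    f1_score,
)

# Placeholder labels/predictions; in practice these come from the evaluation split.
y_true = [0, 0, 0, 1, 1, 0]
y_pred = [0, 0, 1, 1, 0, 0]

print("F1:", f1_score(y_true, y_pred))  # positive class (label 1)
print("F1 Macro:", f1_score(y_true, y_pred, average="macro"))
print("F1 Weighted:", f1_score(y_true, y_pred, average="weighted"))
print("Accuracy:", accuracy_score(y_true, y_pred))
print("Confusion Matrix:\n", confusion_matrix(y_true, y_pred))
print("Confusion Matrix Norm:\n", confusion_matrix(y_true, y_pred, normalize="true"))
print(classification_report(y_true, y_pred, digits=6))
```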

Framework versions

  • Transformers 4.27.0.dev0
  • Pytorch 1.13.1+cu117
  • Datasets 2.9.0
  • Tokenizers 0.13.2