---
tags:
  - generated_from_trainer
metrics:
  - f1
  - accuracy
base_model: clincolnoz/MoreSexistBERT
model-index:
  - name: final-lr2e-5-bs16-fp16-2
    results: []
---

# final-lr2e-5-bs16-fp16-2

This model is a fine-tuned version of [clincolnoz/MoreSexistBERT](https://huggingface.co/clincolnoz/MoreSexistBERT) on an unknown dataset. It achieves the following results on the evaluation set (a metric-recomputation sketch follows the classification report):

- Loss: 0.3337
- F1 Macro: 0.8461
- F1 Weighted: 0.8868
- F1: 0.7671
- Accuracy: 0.8868
- Confusion Matrix: [[2801, 229], [224, 746]]
- Confusion Matrix Norm: [[0.92442244, 0.07557756], [0.23092784, 0.76907216]]
- Classification Report:

| class        | precision | recall   | f1-score | support |
|:-------------|----------:|---------:|---------:|--------:|
| 0            | 0.925950  | 0.924422 | 0.925186 | 3030    |
| 1            | 0.765128  | 0.769072 | 0.767095 | 970     |
| accuracy     |           |          | 0.886750 | 4000    |
| macro avg    | 0.845539  | 0.846747 | 0.846140 | 4000    |
| weighted avg | 0.886951  | 0.886750 | 0.886849 | 4000    |
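
The metrics above can be recomputed from model predictions with scikit-learn. A minimal sketch; `y_true` and `y_pred` are placeholders, and the label mapping (1 = sexist) is an assumption based on the model name rather than something documented in this card:

```python
from sklearn.metrics import (
    accuracy_score,
    classification_report,
    confusion_matrix,
    f1_score,
)

# Placeholder labels and predictions; in practice these come from running the
# model on the (undocumented) evaluation set. 1 = sexist is an assumption.
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 1, 0, 0]

print("F1 Macro:", f1_score(y_true, y_pred, average="macro"))
print("F1 Weighted:", f1_score(y_true, y_pred, average="weighted"))
print("F1:", f1_score(y_true, y_pred))  # binary F1 for the positive class (1)
print("Accuracy:", accuracy_score(y_true, y_pred))
print("Confusion Matrix:\n", confusion_matrix(y_true, y_pred))
print("Confusion Matrix Norm:\n", confusion_matrix(y_true, y_pred, normalize="true"))
print(classification_report(y_true, y_pred, digits=6))
```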

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a configuration sketch follows the list:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 12345
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
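
The hyperparameters above map directly onto `transformers.TrainingArguments`. A hedged reproduction sketch, assuming a binary sequence-classification head (consistent with the 2x2 confusion matrices above) and omitting the undocumented dataset, tokenization and metric code:

```python
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "clincolnoz/MoreSexistBERT"  # base model from the card metadata

tokenizer = AutoTokenizer.from_pretrained(base_model)
# num_labels=2 is an assumption (binary classification implied by the metrics).
model = AutoModelForSequenceClassification.from_pretrained(base_model, num_labels=2)

training_args = TrainingArguments(
    output_dir="final-lr2e-5-bs16-fp16-2",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=12345,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
    fp16=True,  # native AMP mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., compute_metrics=...  (not documented here)
)
# trainer.train() would launch fine-tuning once the datasets are supplied.
```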

### Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Macro | F1 Weighted | F1     | Accuracy | Confusion Matrix            | Confusion Matrix Norm                                   |
|--------------:|------:|-----:|----------------:|---------:|------------:|-------:|---------:|:----------------------------|:--------------------------------------------------------|
| 0.3196        | 1.0   | 1000 | 0.2973          | 0.8423   | 0.8871      | 0.7554 | 0.8902   | [[2883, 147], [292, 678]]   | [[0.95148515, 0.04851485], [0.30103093, 0.69896907]]    |
| 0.2447        | 2.0   | 2000 | 0.3277          | 0.8447   | 0.8872      | 0.7623 | 0.8885   | [[2839, 191], [255, 715]]   | [[0.9369637, 0.0630363], [0.2628866, 0.7371134]]        |
| 0.2037        | 3.0   | 3000 | 0.3337          | 0.8461   | 0.8868      | 0.7671 | 0.8868   | [[2801, 229], [224, 746]]   | [[0.92442244, 0.07557756], [0.23092784, 0.76907216]]    |

Per-class and averaged metrics at each epoch (from the per-epoch classification reports):

| Epoch | Class / avg  | Precision | Recall   | F1-score | Support |
|------:|:-------------|----------:|---------:|---------:|--------:|
| 1.0   | 0            | 0.908031  | 0.951485 | 0.929251 | 3030    |
| 1.0   | 1            | 0.821818  | 0.698969 | 0.755432 | 970     |
| 1.0   | macro avg    | 0.864925  | 0.825227 | 0.842341 | 4000    |
| 1.0   | weighted avg | 0.887125  | 0.890250 | 0.887100 | 4000    |
| 2.0   | 0            | 0.917582  | 0.936964 | 0.927172 | 3030    |
| 2.0   | 1            | 0.789183  | 0.737113 | 0.762260 | 970     |
| 2.0   | macro avg    | 0.853383  | 0.837039 | 0.844716 | 4000    |
| 2.0   | weighted avg | 0.886446  | 0.888500 | 0.887181 | 4000    |
| 3.0   | 0            | 0.925950  | 0.924422 | 0.925186 | 3030    |
| 3.0   | 1            | 0.765128  | 0.769072 | 0.767095 | 970     |
| 3.0   | macro avg    | 0.845539  | 0.846747 | 0.846140 | 4000    |
| 3.0   | weighted avg | 0.886951  | 0.886750 | 0.886849 | 4000    |

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.13.2