---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
  - generated_from_trainer
datasets:
  - hate_speech18
metrics:
  - accuracy
  - f1
  - recall
  - precision
model-index:
  - name: distilbert-base-uncased-finetuned_on_hata_dateset
    results:
      - task:
          name: Text Classification
          type: text-classification
        dataset:
          name: hate_speech18
          type: hate_speech18
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9178338001867413
          - name: F1
            type: f1
            value: 0.9154943774479662
          - name: Recall
            type: recall
            value: 0.9178338001867413
          - name: Precision
            type: precision
            value: 0.9137800286953446
---

distilbert-base-uncased-finetuned_on_hata_dateset

This model is a fine-tuned version of distilbert/distilbert-base-uncased on the hate_speech18 dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0451
  • Accuracy: 0.9178
  • F1: 0.9155
  • Recall: 0.9178
  • Precision: 0.9138
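
A minimal inference sketch using the Transformers pipeline API; the Hub repo id below is an assumption, so substitute the actual repository id or a local checkpoint path:

```python
from transformers import pipeline

# Assumed Hub repo id; replace with the actual repository id or a local checkpoint path.
model_id = "Esmail275/distilbert-base-uncased-finetuned_on_hata_dateset"

classifier = pipeline("text-classification", model=model_id)

# Returns a list of dicts such as [{"label": "LABEL_0", "score": 0.99}]; the mapping
# from labels to hate_speech18 classes depends on how labels were encoded at training time.
print(classifier("Example sentence to classify."))
```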

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 5
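
As a hedged sketch, these hyperparameters could be reproduced with the Trainer API roughly as follows. The dataset split, tokenization, and output directory are assumptions, since the card does not document them; the Adam betas and epsilon listed above match the Transformers defaults and are therefore not set explicitly.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert/distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# hate_speech18 ships only a "train" split; carving out an evaluation split with
# train_test_split is an assumption, since the actual split used is not documented.
dataset = load_dataset("hate_speech18", trust_remote_code=True)
dataset = dataset["train"].train_test_split(test_size=0.1, seed=42)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=dataset["train"].features["label"].num_classes,
)

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned_on_hata_dateset",
    learning_rate=2e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
    eval_strategy="epoch",  # assumption: per-epoch evaluation, matching the results table below
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,
)
trainer.train()
```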

Training results

Training Loss   Epoch   Step   Validation Loss   Accuracy   F1       Recall   Precision
0.3342          1.0     268    0.3774            0.8497     0.8702   0.8497   0.9131
0.2411          2.0     536    0.4330            0.9020     0.9097   0.9020   0.9237
0.1374          3.0     804    0.5690            0.8964     0.9050   0.8964   0.9206
0.0804          4.0     1072   1.0798            0.9188     0.9140   0.9188   0.9117
0.0428          5.0     1340   1.0451            0.9178     0.9155   0.9178   0.9138
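
In the table, recall equals accuracy at every epoch, which is what weighted averaging over the label classes produces. A hedged sketch of a compute_metrics function consistent with these numbers, using scikit-learn, could look like the following and be passed to the Trainer above via compute_metrics=:

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    # Assumption: weighted-average F1/recall/precision plus plain accuracy,
    # consistent with recall equalling accuracy in the table above.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1,
        "recall": recall,
        "precision": precision,
    }
```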

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1