
distilbert-emotion

Reupload [10/02/23]: The model has been retrained with identical hyperparameters, this time on a cleaner dataset free of certain scraping artifacts. It maintains the same accuracy and loss while generalizing better.

This model is a fine-tuned version of distilbert-base-uncased on the emotion balanced dataset. It achieves the following results on the evaluation set:

  • Loss: 0.1216
  • Accuracy: 0.9521

Model description

This emotion classifier was trained on 89,754 examples split into train, validation, and test sets, with the labels perfectly balanced within each split.
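For reference, here is a minimal sketch of how to verify the split sizes and label balance, assuming the dataset is available on the Hub (the dataset id below is inferred from the model name and may differ):

from collections import Counter
from datasets import load_dataset

# Hypothetical dataset id inferred from the model name
dataset = load_dataset("AdamCodd/emotion-balanced")

# Print the size and per-label counts of each split
for split_name, split in dataset.items():
    print(split_name, len(split), Counter(split["label"]))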

Intended uses & limitations

Usage:

from transformers import pipeline

# Create the pipeline
emotion_classifier = pipeline('text-classification', model='AdamCodd/distilbert-base-uncased-finetuned-emotion-balanced')

# Now you can use the pipeline to classify emotions
result = emotion_classifier("We are delighted that you will be coming to visit us. It will be so nice to have you here.")
print(result)
# [{'label': 'joy', 'score': 0.9983291029930115}]

This model struggles to accurately categorize negative sentences, as well as those containing sarcasm or irony. These limitations are largely attributable to DistilBERT's constrained semantic understanding. Although the model is generally proficient at emotion detection, it may miss the subtleties of complex emotional expression.
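When in doubt, you can ask the pipeline for the full score distribution rather than just the top label; a low top score is a useful signal that the input falls into one of these hard cases. The sentence below is only an illustration:

# Reusing the emotion_classifier pipeline from above
results = emotion_classifier(
    "Oh great, another Monday. I just can't wait.",
    top_k=None,  # return scores for all six labels instead of only the best one
)
print(results)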

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a setup sketch follows the list):

  • learning_rate: 3e-05
  • train_batch_size: 32
  • eval_batch_size: 64
  • seed: 1270
  • optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 150
  • num_epochs: 3
  • weight_decay: 0.01
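For illustration, a minimal sketch of how these hyperparameters map onto standard PyTorch/Transformers objects (the actual training used PyTorch Lightning, whose loop is not reproduced here; the step count below is a hypothetical placeholder):

import torch
from transformers import AutoModelForSequenceClassification, get_linear_schedule_with_warmup, set_seed

set_seed(1270)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=6
)

# AdamW with the betas, epsilon, and weight decay listed above
optimizer = torch.optim.AdamW(
    model.parameters(), lr=3e-5, betas=(0.9, 0.999), eps=1e-8, weight_decay=0.01
)

# Linear decay with 150 warmup steps; in practice num_training_steps is
# num_epochs * len(train_dataloader), so the value here is a placeholder
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=150, num_training_steps=6_000
)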

Training results

              precision    recall  f1-score   support

     sadness     0.9882    0.9485    0.9679      1496
         joy     0.9956    0.9057    0.9485      1496
        love     0.9256    0.9980    0.9604      1496
       anger     0.9628    0.9519    0.9573      1496
        fear     0.9348    0.9098    0.9221      1496
    surprise     0.9160    0.9987    0.9555      1496

    accuracy                         0.9521      8976
   macro avg     0.9538    0.9521    0.9520      8976
weighted avg     0.9538    0.9521    0.9520      8976

test_acc:     0.9520944952964783
test_loss:    0.121663898229599
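The per-class table above follows scikit-learn's classification_report layout. A minimal sketch of how such a report is produced (the label order is an assumption, and the tiny y_true/y_pred values are placeholders so the snippet runs standalone; in practice they come from running the model over the test split):

from sklearn.metrics import classification_report

label_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]

# Placeholder predictions; replace with real test-set labels and model outputs
y_true = [0, 1, 2, 3, 4, 5]
y_pred = [0, 1, 2, 3, 4, 5]

print(classification_report(y_true, y_pred, target_names=label_names, digits=4))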

Framework versions

  • Transformers 4.33.2
  • PyTorch Lightning 2.0.9
  • Tokenizers 0.13.3

If you want to support me, you can do so here.
