
minilm-finetuned-emotionclassification

This model is a fine-tuned version of microsoft/MiniLM-L12-H384-uncased on the GoEmotions dataset. It achieves the following results on the evaluation set:

  • Loss: 1.0554
  • F1 Score: 0.6732

Model description

The base model is microsoft/MiniLM-L12-H384-uncased, fine-tuned on the GoEmotions dataset available on Hugging Face.

With this model, you can classify emotions in English text data. The model predicts 10 basic emotions:

  1. anger 🤬
  2. love ❤️
  3. fear 😨
  4. joy 😀
  5. excitement 😄
  6. sadness 😭
  7. surprise 😲
  8. gratitude 😊
  9. curiosity 🤔
  10. caring

Intended uses & limitations

The model can be used to detect emotions in English text and documents, enabling contextual emotional analysis of those documents.

Training and evaluation data

The dataset used for training and evaluation is the GoEmotions dataset, from which we used the following 10 emotion labels:

{0:'sadness',1:'joy',2:'love',3:'anger',4:'fear',5:'surprise',6:'excitement',7:'gratitude',8:'curiosity',9:'caring'}
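
This mapping can be wired into the model configuration so that predictions surface label names rather than generic LABEL_0 to LABEL_9 ids. A minimal sketch for anyone reproducing the fine-tuning, assuming the standard AutoModelForSequenceClassification API; the variable names are illustrative:

from transformers import AutoModelForSequenceClassification

id2label = {0: 'sadness', 1: 'joy', 2: 'love', 3: 'anger', 4: 'fear',
            5: 'surprise', 6: 'excitement', 7: 'gratitude', 8: 'curiosity', 9: 'caring'}
label2id = {label: idx for idx, label in id2label.items()}

# Load the base checkpoint with a 10-way classification head and readable label names
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/MiniLM-L12-H384-uncased",
    num_labels=10,
    id2label=id2label,
    label2id=label2id,
)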

How to use the model

Here is how to use this model to extract emotions from a given text in PyTorch:

>>> from transformers import pipeline
>>> model_ckpt = "sid321axn/minilm-finetuned-emotionclassification"
>>> pipe = pipeline("text-classification", model=model_ckpt)
>>> pipe("I am really excited about second part of Brahmastra Movie")

[{'label': 'excitement', 'score': 0.7849715352058411}]
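
The pipeline returns only the top-scoring label by default. To inspect the full probability distribution over all 10 emotions, you can run the tokenizer and model directly; a minimal sketch using the standard PyTorch API:

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_ckpt = "sid321axn/minilm-finetuned-emotionclassification"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModelForSequenceClassification.from_pretrained(model_ckpt)

inputs = tokenizer("I am really excited about second part of Brahmastra Movie",
                   return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the 10 emotion classes, then report each label with its probability
probs = torch.softmax(logits, dim=-1)[0]
for idx, p in sorted(enumerate(probs.tolist()), key=lambda x: -x[1]):
    print(f"{model.config.id2label[idx]}: {p:.4f}")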

Training procedure

The training was done by following a Hugging Face fine-tuning tutorial video on YouTube.

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 2e-05
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
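
For reference, here is a minimal sketch of a TrainingArguments object matching these settings, assuming the Trainer API was used as in the tutorial; output_dir and evaluation_strategy are illustrative assumptions, and the Adam betas and epsilon listed above are the Trainer defaults:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="minilm-finetuned-emotionclassification",  # assumption: any local path works
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed precision
    evaluation_strategy="epoch",  # assumption: the results table reports one eval per epoch
)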

Training results

Training Loss | Epoch | Step | Validation Loss | F1 Score
1.1659        | 1.0   |  539 | 1.1419          | 0.6347
1.0719        | 2.0   | 1078 | 1.0789          | 0.6589
0.9893        | 3.0   | 1617 | 1.0537          | 0.6666
0.9296        | 4.0   | 2156 | 1.0366          | 0.6729
0.8763        | 5.0   | 2695 | 1.0359          | 0.6774
0.8385        | 6.0   | 3234 | 1.0484          | 0.6693
0.8085        | 7.0   | 3773 | 1.0478          | 0.6758
0.7842        | 8.0   | 4312 | 1.0488          | 0.6741
0.7608        | 9.0   | 4851 | 1.0538          | 0.6749
0.7438        | 10.0  | 5390 | 1.0554          | 0.6732
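
The card does not state which F1 averaging was used. A compute_metrics function along these lines, passed to the Trainer, could produce the per-epoch scores above; weighted averaging is an assumption, not confirmed by the card:

import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average="weighted" is an assumption; the card only reports "F1 Score"
    return {"f1": f1_score(labels, preds, average="weighted")}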

Framework versions

  • Transformers 4.24.0
  • PyTorch 1.12.1+cu113
  • Datasets 2.6.1
  • Tokenizers 0.13.2