language:
  - hi
  - en

datasets:
  - utkarsharora100/google_go_emotions_hindi_translated

metrics:
  - accuracy: 0.8085
  - precision: 0.7996
  - recall: 0.8085
  - f1: 0.7983

confusion_matrix (rows: true label, columns: predicted label, in the same class order):

| true \ predicted | anger | disgust | joy | surprise | neutral | sadness |
|---|---|---|---|---|---|---|
| anger    | 2 | 0 | 0 | 1 | 2 | 0 |
| disgust  | 0 | 1 | 0 | 0 | 0 | 0 |
| joy      | 0 | 0 | 6 | 0 | 0 | 0 |
| surprise | 0 | 0 | 0 | 3 | 1 | 0 |
| neutral  | 1 | 0 | 1 | 0 | 23 | 1 |
| sadness  | 0 | 0 | 0 | 0 | 2 | 3 |

classification_report:

| class | precision | recall | f1-score | support |
|---|---|---|---|---|
| anger | 0.67 | 0.40 | 0.50 | 5 |
| disgust | 1.00 | 1.00 | 1.00 | 1 |
| joy | 0.86 | 1.00 | 0.92 | 6 |
| surprise | 0.82 | 0.88 | 0.85 | 26 |
| neutral | 0.75 | 0.60 | 0.67 | 5 |
| sadness | 0.75 | 0.75 | 0.75 | 4 |
| weighted avg | 0.80 | 0.81 | 0.80 | 47 |
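
For reference, a minimal sketch of how the confusion matrix and per-class report above could be reproduced with scikit-learn; `y_true` and `y_pred` are illustrative placeholders for the test-split labels and model predictions, not variables from this repo:

```python
from sklearn.metrics import classification_report, confusion_matrix

# Class order used in the tables above
labels = ["anger", "disgust", "joy", "surprise", "neutral", "sadness"]

def report(y_true, y_pred):
    # y_true / y_pred: lists of class names for the held-out test split
    print(confusion_matrix(y_true, y_pred, labels=labels))
    print(classification_report(y_true, y_pred, labels=labels, zero_division=0))
```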

model_name: "vashuag/HindiEmotion"
base_model: "ai4bharat/indic-bert"
pipeline_tag: "text-classification"

tags:
  - emotion-detection
  - hindi
  - huggingface
  - text-classification

training:
  - epochs: 10
  - batch_size: 16
  - learning_rate: 1e-5
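
A minimal sketch of what a fine-tuning run with these hyperparameters could look like using the Hugging Face `Trainer`; the split and column names (`train`/`validation`, `text`/`label`) and `num_labels=6` are assumptions based on the classes listed above, not details taken from this card:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("utkarsharora100/google_go_emotions_hindi_translated")
tokenizer = AutoTokenizer.from_pretrained("ai4bharat/indic-bert")
model = AutoModelForSequenceClassification.from_pretrained(
    "ai4bharat/indic-bert", num_labels=6  # assumed: the six emotion classes listed above
)

def tokenize(batch):
    # "text" is an assumed column name for the Hindi sentence
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="hindi-emotion",
    num_train_epochs=10,             # epochs: 10
    per_device_train_batch_size=16,  # batch_size: 16
    learning_rate=1e-5,              # learning_rate: 1e-5
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],      # assumed split name
    eval_dataset=tokenized["validation"],  # assumed split name
)
trainer.train()
trainer.evaluate()
```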

resources:

summary:
  The model achieved its best performance at epoch 5, with an accuracy of 0.6997, an F1 score of 0.6750, a precision of 0.6761, a recall of 0.6997, and a ROC AUC of 0.8207. Performance remains stable across later epochs, with slight fluctuations in individual metrics but generally consistent results.
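
These per-epoch numbers (accuracy, weighted precision/recall/F1, and one-vs-rest ROC AUC) could be produced by a `compute_metrics` function along the lines of the hedged sketch below, which assumes logits over all six classes are available at evaluation time; it is not necessarily the exact function used for this model:

```python
import numpy as np
from scipy.special import softmax
from sklearn.metrics import accuracy_score, precision_recall_fscore_support, roc_auc_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    probs = softmax(logits, axis=-1)
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        # multi-class ROC AUC, one-vs-rest, over the softmax probabilities
        "roc_auc": roc_auc_score(labels, probs, multi_class="ovr"),
    }
```

Passed to the `Trainer` via its `compute_metrics` argument, this would report the same set of metrics whenever evaluation runs.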

usage:

```python
from transformers import pipeline

# Load the model pipeline
emotion_model = pipeline(
    "text-classification",
    model="vashuag/HindiEmotion",
    return_all_scores=True,
)

# Example prediction
text = "आप बहुत अच्छे हैं"  # Translation: "You are very good."
predictions = emotion_model(text)
print(predictions)
```
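
With `return_all_scores=True`, the pipeline returns one list of `{label, score}` dicts per input text, so the top emotion can be picked like this:

```python
# predictions[0] holds the scores for the single input text above
top = max(predictions[0], key=lambda d: d["score"])
print(f"{top['label']}: {top['score']:.4f}")
```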