Commit af436d9 by uvegesistvan (1 parent: 6f5506d): model card created
---
language:
- hu
- sk
- pl
- cs
tags:
- emotion-classification
- roberta
- fine-tuned
- multilingual
license: mit
datasets:
- custom
model-index:
- name: Multilingual Fine-tuned RoBERTa for Emotion Classification
  results:
  - task:
      type: text-classification
      name: Multilingual Emotion Classification
    dataset:
      name: Multilingual Custom Dataset (Hungarian, Slovak, Polish, Czech)
      type: text
    metrics:
    - name: Precision (Macro Avg)
      type: precision
      value: 0.86
    - name: Recall (Macro Avg)
      type: recall
      value: 0.86
    - name: F1 Score (Macro Avg)
      type: f1
      value: 0.86
    - name: Accuracy
      type: accuracy
      value: 0.84
---

# Multilingual Fine-tuned RoBERTa Model for Emotion Classification

## Model Description
This model is a multilingual fine-tuned version of [RoBERTa](https://huggingface.co/roberta-base), tailored for emotion classification in Hungarian, Slovak, Polish, and Czech.
It was trained to classify text into six emotional categories: **anger, fear, disgust, sadness, joy,** and **none of them**.

## Intended Use
This model is intended for classifying text into emotional categories across Hungarian, Slovak, Polish, and Czech.
It can be used in applications such as sentiment analysis, social media monitoring, and customer feedback analysis.
The model predicts the dominant emotion in a given text among the six predefined categories.

53
+ ## Metrics
54
+
55
+ | **Class** | **Precision (P)** | **Recall (R)** | **F1-Score (F1)** |
56
+ |-----------------|-------------------|----------------|-------------------|
57
+ | **anger** | 0.74 | 0.81 | 0.77 |
58
+ | **fear** | 0.98 | 0.98 | 0.98 |
59
+ | **disgust** | 0.94 | 0.95 | 0.95 |
60
+ | **sadness** | 0.87 | 0.87 | 0.87 |
61
+ | **joy** | 0.89 | 0.89 | 0.89 |
62
+ | **none of them**| 0.77 | 0.69 | 0.73 |
63
+ | **Accuracy** | | | **0.84** |
64
+ | **Macro Avg** | 0.86 | 0.86 | 0.86 |
65
+ | **Weighted Avg**| 0.84 | 0.84 | 0.84 |
66
+
67
+ ### Overall Performance
68
+ - **Accuracy:** 0.84
69
+ - **Macro Average Precision:** 0.86
70
+ - **Macro Average Recall:** 0.86
71
+ - **Macro Average F1-Score:** 0.86
72
+
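The macro averages above are the unweighted means of the per-class scores in the metrics table: each class counts equally, regardless of how many examples it has. A quick sanity check:

```python
# Per-class F1 scores, copied from the metrics table above.
f1_per_class = {
    "anger": 0.77,
    "fear": 0.98,
    "disgust": 0.95,
    "sadness": 0.87,
    "joy": 0.89,
    "none of them": 0.73,
}

# Macro average: plain mean over classes, ignoring class frequencies.
macro_f1 = sum(f1_per_class.values()) / len(f1_per_class)
print(macro_f1)  # ~0.865, which rounds to the reported 0.86
```

The weighted average (0.84) additionally needs the per-class support counts, which are not listed in the table.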
73
+ ### Class-wise Performance
74
+ The model demonstrates strong performance across different emotional categories, with particularly high precision, recall, and F1 scores in the **fear**, **disgust**, and **joy** categories.
75
+ The model performs moderately well in detecting **anger** and **none of them** categories, but still achieves adequate accuracy in these cases.
76
+
77
+ ## Limitations
78
+ - **Context Sensitivity:** The model may struggle with recognizing emotions that require deeper contextual understanding.
79
+ - **Class Imbalance:** The model's performance on the "none of them" category suggests that further training with more balanced datasets could improve accuracy.
80
+ - **Generalization:** The model's performance may vary depending on the text's domain, language style, and length, especially across different languages.
81
+
82
+ ## How to Use
83
+ You can use this model directly with the `transformers` library from Hugging Face. Below is an example of how to load and use the model:
84
+
85
+ ```python
86
+ from transformers import pipeline
87
+
88
+ # Load the fine-tuned model
89
+ classifier = pipeline("text-classification", model="visegradmedia-emotion/Emotion_RoBERTa_pooled_V4")
90
+
91
+ # Example usage
92
+ result = classifier("Nagyon örömtelinek érzem magam ma!")
93
+ print(result)
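
By default, the pipeline returns only the top-scoring label; calling it with `top_k=None` returns a score for every class. The helper below (a hypothetical post-processing sketch, not part of the model) picks the dominant emotion from such an output; the scores shown are illustrative placeholders, not actual model outputs:

```python
# Shape of a text-classification pipeline output when called with
# top_k=None: one {"label", "score"} dict per class. The scores below
# are illustrative placeholders, not real predictions.
example_output = [
    {"label": "joy", "score": 0.91},
    {"label": "sadness", "score": 0.04},
    {"label": "none of them", "score": 0.05},
]

def dominant_emotion(scores):
    """Return the label with the highest score."""
    return max(scores, key=lambda s: s["score"])["label"]

print(dominant_emotion(example_output))  # joy
```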