asahi417 committed
Commit 4bbfd02
1 Parent(s): bc97b32

model update

Files changed (3)
  1. README.md +88 -0
  2. best_run_hyperparameters.json +1 -0
  3. metric.json +1 -0
README.md ADDED
@@ -0,0 +1,88 @@
---
datasets:
- cardiffnlp/tweet_sentiment_multilingual
metrics:
- f1
- accuracy
model-index:
- name: cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual
  results:
  - task:
      type: text-classification
      name: Text Classification
    dataset:
      name: cardiffnlp/tweet_sentiment_multilingual
      type: all
      split: test
    metrics:
    - name: Micro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
      type: micro_f1_cardiffnlp/tweet_sentiment_multilingual/all
      value: 0.6931034482758621
    - name: Macro F1 (cardiffnlp/tweet_sentiment_multilingual/all)
      type: macro_f1_cardiffnlp/tweet_sentiment_multilingual/all
      value: 0.692628774202147
    - name: Accuracy (cardiffnlp/tweet_sentiment_multilingual/all)
      type: accuracy_cardiffnlp/tweet_sentiment_multilingual/all
      value: 0.6931034482758621
pipeline_tag: text-classification
widget:
- text: Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}
  example_title: "topic_classification 1"
- text: Yes, including Medicare and social security saving👍
  example_title: "sentiment 1"
- text: All two of them taste like ass.
  example_title: "offensive 1"
- text: If you wanna look like a badass, have drama on social media
  example_title: "irony 1"
- text: Whoever just unfollowed me you a bitch
  example_title: "hate 1"
- text: I love swimming for the same reason I love meditating...the feeling of weightlessness.
  example_title: "emotion 1"
- text: Beautiful sunset last night from the pontoon @TupperLakeNY
  example_title: "emoji 1"
---
# cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual

This model is a fine-tuned version of [cardiffnlp/twitter-xlm-roberta-base](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base) on the
[`cardiffnlp/tweet_sentiment_multilingual (all)`](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset
via [`tweetnlp`](https://github.com/cardiffnlp/tweetnlp).
The model is trained on the `train` split, and hyperparameters are tuned on the `validation` split.

The following metrics are achieved on the `test` split ([link](https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual/raw/main/metric.json)):

- F1 (micro): 0.6931034482758621
- F1 (macro): 0.692628774202147
- Accuracy: 0.6931034482758621

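The reported scores can be recomputed from per-example predictions with any standard metric implementation; below is a minimal sketch using `scikit-learn`, where the gold and predicted labels are illustrative placeholders rather than actual outputs of this model.

```python
# Minimal sketch: the three metrics reported above, computed with scikit-learn.
# gold/pred are illustrative placeholders, not real predictions from this model.
from sklearn.metrics import accuracy_score, f1_score

gold = [0, 1, 2, 2, 1]  # hypothetical gold sentiment labels
pred = [0, 1, 2, 1, 1]  # hypothetical model predictions

print("F1 (micro):", f1_score(gold, pred, average="micro"))
print("F1 (macro):", f1_score(gold, pred, average="macro"))
print("Accuracy:", accuracy_score(gold, pred))
```

Note that for single-label classification, micro-averaged F1 equals accuracy, which is why the two reported values coincide.
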
### Usage
Install tweetnlp via pip.
```shell
pip install tweetnlp
```
Load the model in Python.
```python
import tweetnlp
model = tweetnlp.Classifier("cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual", max_length=128)
model.predict('Get the all-analog Classic Vinyl Edition of "Takin Off" Album from {@herbiehancock@} via {@bluenoterecords@} link below {{URL}}')
```

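The checkpoint can also be loaded without tweetnlp; the snippet below is a minimal sketch using the standard Hugging Face `transformers` text-classification pipeline and is not part of the original card.

```python
# Sketch: running the same checkpoint through the standard transformers pipeline.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-xlm-roberta-base-sentiment-multilingual",
)
print(classifier("Yes, including Medicare and social security saving👍"))
```
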
### Reference

```bibtex
@inproceedings{dimosthenis-etal-2022-twitter,
    title = "{T}witter {T}opic {C}lassification",
    author = "Antypas, Dimosthenis  and
      Ushio, Asahi  and
      Camacho-Collados, Jose  and
      Neves, Leonardo  and
      Silva, Vitor  and
      Barbieri, Francesco",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics"
}
```

best_run_hyperparameters.json ADDED
@@ -0,0 +1 @@
{"learning_rate": 5.61151641533451e-06, "num_train_epochs": 5, "per_device_train_batch_size": 32}
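These values are the selected fine-tuning hyperparameters; the snippet below is a minimal sketch of plugging them into a `transformers` `TrainingArguments` object, where the output directory and every omitted setting are illustrative assumptions rather than part of this repository.

```python
# Sketch: mapping best_run_hyperparameters.json onto TrainingArguments.
# output_dir and all omitted settings are illustrative assumptions.
import json

from transformers import TrainingArguments

with open("best_run_hyperparameters.json") as f:
    best = json.load(f)

training_args = TrainingArguments(
    output_dir="./sentiment-multilingual",  # hypothetical path
    learning_rate=best["learning_rate"],
    num_train_epochs=best["num_train_epochs"],
    per_device_train_batch_size=best["per_device_train_batch_size"],
)
```
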
metric.json ADDED
@@ -0,0 +1 @@
{"eval_loss": 0.8723608255386353, "eval_f1": 0.6931034482758621, "eval_f1_macro": 0.692628774202147, "eval_accuracy": 0.6931034482758621, "eval_runtime": 18.2192, "eval_samples_per_second": 382.016, "eval_steps_per_second": 47.752}