{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:15.944100Z" }, "title": "Nearest neighbour approaches for Emotion Detection in Tweets", "authors": [ { "first": "Olha", "middle": [], "last": "Kaminska", "suffix": "", "affiliation": { "laboratory": "", "institution": "Computer Science and Statistics Ghent University", "location": {} }, "email": "" }, { "first": "Chris", "middle": [], "last": "Cornelis", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Veronique", "middle": [], "last": "Hoste", "suffix": "", "affiliation": { "laboratory": "LT3 Language and Translation Technology Team Ghent University", "institution": "", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Emotion detection is an important task that can be applied to social media data to discover new knowledge. While the use of deep learning methods for this task has been prevalent, they are black-box models, making their decisions hard to interpret for a human operator. Therefore, in this paper, we propose an approach using weighted k Nearest Neighbours (kNN), a simple, easy to implement, and explainable machine learning model. These qualities can help to enhance results' reliability and guide error analysis. In particular, we apply the weighted kNN model to the shared emotion detection task in tweets from SemEval-2018. Tweets are represented using different text embedding methods and emotion lexicon vocabulary scores, and classification is done by an ensemble of weighted kNN models. Our best approaches obtain results competitive with state-of-the-art solutions and open up a promising alternative path to neural network methods.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Emotion detection is an important task that can be applied to social media data to discover new knowledge. 
While the use of deep learning methods for this task has been prevalent, they are black-box models, making their decisions hard to interpret for a human operator. Therefore, in this paper, we propose an approach using weighted k Nearest Neighbours (kNN), a simple, easy to implement, and explainable machine learning model. These qualities can help to enhance results' reliability and guide error analysis. In particular, we apply the weighted kNN model to the shared emotion detection task in tweets from SemEval-2018. Tweets are represented using different text embedding methods and emotion lexicon vocabulary scores, and classification is done by an ensemble of weighted kNN models. Our best approaches obtain results competitive with state-of-the-art solutions and open up a promising alternative path to neural network methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "In this paper, we consider SemEval-2018 Task 1 EI-oc: Affect in Tweets for English 1 (Mohammad et al., 2018) . This is a classification problem in which data instances are raw tweets, labeled with scores expressing how much each of the four considered emotions (anger, sadness, joy, and fear) is present.", "cite_spans": [ { "start": 85, "end": 108, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our goal is to implement the weighted k Nearest Neighbor (wkNN) algorithm to detect emotions in tweets.
In doing so, we consider different tweet embedding methods and combine them with various emotional lexicons, which provide an emotional score for each word.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The motivation for using wkNN is to show the potential of a simple, interpretable machine learning approach compared to black-box techniques based on more complex models like neural networks (NNs). In contrast to the latter, wkNN's predictions for a test sample can be traced back easily to the training samples (the nearest neighbours) that triggered this decision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We note that we still use NN-based methods for obtaining tweet embeddings. One could therefore argue that our method is not fully explainable; however, we feel that it is less important to understand how tweets are initially represented in an n-dimensional space, than to explain how they are used in making predictions for nearby instances.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of this paper is organized as follows: in Section 2 we discuss related work, mainly focusing on the winning approaches of SemEval-2018 Task 1. In Section 3, we describe the methodology behind our solution, including data cleaning, tweet representations through word embeddings, lexicon vocabularies, and their combinations; our proposed ensemble method for classification; and finally, evaluation measures. In Section 4, we report the observed performance on training and development data for the different setups of our proposal, while Section 5 lists the results of the best approach on the test data and compares them to the competition results.
In Section 6, we examine some of the test samples with correct and wrong predictions to see how we can use our model's interpretability to explain the obtained results. Finally, in Section 7, we discuss our results and consider possible ways to improve them.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "First, we briefly recall the most successful proposals 2 to the SemEval-2018 task. The winning approach (Duppada et al., 2018) uses tweet embedding vectors in ensembles of XGBoost and Random Forest classification models. The runners-up (Gee and Wang, 2018) perform transfer learning with Long Short Term Memory (LSTM) neural networks. The third-place contestants (Rozental and Fleischer, 2018) train an ensemble of a complex model consisting of Gated Recurrent Units (GRUs) using a convolutional neural network (CNN) as an attention mechanism.", "cite_spans": [ { "start": 104, "end": 126, "text": "(Duppada et al., 2018)", "ref_id": "BIBREF7" }, { "start": 236, "end": 256, "text": "(Gee and Wang, 2018)", "ref_id": "BIBREF9" }, { "start": 363, "end": 393, "text": "(Rozental and Fleischer, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "It is clear that the leaderboard is dominated by solutions that are neither simple nor interpretable. This comes as no surprise, given that the effectiveness of a solution is evaluated only using the Pearson Correlation Coefficient (see formula (3) in Section 3.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In general, machine learning models in the Natural Language Processing (NLP) field rarely explain their predicted labels. This inspires the need for explainable models, which concentrate on interpreting outputs and the connection between inputs and outputs. For example, Liu et al.
(2019) present an explainable classification approach that solves NLP tasks with comparable accuracy to neural networks and also generates explanations for its solutions.", "cite_spans": [ { "start": 267, "end": 284, "text": "Liu et al. (2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Recently, Danilevsky et al. (2020) presented an overview of explainable methods for NLP tasks. Apart from focusing on explanations of model predictions, they also discuss the most important techniques to generate and visualize explanations. The paper also discusses evaluation techniques to measure the quality of the obtained explanations, which could be useful in future work.", "cite_spans": [ { "start": 10, "end": 34, "text": "Danilevsky et al. (2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this paper, we consider one of the simplest explainable models: the kNN method. In the context of NLP, kNN has recently been applied by (Fatema Rajani et al., 2020) as a backoff method for classifiers based on BERT and RoBERTa (see Section 3.2). In particular, when the latter NN methods are less confident about their predictions, the kNN solution is used instead. 
In this paper, we will only use such NN approaches at the data representation level and rely on weighted kNN only during classification.", "cite_spans": [ { "start": 139, "end": 167, "text": "(Fatema Rajani et al., 2020)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "In this section, we describe the different ingredients of our approach, more precisely, data preprocessing, embedding methods, emotional lexicon vocabularies, classification, and evaluation methods.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "We focus on the emotion intensity ordinal classification task (EI-oc) (Mohammad et al., 2018) . Given each of the four considered emotions (anger, fear, joy, sadness), the task is to classify a tweet in English into one of four ordinal classes of emotion intensity (0: no emotion can be inferred, 1: low amount of emotion can be inferred, 2: moderate amount of emotion can be inferred, 3: high amount of emotion can be inferred) which best represents the mental state of the tweeter.", "cite_spans": [ { "start": 70, "end": 93, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Separate training, development, and test datasets were provided for each emotion. To train the classification model, we merge the training and development datasets to evaluate our results with the cross-validation method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methodology", "sec_num": "3" }, { "text": "Before starting the embedding process, we can clean tweets in several ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data cleaning", "sec_num": "3.1" }, { "text": "\u2022 General preprocessing. First, we delete account tags (starting with @ ), newline symbols ('\\n'), extra white spaces, all punctuation marks, and numbers. 
Next, we replace '&' with the word 'and' and replace emojis with textual descriptions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data cleaning", "sec_num": "3.1" }, { "text": "We save hashtags as a potential source of useful information (Mohammad and Kiritchenko, 2015) but delete # symbols.", "cite_spans": [ { "start": 61, "end": 93, "text": "(Mohammad and Kiritchenko, 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Data cleaning", "sec_num": "3.1" }, { "text": "We do not delete emojis because, following the observations from Wolny (2016), using emoji symbols could significantly improve precision in identifying various types of emotions. In the source data, emojis appear in two forms: as combinations of punctuation marks and/or letters (emoticons), and as small pictures encoded in Unicode. Emojis of the first type are replaced with their descriptions, taken from the list of emoticons on Wikipedia 3 . Emojis of the second type are transformed using the Python package \"emoji\" 4 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data cleaning", "sec_num": "3.1" }, { "text": "\u2022 Stop-word removal: for this process, the list of stop-words from the NLTK package 5 is used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data cleaning", "sec_num": "3.1" }, { "text": "We do not apply preprocessing or stop-word removal a priori, but rather examine whether they improve the classification during the experimental stage.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data cleaning", "sec_num": "3.1" }, { "text": "To perform classification, each tweet is represented by a vector or set of vectors, using the following word embedding techniques:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "\u2022 Pre-trained Word2Vec from the Gensim package 6 .
This model includes 300-dimensional word vectors for a vocabulary with 3 million words and phrases trained on a Google News dataset. It is included here because of its popularity in NLP tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "\u2022 DeepMoji 7 is a state-of-the-art sentiment embedding model, pre-trained on millions of tweets with emojis to recognize emotions and sarcasm. We used its PyTorch implementation by Huggingface 8 , which provides a 2304-dimensional embedding for each sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "\u2022 The Universal Sentence Encoder (USE) (Cer et al., 2018) is a sentence-level embedding approach developed by the TensorFlow team 9 . It provides a 512-dimensional vector for a sentence or even a whole paragraph that can be used for different tasks such as text classification, sentence similarity, etc. USE was trained with a deep averaging network (DAN) encoder on several data sources.", "cite_spans": [ { "start": 39, "end": 57, "text": "(Cer et al., 2018)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "The model is available in two variants: one trained with a DAN encoder and one with a Transformer encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "After preliminary experiments, we chose the Transformer variant for further experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "\u2022 Bidirectional Encoder Representations from Transformers (BERT) by Devlin et al.
(2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "5 http://www.nltk.org/nltk_data/ 6 https://radimrehurek.com/gensim/ models/word2vec.html 7 https://deepmoji.mit.edu/ 8 https://github.com/huggingface/ torchMoji 9 https://www.tensorflow.org/hub/ tutorials/semantic_similarity_with_tf_ hub_universal_encoder The script we used 10 was developed by the Google AI Language Team and extracts precomputed feature vectors from a PyTorch BERT model. The output vector for a word has 768 features. Words that are not in the BERT vocabulary are split into sub-word tokens (for example, the word \"tokens\" is represented as \"tok\", \"##en\", \"##s\"), and a vector is created for each token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "\u2022 Sentence-BERT (SBERT) is a modified and tuned BERT model presented in Reimers and Gurevych (2019) . It uses so-called siamese and triplet network structures, or a \"twin network\", that processes two sentences in the same way simultaneously. SBERT provides embeddings at a sentence level with the same size as the original BERT.", "cite_spans": [ { "start": 72, "end": 99, "text": "Reimers and Gurevych (2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "\u2022 Twitter-roBERTa-based model for Emotion Recognition, one of the seven fine-tuned roBERTa models presented by Barbieri et al. (2020) . Each described model was trained for a specific task and provides token-level embeddings similar to BERT. The model that we consider was trained for the emotion detection task (E-c) using a different collection of tweets from the same authors of SemEval-2018 Task 1 (Mohammad et al., 2018) , in which the emotions anger, joy, sadness, and optimism are used.", "cite_spans": [ { "start": 111, "end": 133, "text": "Barbieri et al.
(2020)", "ref_id": "BIBREF0" }, { "start": 412, "end": 435, "text": "(Mohammad et al., 2018)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "Sentence-level embeddings are applied to each tweet as a whole, while for word (or token) level embeddings, we represent a tweet vector as the mean of its words' (tokens') vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Tweet embedding", "sec_num": "3.2" }, { "text": "As an additional source of information to complement tweet embeddings, we also consider lexicon scores. Emotional lexicons are vocabularies that provide emotion intensity scores for individual words. In our experiments, we use the following English lexicons:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "\u2022 Valence Arousal Dominance (NRC VAD) lexicon (20,007 words) (Mohammad, 2018a) -each word has a score (float number between 0 and 1) for Valence, Arousal, and Dominance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "\u2022 Emotional Lexicon (EMOLEX) (14,182 words) (Mohammad and Turney, 2013) -each word has ten scores (0 or 1), one per emotion: anger, anticipation, disgust, fear, joy, negative, positive, sadness, surprise, and trust.", "cite_spans": [ { "start": 52, "end": 79, "text": "(Mohammad and Turney, 2013)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "\u2022 Affect Intensity (AI) lexicon (nearly 6,000 terms) (Mohammad, 2018b) -each word has four scores (float number from 0 to 1), one per emotion: anger, fear, sadness, and joy.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "\u2022 Affective norms for English words
(ANEW) lexicon (1,034 words) (Bradley and Lang, 1999) -each word has six scores (float number between 0 and 10): mean and SD for Valence, Arousal, and Dominance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "\u2022 Warriner's lexicon (13,915 lemmas) (Warriner et al., 2013) -each word has 63 scores (float number between 0 and 1,000), reflecting different statistical characteristics of Valence, Arousal, and Dominance.", "cite_spans": [ { "start": 37, "end": 60, "text": "(Warriner et al., 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "We consider the following two methods of combining word embeddings with lexicon vocabulary scores:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "\u2022 During the embedding process, each word's lexicon scores are appended to the end of its word vector. The size of the obtained vector is the word embedding size plus the number of lexicon scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "\u2022 We construct a separate feature vector for each lexicon. The corresponding models are then combined with the embedding vectors in an ensemble classifier, as described in Section 3.4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "We perform experiments for all emotion datasets with one or several lexicons. The results are presented in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotional lexicon vocabularies", "sec_num": "3.3" }, { "text": "In this subsection, the weighted k Nearest Neighbors (wkNN) classification method (Dudani, 1976) and its similarity relation are described.
The wkNN is a refinement of the regular kNN, where distances to the neighbors are taken into account as weights. This approach assigns a larger weight to the closest instances and a smaller weight to those further away. The wkNN has two main parameters: the used metric or similarity relation and the number k of considered neighbours.", "cite_spans": [ { "start": 82, "end": 96, "text": "(Dudani, 1976)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "To choose an appropriate similarity relation, we follow Huang (2008) , who compared metrics for the document clustering task. The cosine metric was shown to be one of the best:", "cite_spans": [ { "start": 56, "end": 68, "text": "Huang (2008)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos(A, B) = \\frac{A \\cdot B}{\\|A\\| \\times \\|B\\|},", "eq_num": "(1)" } ], "section": "Classification methods", "sec_num": "3.4" }, { "text": "where A and B denote elements from the same vector space, A \u2022 B is their scalar product, and || \u2022 || denotes the vector norm. Values provided by this measure are between -1 (perfectly dissimilar vectors) and 1 (perfectly similar vectors).
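As a quick illustration, the cosine measure of formula (1) can be computed as follows (a minimal sketch with NumPy; the function name is ours, not from the original code):

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Formula (1): scalar product divided by the product of the vector norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

v = np.array([1.0, 2.0, 3.0])
print(cosine(v, 2 * v))  # vectors with the same direction: value close to 1
print(cosine(v, -v))     # opposite directions: value close to -1
```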
In order to obtain a [0,1]-similarity relation instead of a metric, we use the following formula:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "cos\\_similarity(A, B) = \\frac{1 + cos(A, B)}{2}.", "eq_num": "(2)" } ], "section": "Classification methods", "sec_num": "3.4" }, { "text": "Formula (2) is used as the primary similarity relation throughout this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "Regarding the parameter k, there is no one-fits-all rule to determine it. As a general \"rule of thumb\", we can put k = \u221aN/2, where N is the number of samples in the dataset. However, to examine the impact of k, we will use various numbers of neighbors for each emotion dataset for the best-performing methods in our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "We use wkNN both as a standalone method and inside a classification ensemble. For the latter, a separate model is trained for each information source (vectors containing tweet embeddings, lexicon scores, or their combination).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "For each test sample, the models' outputs are combined using the standard average as a voting function, i.e., each model gets the same weight in this vote. The architecture of our approach is illustrated in Fig.
1.", "cite_spans": [], "ref_spans": [ { "start": 208, "end": 214, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "Note that in this way, the predictions will be float values between 0 and 3, rather than integer labels (0, 1, 2 or 3); however, at the training stage, this does not represent a problem. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classification methods", "sec_num": "3.4" }, { "text": "To evaluate the performance of the implemented methods, 5-fold cross-validation is used, using as evaluation measure the Pearson Correlation Coefficient (PCC), as was also done for the competition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation method", "sec_num": "3.5" }, { "text": "Given the vectors of predicted values y and correct values x, the PCC measure provides a value between \u22121 (a total negative linear correlation) and 1 (a total positive linear correlation), where 0 represents no linear correlation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation method", "sec_num": "3.5" }, { "text": "Hence, the best model should provide the highest value of PCC:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation method", "sec_num": "3.5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "PCC = \\frac{\\sum_i (x_i - \\bar{x})(y_i - \\bar{y})}{\\sqrt{\\sum_i (x_i - \\bar{x})^2} \\sqrt{\\sum_i (y_i - \\bar{y})^2}}.", "eq_num": "(3)" } ], "section": "Evaluation method", "sec_num": "3.5" }, { "text": "Here x i and y i refer to the i-th components of the vectors x and y, while x\u0304 and \u0233 represent their means.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation method", "sec_num": "3.5" }, { "text": "The correlation scores across all four emotions were averaged by the competition organizers to determine the bottom-line
metric by which the submissions were ranked.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation method", "sec_num": "3.5" }, { "text": "Our experiments on the training and development data are designed as follows: first, we compare the individual tweet embedding methods (Section 4.1) and examine which setup gives the best results. Then, in Section 4.2, we also involve the emotional lexicons, either independently, appended to tweet embedding vectors, or in ensembles.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "4" }, { "text": "In this subsection, we describe the process of detecting the best data cleaning method and the best k parameter value for each emotion dataset and each embedding. The results are shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 188, "end": 195, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Detecting the best setup for embeddings", "sec_num": "4.1" }, { "text": "In the first step, for each emotion and each embedding, we calculate the PCC for different versions of the dataset: original raw tweets, preprocessed tweets, and preprocessed tweets with stop-words removed. To verify which approach works better, we perform statistical analysis using the two-sided t-test in Python's stats package.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the best setup for embeddings", "sec_num": "4.1" }, { "text": "In the second step, we repeat the experiments for the best preprocessing setups with different numbers of neighbours (5, 7, 9, ..., 23) to detect the most appropriate k value.
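This k-search loop can be sketched as follows (not the authors' code; we assume scikit-learn's distance-weighted kNN with a cosine metric and SciPy's pearsonr for the PCC, and note that scikit-learn's inverse-distance weighting is only one possible wkNN weighting scheme):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

def best_k(X, y, candidates=range(5, 24, 2)):
    """Return the k in `candidates` maximizing the 5-fold cross-validated PCC."""
    scores = {}
    for k in candidates:
        # weights="distance" gives closer neighbours a larger vote (wkNN-style).
        model = KNeighborsClassifier(n_neighbors=k, weights="distance", metric="cosine")
        preds = cross_val_predict(model, X, y, cv=5)
        scores[k] = pearsonr(y, preds)[0]
    return max(scores, key=scores.get), scores
```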
These values and the resulting PCC for the optimal setup are shown in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 246, "end": 253, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Detecting the best setup for embeddings", "sec_num": "4.1" }, { "text": "We can observe that stop-word cleaning only improved results for the Word2Vec embedding and that for the roBERTa-based model, it makes sense to use the raw tweets.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the best setup for embeddings", "sec_num": "4.1" }, { "text": "Also, among the different embeddings, roBERTa obtains the highest results on three out of four datasets, while for Fear, the best result is obtained by DeepMoji (with roBERTa a close second). This can be explained by the fact that these two embeddings are explicitly trained on emotion data. The three remaining embeddings lag considerably, with the notable exception of USE for Fear. We conjecture that this may have to do with the imbalanced nature of the fear dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the best setup for embeddings", "sec_num": "4.1" }, { "text": "To measure the balance of the different datasets, we calculated the Imbalance Ratio (IR) of the combined train and development data, where IR is equal to the ratio of the sizes of the largest and smallest classes in the dataset. A value close to 1 represents balanced data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the best setup for embeddings", "sec_num": "4.1" }, { "text": "With IR values of 1.677 and 1.47, respectively, the anger and joy datasets can be considered fairly balanced. 
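Under this definition, the IR computation is straightforward (a sketch with illustrative labels, not the actual competition data):

```python
from collections import Counter

def imbalance_ratio(labels):
    # IR = size of the largest class divided by the size of the smallest class.
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values())

print(imbalance_ratio([0, 0, 0, 0, 1, 1, 2, 2]))  # 4 / 2 = 2.0
```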
While the imbalance is somewhat higher for the sadness dataset (IR = 2.2), the fear dataset is by far the most imbalanced, with an IR value of 8.04.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Detecting the best setup for embeddings", "sec_num": "4.1" }, { "text": "In this subsection, we discuss our experiments joining the previously identified best setups of the embedding methods with all emotional lexicons, using the different combination strategies outlined in Section 3.3.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Combining embeddings and lexicons", "sec_num": "4.2" }, { "text": "We first evaluate models based purely on lexicons. The goal here is to check the intrinsic classification strength of each lexicon and of the lexicon-based approach as a whole.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon-only based models", "sec_num": "4.2.1" }, { "text": "A lexicon works as a dictionary: if a word is present in the lexicon, it receives its particular score; otherwise, it is assigned a score of zero. For lexicons with several scores per word, we take all of them. To obtain the lexicon score for a full tweet, as usual, we compute the mean of its words' scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon-only based models", "sec_num": "4.2.1" }, { "text": "For each of the five lexicons, the output is saved as a separate vector. The sixth vector is constructed by combining all lexicons' scores and has a total length of 86 values (the sum of the number of scores for all five lexicons). For each of these vectors, a weighted kNN classification model is applied.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Lexicon-only based models", "sec_num": "4.2.1" }, { "text": "Initially, we use the same number of neighbours for all datasets, computed using the rule of thumb k = \u221aN/2, where N is the size of the dataset.
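As a sketch, this rule of thumb can be implemented as follows (the rounding up to an odd value of k is our choice, not stated in the paper):

```python
import math

def rule_of_thumb_k(n: int) -> int:
    # k = sqrt(N) / 2, rounded up; an odd k is a common kNN convention
    # (our assumption) to reduce the chance of tied votes.
    k = math.ceil(math.sqrt(n) / 2)
    return k if k % 2 == 1 else k + 1

print(rule_of_thumb_k(2000))  # 23
```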
The dataset sizes are mostly close to 2,000 instances, so k = 23 is used. Results are presented in Table 2 . We can observe that the AI lexicon is the best performing lexicon, showing the highest results for two out of the four datasets.", "cite_spans": [], "ref_spans": [ { "start": 244, "end": 251, "text": "Table 2", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Lexicon-only based models", "sec_num": "4.2.1" }, { "text": "Then, for each emotion dataset and its best performing lexicon, the best k value is detected. These results are presented in Table 3 . As we can see, different values of k perform better for different datasets.", "cite_spans": [], "ref_spans": [ { "start": 125, "end": 132, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Lexicon-only based models", "sec_num": "4.2.1" }, { "text": "In this approach, embedding and lexicon scores are normalized to values between 0 and 1 to account for differences in ranges. To obtain the vector of a tweet, we take the average of all vectors of its words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models appending lexicon scores to word embeddings", "sec_num": "4.2.2" }, { "text": "The results of these combination experiments are provided in Table 4 . To check the appending strategy's added value, Table 4 also presents the PCC score previously obtained without any lexicon for each embedding method.", "cite_spans": [], "ref_spans": [ { "start": 61, "end": 68, "text": "Table 4", "ref_id": "TABREF3" }, { "start": 118, "end": 125, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Models appending lexicon scores to word embeddings", "sec_num": "4.2.2" }, { "text": "As can be seen, for half of the experiments, the use of lexicons does not improve the PCC value.
The roBERTa-based model is the only model that seems to benefit from the added lexicon information for each emotion dataset, although the improvement is marginal.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Models appending lexicon scores to word embeddings", "sec_num": "4.2.2" }, { "text": "For three out of four datasets, the EMOLEX lexicon worked best for the roBERTa-based model. For the other embedding models, the setup without any lexicon mostly performed best, and when certain lexicons did improve results, they differed from dataset to dataset, with no noticeable pattern. If we compare the best lexicons from Table 4 with the best ones from Table 3 , we can see that they are different for each dataset.", "cite_spans": [], "ref_spans": [ { "start": 314, "end": 354, "text": "Table 4 with the best ones from Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Models appending lexicon scores to word embeddings", "sec_num": "4.2.2" }, { "text": "The first ensemble that we tried combines the five classifiers based on the embedding models from Section 4.1, i.e., the roBERTa-based, DeepMoji, USE, SBERT, and Word2Vec embeddings. We train the weighted kNN model for each vector separately, with its best k value and tweet preprocessing pipeline. Results are listed in the first line of Table 5 and indicate that these five embeddings already provide a good baseline, improving the best results from Table 1 by 8% on average. Especially for Fear, the improvement is notable (18% up).", "cite_spans": [], "ref_spans": [ { "start": 335, "end": 343, "text": "Table 5", "ref_id": "TABREF4" }, { "start": 449, "end": 456, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Ensembles", "sec_num": "4.2.3" }, { "text": "Next, we consider the inclusion of lexicons into the ensembles. In a first setup, for each dataset, we take the best-performing lexicon from Table 3 and add it as a separate classifier to the baseline ensemble. 
For comparison, we also consider a setup where all five lexicons and their combination are added as six more classifiers, to check how each of them influences the output scores. The obtained results, shown in the second and third lines of Table 5 , illustrate that, in general, the lexicons are unable to improve the baseline and that adding all lexicons lowers the scores considerably.", "cite_spans": [], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 455, "end": 463, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ensembles", "sec_num": "4.2.3" }, { "text": "Given that the roBERTa-based method performs the best among all embeddings (Table 1) , and that it is the only one that benefits from the lexicon appending strategy (Table 4) , we also consider two additional setups: one that extends the baseline with the lexicon-appended roBERTa classifier, and another that additionally adds the best lexicon to that ensemble.", "cite_spans": [], "ref_spans": [ { "start": 75, "end": 84, "text": "(Table 1)", "ref_id": "TABREF0" }, { "start": 165, "end": 174, "text": "(Table 4)", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Ensembles", "sec_num": "4.2.3" }, { "text": "The results of these last two approaches are presented in the second half of Table 5 . We can see that these adjustments improve the scores noticeably. For three out of four emotion datasets, the last setup performs best, while for Sadness, the results are almost equal. Therefore, we consider this last ensemble as the best solution.", "cite_spans": [], "ref_spans": [ { "start": 68, "end": 75, "text": "Table 5", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Ensembles", "sec_num": "4.2.3" }, { "text": "To determine the generalization strength of the best approach obtained in Section 4, we evaluated it on the test data by submitting predicted labels in the required format to the competition page 11 . 
PCC scores were calculated for each emotion dataset, and the obtained results were averaged. Because of the mean voting function in our model's ensemble, the predicted labels are floating-point values. Therefore, to match the format requested on the competition page, we rounded them to the nearest integer label before submitting the results. The obtained PCC scores are shown in Table 6 , together with the results obtained on the training and development data for comparison. As expected, the average PCC for the test data drops several points compared to the training and development data, but, in general, our proposal appears to generalize well to new data.", "cite_spans": [], "ref_spans": [ { "start": 571, "end": 578, "text": "Table 6", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Results on the test data", "sec_num": "5" }, { "text": "We also mention that the top three contestants of the SemEval 2018 competition obtained a PCC equal to 0.695, 0.653 and 0.646, respectively 12 , and that our proposal would therefore be just behind them, in fourth position.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results on the test data", "sec_num": "5" }, { "text": "To illustrate our approach's explainability, in this section we explore some correctly and wrongly predicted test samples.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "As an example of a correct prediction, we can take a look at an anger test tweet: \"I know you mean well, but I'm offended. Prick.\" with real anger class \"2\". Our best model predicted label 2.4, which was rounded to 2, so our result is correct. To analyze how this label was obtained, we look at the predictions of all models separately. They are shown in Table 7 (sample (a)), with the number of neighbors of each class from the training data selected by each model. 
We can see that the roBERTa-based model was the most accurate, while most of the others were also reasonably close.", "cite_spans": [], "ref_spans": [ { "start": 354, "end": 361, "text": "Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "Next, we also examined the neighbours chosen by the models and their classes, especially those selected by several models. To find some patterns, we took the intersection of the neighbours closest to the test instance, as chosen by the ensemble's models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "We should mention that those models are based on different embeddings, which may locate tweets differently in n-dimensional space. Nevertheless, one tweet with class \"2\" from the training data was chosen by 4 models out of 7, and five more tweets (four of them with class \"2\" and one with class \"1\") by 3 models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "A closer examination of those tweets revealed that all of them contain the word \"offended\". From this, we could conclude that this word has a high emotional intensity that influences the sentence's tone. 
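The per-model votes and the ensemble's mean-voting step behind a label such as 2.4 can be sketched as follows; the inverse-distance weighting is an illustrative stand-in (the paper's wkNN follows Dudani's distance-weighted rule), and the per-model scores are hypothetical, not the paper's exact values:

```python
import numpy as np

def wknn_score(distances, labels):
    """Weighted kNN score: neighbours vote with weight 1/d (illustrative scheme)."""
    weights = 1.0 / np.maximum(np.asarray(distances, dtype=float), 1e-12)
    return float(np.sum(weights * np.asarray(labels, dtype=float)) / np.sum(weights))

def ensemble_predict(model_scores):
    """Mean voting over per-model scores, rounded to the nearest integer class."""
    return round(float(np.mean(model_scores)))

# One model's vote: two neighbours with labels 2 and 0 at distances 1.0 and 2.0.
single = wknn_score([1.0, 2.0], [2, 0])  # (2*1.0 + 0*0.5) / 1.5

# Hypothetical per-model scores for the seven ensemble members.
scores = [2.9, 2.5, 2.2, 1.8, 2.4, 2.3, 2.7]
label = ensemble_predict(scores)  # mean = 2.4, rounded to class 2
```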
Model in ensemble   k    Sample (a): classes 0 1 2 3    Sample (b): classes 0 1 2 3
roBERTa             19              0  4 11  4                      5  3 11  0
DeepMoji            11              0  0  5  6                      2  2  5  2
USE                 19              2  5  7  5                      5  2  7  5
SBERT               21              6  5  6  4                      8  8  3  2
Word2Vec             5              1  1  0  3                      0  0  3  2
AI lexicon          11              2  1  3  5                      2  5  4  0
roBERTa with AI     11              0  2  8  1                      0  4  7  0", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "The next sample we examined is another anger test tweet, with gold label \"0\": \"We've been broken up a while, both moved on, she's got a kid, I don't hold any animosity towards her anymore...\" Our solution predicted a score of 1.5, which was rounded to 2, leading to a false prediction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "Similar to the previous sample, we took a look at the classes predicted by the different models in the ensemble (Table 7 , sample (b)). Here, we can observe that only the SBERT-based model predicted the result correctly, so roBERTa does not always provide the best answer.", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 120, "text": "(Table 7", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "We also explored the most frequent neighbours, which were chosen by 3 models (one tweet with class \"1\") and by 2 models (nine tweets with different classes). We did not find any noticeable patterns; the misclassification is probably caused by words with high emotional intensity, like \"animosity\", which is used in combination with a negation in this specific context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and error analysis", "sec_num": "6" }, { "text": "In this paper, we evaluate the application of an explainable machine learning method to the emotion detection task. 
As the main conclusion, we can say that the weighted kNN method, combined with simple optimizations, can perform nearly on par with more complex state-of-the-art neural network-based approaches. In the future, we plan to incorporate more elaborate nearest neighbour methodologies, which also take into account the inherently fuzzy nature of emotion data. Some initial experiments with ordered weighted average based fuzzy rough sets (Cornelis et al., 2010) show promising results.", "cite_spans": [ { "start": 537, "end": 560, "text": "(Cornelis et al., 2010)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "7" }, { "text": "Another observation that can be made from our results is that the most informative input for solving the emotion detection task is provided by the tweet embeddings, and that lexicons generally do not improve the results much. Meanwhile, adding the combined vector of the roBERTa embedding and the best lexicon scores increased the PCC scores noticeably. As a possible further improvement, we may refine the voting function by assigning different weights to the different members of the ensemble, based, for example, on confidence scores.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "7" }, { "text": "Furthermore, as another strategy to improve results, additional text preprocessing steps could be performed, for example, exploiting exclamation marks or applying word lemmatization. Also, we could give more weight to the hashtag and emoji descriptions during the tweet embedding process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "7" }, { "text": "Another important characteristic that influences the results is data imbalance. As observed, we obtained the lowest PCC scores on the Fear dataset, most likely because it is the most imbalanced one. 
For further experiments with Fear, we will consider the use of machine learning classification methods designed for imbalanced data. In particular, Vluymans (2019) discusses several approaches based on fuzzy rough set theory.", "cite_spans": [ { "start": 326, "end": 341, "text": "Vluymans (2019)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "7" }, { "text": "Finally, Danilevsky et al. (2020) provide several hints on how to investigate and improve solution explainability. For example, we can examine feature importance, measure the quality of explainability, etc.", "cite_spans": [ { "start": 9, "end": 33, "text": "Danilevsky et al. (2020)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and future work", "sec_num": "7" }, { "text": "Competition results: https://competitions.codalab.org/competitions/17751#results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://en.wikipedia.org/wiki/List_of_emoticons 4 https://pypi.org/project/emoji/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/dnanhkhoa/pytorch-pretrained-BERT/blob/master/examples/extract_features.py", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://competitions.codalab.org/competitions/17751#learn_the_details-evaluation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://competitions.codalab.org/competitions/17751#results", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by the Odysseus programme of the Research Foundation-Flanders (FWO).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgment", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "TweetEval: Unified benchmark and comparative evaluation for tweet classification", "authors": [ { "first": "Francesco", "middle": [], "last": "Barbieri", "suffix": "" }, { "first": "Jose", "middle": [], "last": "Camacho-Collados", "suffix": "" }, { "first": "Luis", "middle": [], "last": "Espinosa Anke", "suffix": "" }, { "first": "Leonardo", "middle": [], "last": "Neves", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "1644--1650", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.148" ] }, "num": null, "urls": [], "raw_text": "Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa Anke, and Leonardo Neves. 2020. TweetEval: Unified benchmark and comparative evaluation for tweet classification. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1644-1650, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Affective norms for english words (anew): Instruction manual and affective ratings", "authors": [ { "first": "M", "middle": [], "last": "Margaret", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Bradley", "suffix": "" }, { "first": "", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Margaret M Bradley and Peter J Lang. 1999. Affective norms for english words (anew): Instruction manual and affective ratings. Technical report, Technical report C-1, the center for research in psychophysiology.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Universal sentence encoder for English", "authors": [ { "first": "Daniel", "middle": [], "last": "Cer", "suffix": "" }, { "first": "Yinfei", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Sheng-Yi", "middle": [], "last": "Kong", "suffix": "" }, { "first": "Nan", "middle": [], "last": "Hua", "suffix": "" }, { "first": "Nicole", "middle": [], "last": "Limtiaco", "suffix": "" }, { "first": "Rhomni", "middle": [], "last": "St", "suffix": "" }, { "first": "Noah", "middle": [], "last": "John", "suffix": "" }, { "first": "Mario", "middle": [], "last": "Constant", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Guajardo-Cespedes", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Yuan", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Tar", "suffix": "" }, { "first": "Ray", "middle": [], "last": "Strope", "suffix": "" }, { "first": "", "middle": [], "last": "Kurzweil", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "169--174", "other_ids": { "DOI": [ "10.18653/v1/D18-2029" ] }, "num": null, "urls": [], 
"raw_text": "Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder for English. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 169-174, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Ordered weighted average based fuzzy rough sets", "authors": [ { "first": "Chris", "middle": [], "last": "Cornelis", "suffix": "" }, { "first": "Nele", "middle": [], "last": "Verbiest", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Jensen", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 5th International Conference on Rough Sets and Knowledge Technology (RSKT 2010)", "volume": "", "issue": "", "pages": "78--85", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chris Cornelis, Nele Verbiest, and Richard Jensen. 2010. Ordered weighted average based fuzzy rough sets. 
In Proceedings of the 5th International Conference on Rough Sets and Knowledge Technology (RSKT 2010), pages 78-85.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A survey of the state of explainable AI for natural language processing", "authors": [ { "first": "Marina", "middle": [], "last": "Danilevsky", "suffix": "" }, { "first": "Ranit", "middle": [], "last": "Kun Qian", "suffix": "" }, { "first": "Yannis", "middle": [], "last": "Aharonov", "suffix": "" }, { "first": "Ban", "middle": [], "last": "Katsis", "suffix": "" }, { "first": "Prithviraj", "middle": [], "last": "Kawas", "suffix": "" }, { "first": "", "middle": [], "last": "Sen", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "447--459", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marina Danilevsky, Kun Qian, Ranit Aharonov, Yannis Katsis, Ban Kawas, and Prithviraj Sen. 2020. A survey of the state of explainable AI for natural language processing. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 447-459, Suzhou, China. 
Association for Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The distance-weighted k-nearest-neighbor rule", "authors": [ { "first": "A", "middle": [], "last": "Sahibsingh", "suffix": "" }, { "first": "", "middle": [], "last": "Dudani", "suffix": "" } ], "year": 1976, "venue": "IEEE Transactions on Systems, Man, and Cybernetics", "volume": "", "issue": "", "pages": "325--327", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sahibsingh A Dudani. 1976. The distance-weighted k-nearest-neighbor rule. 
IEEE Transactions on Systems, Man, and Cybernetics, (4):325-327.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "SeerNet at SemEval-2018 task 1: Domain adaptation for affect in tweets", "authors": [ { "first": "Venkatesh", "middle": [], "last": "Duppada", "suffix": "" }, { "first": "Royal", "middle": [], "last": "Jain", "suffix": "" }, { "first": "Sushant", "middle": [], "last": "Hiray", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "18--23", "other_ids": { "DOI": [ "10.18653/v1/S18-1002" ] }, "num": null, "urls": [], "raw_text": "Venkatesh Duppada, Royal Jain, and Sushant Hiray. 2018. SeerNet at SemEval-2018 task 1: Domain adaptation for affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 18-23, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Richard Socher, and Caiming Xiong. 2020. Explaining and improving model behavior with k nearest neighbor representations. arXiv eprints", "authors": [ { "first": "Ben", "middle": [], "last": "Nazneen Fatema Rajani", "suffix": "" }, { "first": "Wengpeng", "middle": [], "last": "Krause", "suffix": "" }, { "first": "Tong", "middle": [], "last": "Yin", "suffix": "" }, { "first": "", "middle": [], "last": "Niu", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nazneen Fatema Rajani, Ben Krause, Wengpeng Yin, Tong Niu, Richard Socher, and Caiming Xiong. 2020. Explaining and improving model behavior with k nearest neighbor representations. 
arXiv e-prints, pages arXiv-2010.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "psyML at SemEval-2018 task 1: Transfer learning for sentiment and emotion analysis", "authors": [ { "first": "Grace", "middle": [], "last": "Gee", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "369--376", "other_ids": { "DOI": [ "10.18653/v1/S18-1056" ] }, "num": null, "urls": [], "raw_text": "Grace Gee and Eugene Wang. 2018. psyML at SemEval-2018 task 1: Transfer learning for sentiment and emotion analysis. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 369-376, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Similarity measures for text document clustering", "authors": [ { "first": "Anna", "middle": [], "last": "Huang", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the sixth new zealand computer science research student conference (NZCSRSC2008)", "volume": "4", "issue": "", "pages": "9--56", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Huang. 2008. Similarity measures for text document clustering. 
In Proceedings of the sixth New Zealand computer science research student conference (NZCSRSC2008), Christchurch, New Zealand, volume 4, pages 9-56.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Towards explainable NLP: A generative explanation framework for text classification", "authors": [ { "first": "Hui", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Qingyu", "middle": [], "last": "Yin", "suffix": "" }, { "first": "William", "middle": [ "Yang" ], "last": "Wang", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5570--5581", "other_ids": { "DOI": [ "10.18653/v1/P19-1560" ] }, "num": null, "urls": [], "raw_text": "Hui Liu, Qingyu Yin, and William Yang Wang. 2019. Towards explainable NLP: A generative explanation framework for text classification. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5570-5581, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Obtaining reliable human ratings of valence, arousal, and dominance for", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "", "volume": "20", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P18-1017" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad. 2018a. Obtaining reliable human ratings of valence, arousal, and dominance for 20,000", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "English words", "authors": [], "year": null, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "174--184", "other_ids": { "DOI": [ "10.18653/v1/P18-1017" ] }, "num": null, "urls": [], "raw_text": "English words. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 174-184, Melbourne, Australia. Association for Computational Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SemEval-2018 task 1: Affect in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--17", "other_ids": { "DOI": [ "10.18653/v1/S18-1001" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Word affect intensities", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad. 2018b. Word affect intensities. 
In Proceedings of the 11th Edition of the Language Resources and Evaluation Conference (LREC-2018), Miyazaki, Japan.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Using hashtags to capture fine emotion categories from tweets", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2015, "venue": "Computational Intelligence", "volume": "31", "issue": "2", "pages": "301--326", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M Mohammad and Svetlana Kiritchenko. 2015. Using hashtags to capture fine emotion categories from tweets. Computational Intelligence, 31(2):301-326.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Crowdsourcing a word-emotion association lexicon", "authors": [ { "first": "M", "middle": [], "last": "Saif", "suffix": "" }, { "first": "", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "D", "middle": [], "last": "Peter", "suffix": "" }, { "first": "", "middle": [], "last": "Turney", "suffix": "" } ], "year": 2013, "venue": "", "volume": "29", "issue": "", "pages": "436--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif M. Mohammad and Peter D. Turney. 2013. Crowdsourcing a word-emotion association lexicon. 
29(3):436-465.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Sentence-BERT: Sentence embeddings using Siamese BERT-networks", "authors": [ { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Iryna", "middle": [], "last": "Gurevych", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "3982--3992", "other_ids": { "DOI": [ "10.18653/v1/D19-1410" ] }, "num": null, "urls": [], "raw_text": "Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Amobee at SemEval-2018 task 1: GRU neural network with a CNN attention mechanism for sentiment classification", "authors": [ { "first": "Alon", "middle": [], "last": "Rozental", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Fleischer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "218--225", "other_ids": { "DOI": [ "10.18653/v1/S18-1033" ] }, "num": null, "urls": [], "raw_text": "Alon Rozental and Daniel Fleischer. 2018. Amobee at SemEval-2018 task 1: GRU neural network with a CNN attention mechanism for sentiment classification. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 218-225, New Orleans, Louisiana. 
Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Dealing with imbalanced and weakly labelled data in machine learning using fuzzy and rough set methods", "authors": [ { "first": "Sarah", "middle": [], "last": "Vluymans", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sarah Vluymans. 2019. Dealing with imbalanced and weakly labelled data in machine learning using fuzzy and rough set methods. Springer.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Norms of valence, arousal, and dominance for 13,915 english lemmas. Behavior research methods", "authors": [ { "first": "Amy", "middle": [ "Beth" ], "last": "Warriner", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Kuperman", "suffix": "" }, { "first": "Marc", "middle": [], "last": "Brysbaert", "suffix": "" } ], "year": 2013, "venue": "", "volume": "45", "issue": "", "pages": "1191--1207", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 english lemmas. Behavior research methods, 45(4):1191-1207.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Emotion analysis of twitter data that use emoticons and emoji ideograms", "authors": [ { "first": "Wies\u0142aw", "middle": [], "last": "Wolny", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wies\u0142aw Wolny. 2016. Emotion analysis of twitter data that use emoticons and emoji ideograms.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "The scheme of the ensemble architecture.", "num": null, "uris": null, "type_str": "figure" }, "TABREF0": { "type_str": "table", "html": null, "text": "The best setup for each emotion for different embeddings.", "num": null, "content": "
Setup                   Anger    Joy      Sadness  Fear
roBERTa-based
Tweets preprocessing    No       No       No       Yes
Stop-words cleaning     No       No       No       No
Number of neighbors     19       13       9        11
PCC                     0.6651   0.6919   0.7055   0.5694
DeepMoji
Tweets preprocessing    Yes      Yes      Yes      Yes
Stop-words cleaning     Yes      No       No       No
Number of neighbors     11       21       13       13
PCC                     0.6190   0.6426   0.6490   0.5737
USE
Tweets preprocessing    Yes      Yes      Yes      No
Stop-words cleaning     No       No       No       No
Number of neighbors     19       21       19       11
PCC                     0.5174   0.5580   0.6067   0.5589
SBERT
Tweets preprocessing    Yes      Yes      Yes      Yes
Stop-words cleaning     No       No       No       No
Number of neighbors     21       9        21       13
PCC                     0.4946   0.5413   0.5505   0.4608
Word2Vec
Tweets preprocessing    Yes      Yes      Yes      Yes
Stop-words cleaning     Yes      Yes      Yes      Yes
Number of neighbors     5        23       21       13
PCC                     0.4824   0.4791   0.5136   0.4303
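Each block in the table above pairs one text representation with a weighted kNN classifier whose number of neighbors is tuned per emotion. As a rough illustration of the underlying idea, the sketch below uses inverse-distance weighting over toy random vectors standing in for tweet embeddings; the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x, k):
    """Distance-weighted kNN: each of the k nearest neighbours votes for
    its class with weight 1/(distance + eps); the heaviest class wins."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-8)
    votes = {}
    for label, w in zip(y_train[nearest], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Toy stand-ins for tweet embeddings and ordinal intensity labels (0-3,
# as in the SemEval-2018 EI-oc task).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 16))
y_train = rng.integers(0, 4, size=100)
x_test = rng.normal(size=16)

print(weighted_knn_predict(X_train, y_train, x_test, k=19))
```

With k=1 and a query taken from the training set, the classifier returns that point's own label, which is a quick sanity check on the weighting.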
" }, "TABREF1": { "type_str": "table", "html": null, "text": "Results for the lexicon-based approach.", "num": null, "content": "
LexiconAngerJoySadness Fear
VAD0.1983 0.2823 0.2043 0.0928
EMOLEX 0.3014 0.2893 0.3404 0.1943
AI0.3284 0.2673 0.3723 0.1549
ANEW0.1972 0.3050 0.3254 0.2278
Warriner0.1901 0.2705 0.2970 0.1505
Combined 0.2133 0.3051 0.3151 0.1626
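The lexicon features scored above can be built by averaging word-level intensity values over a tweet's tokens. The sketch below uses a small hypothetical anger lexicon; the paper's actual features come from resources such as EMOLEX, ANEW, and the Warriner et al. norms.

```python
# Hypothetical word -> anger-intensity entries (illustration only).
lexicon = {"furious": 0.96, "rage": 0.91, "calm": 0.10, "happy": 0.05}

def lexicon_score(tokens, lex):
    """Average lexicon score over the tokens found in the lexicon;
    0.0 if no token matches."""
    hits = [lex[t] for t in tokens if t in lex]
    return sum(hits) / len(hits) if hits else 0.0

print(lexicon_score("i am furious with rage".split(), lexicon))  # -> 0.935
```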
" }, "TABREF2": { "type_str": "table", "html": null, "text": "The best setup for each emotion for different lexicon-based feature vectors.", "num": null, "content": "
Dataset Lexicon k value PCC
AngerAI110.3359
JoyCombined190.3320
SadnessAI230.3723
FearANEW170.2412
" }, "TABREF3": { "type_str": "table", "html": null, "text": "Results for the first combination approach.", "num": null, "content": "" }, "TABREF4": { "type_str": "table", "html": null, "text": "Results for the ensemble approach with different feature vectors, for all datasets.", "num": null, "content": "
Feature vectors                                 Vector size  Anger    Joy      Sadness  Fear
Baseline (top-five embedding vectors)           5            0.6929   0.7420   0.7329   0.6783
With the best lexicon                           6            0.6902   0.7336   0.7400   0.6773
With all five lexicons and their combination    11           0.6431   0.6796   0.6962   0.6585
With roBERTa combined with the best lexicon     6            0.7120   0.7496   0.7579   0.6719
With the best lexicon and roBERTa combined
with the best lexicon                           7            0.7190   0.7526   0.7566   0.6804
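One simple way to fuse the per-model outputs behind these ensemble scores is to average the ordinal predictions of the base models and round to the nearest class. The sketch below uses hypothetical predictions for four test tweets; the paper's exact fusion rule may differ.

```python
import numpy as np

# Hypothetical ordinal predictions (0-3) for the same four test tweets,
# one row per base model in the ensemble (illustration only).
preds = np.array([
    [2, 3, 1, 0],   # roBERTa-based kNN
    [2, 2, 1, 0],   # DeepMoji kNN
    [1, 3, 1, 1],   # USE kNN
    [2, 3, 0, 0],   # SBERT kNN
    [2, 2, 1, 0],   # Word2Vec kNN
    [3, 3, 1, 0],   # lexicon-feature kNN
])

# Average the ordinal labels over models and round to the nearest class.
ensemble = np.rint(preds.mean(axis=0)).astype(int)
print(ensemble)  # -> [2 3 1 0]
```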
" }, "TABREF5": { "type_str": "table", "html": null, "text": "Pearson Coefficient of the best approach on the cross-validation and test data for the four emotion datasets.", "num": null, "content": "
Dataset          Training and development data  Test data
Anger            0.719                          0.638
Joy              0.752                          0.631
Sadness          0.756                          0.670
Fear             0.680                          0.601
Averaged scores  0.726                          0.635
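The scores in this table are Pearson correlation coefficients between gold and predicted intensity labels. A quick sketch of how such a score is computed, on hypothetical gold/predicted labels rather than the actual SemEval-2018 data:

```python
import numpy as np

# Hypothetical gold ordinal intensities (0-3) and system predictions
# for one emotion dataset (illustration only).
gold = np.array([0, 1, 2, 3, 1, 2, 0, 3, 2, 1])
pred = np.array([0, 1, 2, 2, 1, 3, 0, 3, 2, 0])

# Pearson correlation coefficient between gold and predicted scores.
r = float(np.corrcoef(gold, pred)[0, 1])
print(round(r, 3))  # -> 0.876
```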
" }, "TABREF6": { "type_str": "table", "html": null, "text": "Predictions of models from the ensemble for some test tweets.", "num": null, "content": "" } } } }