{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:19.622034Z"
},
"title": "Leveraging Emotion-specific Features to Improve Transformer Performance for Emotion Classification",
"authors": [
{
"first": "Atharva",
"middle": [],
"last": "Kshirsagar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Shaily",
"middle": [],
"last": "Desai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Aditi",
"middle": [],
"last": "Sidnerlikar",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Nikhil",
"middle": [],
"last": "Khodake",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": "[email protected]"
},
{
"first": "Manisha",
"middle": [],
"last": "Marathe",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Savitribai Phule Pune University",
"location": {
"country": "India"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes team PVG's AI Club's approach to the Emotion Classification shared task held at WASSA 2022. This Track 2 subtask focuses on building models which can predict a multi-class emotion label based on essays from news articles where a person, group or another entity is affected. Baseline transformer models have been demonstrating good results on sequence classification tasks, and we aim to improve this performance with the help of ensembling techniques, and by leveraging two variations of emotion-specific representations. We observe better results than our baseline models and achieve an accuracy of 0.619 and a macro F1 score of 0.520 on the emotion classification task.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes team PVG's AI Club's approach to the Emotion Classification shared task held at WASSA 2022. This Track 2 subtask focuses on building models which can predict a multi-class emotion label based on essays from news articles where a person, group or another entity is affected. Baseline transformer models have been demonstrating good results on sequence classification tasks, and we aim to improve this performance with the help of ensembling techniques, and by leveraging two variations of emotion-specific representations. We observe better results than our baseline models and achieve an accuracy of 0.619 and a macro F1 score of 0.520 on the emotion classification task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Rapid growth in the availability of humanannotated text documents has led to an increase in methodologies for tasks such as classification, clustering and knowledge extraction. A multitude of sources have enabled public access to structured and semi-structured data comprising of news stories, written repositories, blog content, among countless other roots of information. (Bostan and Klinger, 2018) showed that the task of emotion classification has emerged from being purely research oriented to being of vital importance in fields like dialog systems, intelligent agents, and analysis and diagnosis of mental disorders.",
"cite_spans": [
{
"start": 374,
"end": 400,
"text": "(Bostan and Klinger, 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Humans themselves sometimes find it tough to comprehend the various layers of subtlety in emotions, and hence there has been only a limited amount of prior research revolving around emotion classification. It has been noted that larger deep learning models can also find it quite challenging to fully grasp the nuances and underlying context of human emotion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the advent of Transformer (Vaswani et al., 2017) models, there has been an increase in performance for emotion classification of text-based models. Most transformer-based language models (Devlin et al., 2018; Raffel et al., 2019; Radford et al., 2018) are pretrained on various self-supervised objectives. Combining transformer based sentence representations with domain-specialised representations for improving performance on the specific task has been successfully used in across many NLP domains (Peinelt et al., 2020; Poerner et al., 2020; Zhang et al., 2021) . Building on these foundations, we propose a similar approach to the task of Emotion classification.",
"cite_spans": [
{
"start": 31,
"end": 53,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 192,
"end": 213,
"text": "(Devlin et al., 2018;",
"ref_id": "BIBREF3"
},
{
"start": 214,
"end": 234,
"text": "Raffel et al., 2019;",
"ref_id": "BIBREF14"
},
{
"start": 235,
"end": 256,
"text": "Radford et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 505,
"end": 527,
"text": "(Peinelt et al., 2020;",
"ref_id": null
},
{
"start": 528,
"end": 549,
"text": "Poerner et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 550,
"end": 569,
"text": "Zhang et al., 2021)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "*Equal Contribution",
"sec_num": null
},
{
"text": "In this paper, we posit a solution to the WASSA 2022 Shared Task on Empathy Detection, Emotion Classification and Personality Detection, specifically Track-2, emotion classification. We propose a hybrid model where we combine information from various entities to create a rich final representation of each datapoint, and the observed results show promise in combining the Transformer output with the emotion-specific embeddings and NRC features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "*Equal Contribution",
"sec_num": null
},
{
"text": "The rest of the paper is organized in the following manner: Section 2 offers an overview into the dataset on Empathetic concern in news stories, Section 3 goes in depth about our proposed methodology with subsections describing the individual constituent modules. Section 4 explains the experimental and training setup along with the baselines used; Section 5 elucidates the observed results, and Section 6 concludes this study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "*Equal Contribution",
"sec_num": null
},
{
"text": "The dataset provided by the organizers consists of 1860 essays in the training set, 270 in the dev set and 525 in the test set. Each of these essays has been annotated for empathy and empathy scores, distress and distress scores, emotion, personality feature and interpersonal reactivity features. Since this paper describes an approach only to the Emotion classification task, we shall only describe the data for said subtask. Each essay has been assigned an emotion class similar to classes in (Ekman, 1992) . Table 1 provides a description on the training, validation and testing subset, and Figure 1 shows the distribution of the training data among the various emotion classes. We make use of the pretrained RoBERTa base model for this task. RoBERTa provides contextualized essay-level representations which can capture context sensitive information better than static representations. For each essay E in our corpus, we obtain a 768 dimensional representation R, encoded using the CLS token in the final hidden layer of the RoBERTa base model. We further process this representation R with Linear and Dropout layers before concatenating it with our emotion-specific representations. 3.2 Emotion-Enriched Word Embeddings(EWE) (Labutov and Lipson, 2013; Bansal et al., 2014) ,argue that the effectiveness of word embeddings is highly task dependent. To obtain word embeddings specific for emotion classification, we used the emotion-enriched embeddings from (Agrawal et al., 2018) . The weight matrix was made by mapping the vocabulary from our dataset to the 300 dimensional corresponding vector in the pre-trained embedding file. Each essay was mapped to the embedding matrix into a final representation shape of (100,300). This representation was passed through 2 Conv1d and 2 Maxpool layers to obtain a 16 dimensional feature vector C \u2208 R d2 .",
"cite_spans": [
{
"start": 496,
"end": 509,
"text": "(Ekman, 1992)",
"ref_id": "BIBREF4"
},
{
"start": 1231,
"end": 1257,
"text": "(Labutov and Lipson, 2013;",
"ref_id": "BIBREF6"
},
{
"start": 1258,
"end": 1278,
"text": "Bansal et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 1462,
"end": 1484,
"text": "(Agrawal et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 512,
"end": 519,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 595,
"end": 603,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset",
"sec_num": "2"
},
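The EWE branch described above lends itself to a short sketch. The following PyTorch module is a minimal, hypothetical reconstruction: the paper specifies pre-trained 300-dimensional emotion-enriched embeddings, essays of up to 100 tokens, two Conv1d and two Maxpool layers, Tanh activations, and a 16-dimensional output C; the channel counts, kernel sizes and the use of adaptive pooling are our assumptions.

```python
import torch
import torch.nn as nn

class EWEFeatureExtractor(nn.Module):
    """Map a tokenised essay (max length 100) through pre-trained
    emotion-enriched embeddings (300-d) and a small Conv1d/MaxPool stack
    to a 16-dimensional feature vector C."""

    def __init__(self, embedding_matrix: torch.Tensor):
        super().__init__()
        # embedding_matrix: (vocab_size, 300), built from the EWE file
        self.embedding = nn.Embedding.from_pretrained(embedding_matrix, freeze=True)
        # Channel counts, kernel sizes and pooling widths are assumptions;
        # the paper only states "2 Conv1d and 2 Maxpool layers" ending in 16 dims.
        self.conv1 = nn.Conv1d(in_channels=300, out_channels=64, kernel_size=3)
        self.pool1 = nn.MaxPool1d(kernel_size=2)
        self.conv2 = nn.Conv1d(in_channels=64, out_channels=16, kernel_size=3)
        self.pool2 = nn.AdaptiveMaxPool1d(output_size=1)  # collapse the length dimension
        self.act = nn.Tanh()                               # Tanh, as stated in Section 4.2

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embedding(token_ids)          # (batch, 100, 300)
        x = x.permute(0, 2, 1)                 # Conv1d expects (batch, channels, length)
        x = self.pool1(self.act(self.conv1(x)))
        x = self.pool2(self.act(self.conv2(x)))
        return x.squeeze(-1)                   # (batch, 16) == C in R^{d2}

# Example with a random matrix standing in for the real EWE vectors
vocab_size = 5000
extractor = EWEFeatureExtractor(torch.randn(vocab_size, 300))
dummy_batch = torch.randint(0, vocab_size, (4, 100))
print(extractor(dummy_batch).shape)  # torch.Size([4, 16])
```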
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = RoBERT a(E) \u2208 R d1",
"eq_num": "(1)"
}
],
"section": "Set Essays",
"sec_num": null
},
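Equation (1) corresponds to taking the CLS-position vector from roberta-base's final hidden layer. A minimal sketch using the Huggingface Transformers library follows; the Linear and Dropout post-processing mentioned above is omitted, and the 100-token truncation is an assumption carried over from the data-preparation section.

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
roberta = RobertaModel.from_pretrained("roberta-base")

def encode_essay(essay: str) -> torch.Tensor:
    """Return the 768-d representation R for one essay, per Eq. (1)."""
    inputs = tokenizer(essay, truncation=True, max_length=100, return_tensors="pt")
    with torch.no_grad():
        outputs = roberta(**inputs)
    # First token (<s>, RoBERTa's CLS equivalent) of the last hidden layer
    return outputs.last_hidden_state[:, 0, :]   # shape (1, 768) == R in R^{d1}

r = encode_essay("I felt a deep sense of sadness after reading the article.")
print(r.shape)  # torch.Size([1, 768])
```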
{
"text": "The NRC emotion intensity lexicon (Mohammad, 2018 ) is a collection of close to 10, 000 words associated with a distinct real valued intensity score assigned for eight basic emotions. Incorporating this lexicon in classification tasks has been proven to boost performance (Kulkarni et al., 2021) . Of the 8 basic emotions in the lexicon, 6 emotions-anger, joy, sadness, disgust, fear and surprise coincide with the given dataset and hence lexical features for only these features were considered. For every essay in the dataset, we calculate the value for one emotion by summing the individual scores for Table 2 : Resulting metrics on baseline models as compared to our methodology every word W in the essay that occurs in the NRC lexicon. We then create a six dimensional vector N corresponding to that essay which consists of the scores of the emotions in our dataset.",
"cite_spans": [
{
"start": 34,
"end": 49,
"text": "(Mohammad, 2018",
"ref_id": "BIBREF10"
},
{
"start": 272,
"end": 295,
"text": "(Kulkarni et al., 2021)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [
{
"start": 605,
"end": 612,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "NRC Representation",
"sec_num": "3.3"
},
{
"text": "For a datapoint E, the six values of S emotion and the feature vector N was constructed in the following manner:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Representation",
"sec_num": "3.3"
},
{
"text": "S emotion = W emotion (W \u2208 E) (2) N = [S anger ; S joy ; .....; S surprise ] \u2208 R d3 (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NRC Representation",
"sec_num": "3.3"
},
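Equations (2)-(3) amount to a per-emotion sum of lexicon intensity scores over the words of an essay. The sketch below assumes the standard tab-separated distribution format of the NRC Emotion Intensity Lexicon (word, emotion, score per line); the file path and the toy lexicon are placeholders.

```python
from collections import defaultdict

EMOTIONS = ["anger", "joy", "sadness", "disgust", "fear", "surprise"]

def load_nrc_intensity(path: str) -> dict:
    """Parse the NRC Emotion Intensity Lexicon (assumed word \t emotion \t score per line)."""
    lexicon = defaultdict(dict)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.strip().split("\t")
            if len(parts) != 3:          # skip header or malformed lines
                continue
            word, emotion, score = parts
            lexicon[word][emotion] = float(score)
    return lexicon

def nrc_vector(essay_tokens: list, lexicon: dict) -> list:
    """Eqs. (2)-(3): sum per-emotion intensity scores over all essay words
    found in the lexicon, giving the 6-d vector N."""
    scores = {e: 0.0 for e in EMOTIONS}
    for word in essay_tokens:
        if word in lexicon:
            for emotion in EMOTIONS:
                scores[emotion] += lexicon[word].get(emotion, 0.0)
    return [scores[e] for e in EMOTIONS]

# Example with a toy lexicon standing in for the real file
toy_lexicon = {"grief": {"sadness": 0.83, "fear": 0.21}, "smile": {"joy": 0.70}}
print(nrc_vector(["grief", "and", "a", "smile"], toy_lexicon))
# [0.0, 0.7, 0.83, 0.0, 0.21, 0.0]
```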
{
"text": "The feature vectors obtained from the RoBERTa (R), Emotion-Enriched Embeddings (C) and NRC (N )were concatenated to obtain the final representation (F ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Representation and Classification",
"sec_num": "3.4"
},
{
"text": "F = [R; C; N ] \u2208 R d1+d2+d3 (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combined Representation and Classification",
"sec_num": "3.4"
},
{
"text": "This representation is then passed through a single Linear layer with the Softmax activation. Figure 2 depicts the model architecture in detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 94,
"end": 100,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Combined Representation and Classification",
"sec_num": "3.4"
},
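A minimal sketch of the combined representation and classification head (Eq. 4 plus the single Linear layer with Softmax) is given below. The dimensions d1 = 768, d2 = 16 and d3 = 6 follow the preceding subsections; the number of emotion classes is an assumption and should be set to match the dataset's label set.

```python
import torch
import torch.nn as nn

class EmotionClassifier(nn.Module):
    """Concatenate R (768-d), C (16-d) and N (6-d) and classify with a single
    Linear layer. num_classes = 7 is an assumption (Ekman-style labels plus neutral)."""

    def __init__(self, d1: int = 768, d2: int = 16, d3: int = 6, num_classes: int = 7):
        super().__init__()
        self.classifier = nn.Linear(d1 + d2 + d3, num_classes)

    def forward(self, r: torch.Tensor, c: torch.Tensor, n: torch.Tensor) -> torch.Tensor:
        f = torch.cat([r, c, n], dim=-1)   # Eq. (4): F = [R; C; N]
        return self.classifier(f)          # raw logits; Softmax is applied at inference,
                                           # since CrossEntropyLoss expects logits

model = EmotionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 16), torch.randn(4, 6))
probs = torch.softmax(logits, dim=-1)
print(probs.shape)  # torch.Size([4, 7])
```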
{
"text": "Standard text cleaning steps like removing numbers, special characters, punctuation, accidental spaces, etc. were applied to each essay in the corpus. Stopwords were removed using the nltk (Loper and Bird, 2002) library. Every essay was tokenized to a maximum length of 100, and essays larger than this length were truncated. No standardization was done in the case of NRC scores, as we wanted to feed our model a vector of raw emotion-intensity scores for each of the six emotions considered in our NRC representation.",
"cite_spans": [
{
"start": 189,
"end": 211,
"text": "(Loper and Bird, 2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
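A short sketch of the preprocessing pipeline described above; lowercasing and the exact regular expression are assumptions, since the paper lists the cleaning steps but not the implementation.

```python
import re
import nltk
from nltk.corpus import stopwords

try:
    STOPWORDS = set(stopwords.words("english"))
except LookupError:          # one-time download of the nltk stopword list
    nltk.download("stopwords")
    STOPWORDS = set(stopwords.words("english"))

MAX_LEN = 100

def clean_essay(text: str) -> list:
    """Lowercase, strip numbers/punctuation/special characters and extra spaces,
    drop stopwords, and truncate to 100 tokens."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)              # remove digits, punctuation, symbols
    tokens = [t for t in text.split() if t not in STOPWORDS]
    return tokens[:MAX_LEN]                             # truncate overly long essays

print(clean_essay("The 3 survivors were rescued!!  It was an overwhelming relief."))
# ['survivors', 'rescued', 'overwhelming', 'relief']
```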
{
"text": "We used the pretrained 'roberta-base' model from the Huggingface Transformers 1 library. All other modules used in our methodology were built using PyTorch. As observed by (Kulkarni et al., 2021) , we also found that the Hyperbolic Tangent(Tanh) activation function worked better than ReLU, and hence we used the Tanh activation for all layers in our model. The model was trained using an AdamW optimizer (Loshchilov and Hutter, 2019 ) with a learning rate of 0.001 and beta values set to \u03b2 1 = 0.9, \u03b2 2 = 0.99 and the loss used was cross entropy loss. Additionally, early stopping was used if the validation loss does not decrease after 10 successive epochs. The batch size was set to 64 for both Baseline models as well as the proposed model. A single Nvidia P100-16GB GPU provided by Google Colab was used to train all models.",
"cite_spans": [
{
"start": 172,
"end": 195,
"text": "(Kulkarni et al., 2021)",
"ref_id": "BIBREF5"
},
{
"start": 405,
"end": 433,
"text": "(Loshchilov and Hutter, 2019",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Setup",
"sec_num": "4.2"
},
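The training setup translates into a compact loop; the sketch below assumes hypothetical model and data-loader objects and an arbitrary epoch cap, while the optimizer, learning rate, betas, loss and 10-epoch early-stopping patience follow the setup described above.

```python
import torch
from torch.optim import AdamW

def train(model, train_loader, val_loader, max_epochs: int = 100, patience: int = 10):
    """`model` is the combined classifier; loaders yield (r, c, n, labels) batches
    (batch size 64 in the paper). max_epochs is an assumption."""
    optimizer = AdamW(model.parameters(), lr=1e-3, betas=(0.9, 0.99))
    criterion = torch.nn.CrossEntropyLoss()
    best_val, epochs_without_improvement = float("inf"), 0

    for epoch in range(max_epochs):
        model.train()
        for r, c, n, labels in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(r, c, n), labels)
            loss.backward()
            optimizer.step()

        model.eval()
        with torch.no_grad():
            val_loss = sum(criterion(model(r, c, n), labels).item()
                           for r, c, n, labels in val_loader)

        # Early stopping: halt if validation loss fails to improve for `patience` epochs
        if val_loss < best_val:
            best_val, epochs_without_improvement = val_loss, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break
```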
{
"text": "Our goal in this work is to examine if concatenating emotional-specific features to pre-existing transformer models leads to an increase in the emotion classification performance of these models. Hence, we compare our proposed methodology to the vanilla RoBERTa model, as well as RoBERTa + Ewe for the emotion classification subtask.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "The results for the emotion prediction task on the validation set are given in Table 2 . There was no use of validation data during the training process, and the provided validation data was used as unseen testing data to benchmark the models.The official metric for Track 2 of the shared task was the macro F1 score. To ensure fair comparison, the validation set results have been averaged over 3 runs for each model. The proposed model shows a 7% increase in macro F1 scores and 8% increase in ac-curacy over the vanilla RoBERTa model. The proposed model also shows the effectiveness of adding the NRC representations described in section 3.3 as it performs slightly better than the RoBERTa + Emotion Enriched word embeddings model. We attribute this increase in performance to the taskspecific representations of essays used in our system. During the training process, it was observed that the performance of all models was highly susceptible to how they were initialized, and we received a large range of results across different seeds. As a result, a true assessment of our method can only be made in comparison to baseline models with the same seed, as we have done in this study.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "5"
},
{
"text": "The goal of this study was to examine and enhance the performance of transformer models using only the Empathetic Concern in News Stories dataset that was provided to us, with the prospective of testing our method on a bigger dataset in the future. We proposed a model ensemble which combined the transformer feature vector with the emotion-intensive word embeddings along with the word-specific features obtained from the NRC lexicon. We demonstrate results that outperform the baseline vanilla RoBERTa model, and attest that combining domain-specific features can indeed improve performance on a task as involute as emotion classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://huggingface.co/transformers/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Learning emotion-enriched word representations",
"authors": [
{
"first": "Ameeta",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Aijun",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Manos",
"middle": [],
"last": "Papagelis",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "950--961",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ameeta Agrawal, Aijun An, and Manos Papagelis. 2018. Learning emotion-enriched word representations. In Proceedings of the 27th International Conference on Computational Linguistics, pages 950-961, Santa Fe, New Mexico, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Tailoring continuous word representations for dependency parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Livescu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "809--815",
"other_ids": {
"DOI": [
"10.3115/v1/P14-2131"
]
},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for depen- dency parsing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Lin- guistics (Volume 2: Short Papers), pages 809-815, Baltimore, Maryland. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "An analysis of annotated corpora for emotion classification in text",
"authors": [
{
"first": "Laura-Ana-Maria",
"middle": [],
"last": "Bostan",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Klinger",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 27th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2104--2119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laura-Ana-Maria Bostan and Roman Klinger. 2018. An analysis of annotated corpora for emotion clas- sification in text. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 2104-2119, Santa Fe, New Mexico, USA. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "An argument for basic emotions",
"authors": [
{
"first": "Paul",
"middle": [],
"last": "Ekman",
"suffix": ""
}
],
"year": 1992,
"venue": "Cognition and Emotion",
"volume": "6",
"issue": "3-4",
"pages": "169--200",
"other_ids": {
"DOI": [
"10.1080/02699939208411068"
]
},
"num": null,
"urls": [],
"raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition and Emotion, 6(3-4):169-200.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "PVG at WASSA 2021: A multi-input, multi-task, transformer-based architecture for empathy and distress prediction",
"authors": [
{
"first": "Atharva",
"middle": [],
"last": "Kulkarni",
"suffix": ""
},
{
"first": "Sunanda",
"middle": [],
"last": "Somwase",
"suffix": ""
},
{
"first": "Shivam",
"middle": [],
"last": "Rajput",
"suffix": ""
},
{
"first": "Manisha",
"middle": [],
"last": "Marathe",
"suffix": ""
}
],
"year": 2021,
"venue": "Proceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "105--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Atharva Kulkarni, Sunanda Somwase, Shivam Rajput, and Manisha Marathe. 2021. PVG at WASSA 2021: A multi-input, multi-task, transformer-based archi- tecture for empathy and distress prediction. In Pro- ceedings of the Eleventh Workshop on Computational Approaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 105-111, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Re-embedding words",
"authors": [
{
"first": "Igor",
"middle": [],
"last": "Labutov",
"suffix": ""
},
{
"first": "Hod",
"middle": [],
"last": "Lipson",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "489--493",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Igor Labutov and Hod Lipson. 2013. Re-embedding words. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Vol- ume 2: Short Papers), pages 489-493, Sofia, Bul- garia. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Roberta: A robustly optimized bert pretraining approach",
"authors": [
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Jingfei",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Mandar",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. ArXiv, abs/1907.11692.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Nltk: The natural language toolkit",
"authors": [
{
"first": "Edward",
"middle": [],
"last": "Loper",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bird",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "63--70",
"other_ids": {
"DOI": [
"10.3115/1118108.1118117"
]
},
"num": null,
"urls": [],
"raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The natu- ral language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Com- putational Linguistics -Volume 1, ETMTNLP '02, page 63-70, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Decoupled weight decay regularization",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Loshchilov",
"suffix": ""
},
{
"first": "Frank",
"middle": [],
"last": "Hutter",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In International Confer- ence on Learning Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Word affect intensities",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad. 2018. Word affect intensities. In Pro- ceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources As- sociation (ELRA).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "2020. tBERT: Topic models and BERT joining forces for semantic similarity detection",
"authors": [
{
"first": "Nicole",
"middle": [],
"last": "Peinelt",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7047--7055",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.630"
]
},
"num": null,
"urls": [],
"raw_text": "Nicole Peinelt, Dong Nguyen, and Maria Liakata. 2020. tBERT: Topic models and BERT joining forces for semantic similarity detection. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7047-7055, Online. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "E-BERT: Efficient-yet-effective entity embeddings for BERT",
"authors": [
{
"first": "Nina",
"middle": [],
"last": "Poerner",
"suffix": ""
},
{
"first": "Ulli",
"middle": [],
"last": "Waltinger",
"suffix": ""
},
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of the Association for Computational Linguistics: EMNLP 2020",
"volume": "",
"issue": "",
"pages": "803--818",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.71"
]
},
"num": null,
"urls": [],
"raw_text": "Nina Poerner, Ulli Waltinger, and Hinrich Sch\u00fctze. 2020. E-BERT: Efficient-yet-effective entity embeddings for BERT. In Findings of the Association for Compu- tational Linguistics: EMNLP 2020, pages 803-818, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improving language understanding by generative pre-training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language under- standing by generative pre-training.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter J",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1910.10683"
]
},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, volume 30. Curran Associates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Combining static word embeddings and contextual representations for bilingual lexicon induction",
"authors": [
{
"first": "Jinpeng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Baijun",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Nini",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yangbin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Weihua",
"middle": [],
"last": "Luo",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
"volume": "",
"issue": "",
"pages": "2943--2955",
"other_ids": {
"DOI": [
"10.18653/v1/2021.findings-acl.260"
]
},
"num": null,
"urls": [],
"raw_text": "Jinpeng Zhang, Baijun Ji, Nini Xiao, Xiangyu Duan, Min Zhang, Yangbin Shi, and Weihua Luo. 2021. Combining static word embeddings and contextual representations for bilingual lexicon induction. In Findings of the Association for Computational Lin- guistics: ACL-IJCNLP 2021, pages 2943-2955, On- line. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "Figure 2: Model Architecture",
"type_str": "figure",
"num": null
},
"TABREF1": {
"content": "<table><tr><td>: Total datapoints for every set</td></tr></table>",
"text": "",
"type_str": "table",
"html": null,
"num": null
}
}
}
}