{ "paper_id": "2022", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:07:42.960463Z" }, "title": "On the Complementarity of Images and Text for the Expression of Emotions in Social Media", "authors": [ { "first": "Anna", "middle": [], "last": "Khlyzova", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of South Florida", "location": { "country": "USA" } }, "email": "anna.khlyzova@ims.uni-stuttgart.de" }, { "first": "Carina", "middle": [], "last": "Silberer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": { "country": "Germany" } }, "email": "carina.silberer@ims.uni-stuttgart.de" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Stuttgart", "location": { "country": "Germany" } }, "email": "roman.klinger@ims.uni-stuttgart.de" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. We evaluate if these tasks require both modalities and find for the imagetext relations, that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots. My everyday joy is to see my adorable cat smiles. And I've just realized, my cat can \"dance with music\". Amazing!", "pdf_parse": { "paper_id": "2022", "_pdf_hash": "", "abstract": [ { "text": "Authors of posts in social media communicate their emotions and what causes them with text and images. While there is work on emotion and stimulus detection for each modality separately, it is yet unknown if the modalities contain complementary emotion information in social media. We aim at filling this research gap and contribute a novel, annotated corpus of English multimodal Reddit posts. On this resource, we develop models to automatically detect the relation between image and text, an emotion stimulus category and the emotion class. We evaluate if these tasks require both modalities and find for the imagetext relations, that text alone is sufficient for most categories (complementary, illustrative, opposing): the information in the text allows to predict if an image is required for emotion understanding. The emotions of anger and sadness are best predicted with a multimodal model, while text alone is sufficient for disgust, joy, and surprise. Stimuli depicted by objects, animals, food, or a person are best predicted by image-only models, while multimodal models are most effective on art, events, memes, places, or screenshots. My everyday joy is to see my adorable cat smiles. 
And I've just realized, my cat can \"dance with music\". Amazing!", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The main task in emotion analysis in natural language processing is emotion classification into predefined sets of emotion categories, for instance, corresponding to basic emotions (fear, anger, joy, sadness, surprise, disgust, anticipation, and trust, Ekman, 1992; Plutchik, 1980) . In psychology, emotions are commonly considered a reaction to an event which consists of a synchronized change of organismic subsystems, namely neurophysiological changes, reactions, action tendencies, the subjective feeling, and a cognitive appraisal (Scherer et al., 2001) . These theories recently received increasing attention, for instance, by comparing the way how emotions are expressed, based on these components (Casel et al., 2021) , and by modelling emotions in dimensional models of affect (Buechel and Hahn, 2017) or appraisal (Hofmann et al., 2020) . Further, the acknowledgment of emotions as a reaction to some relevant event (Scherer, 2005) leads to the development of stimulus detection systems. This task is formulated in a tokenlabeling setup (Song and Meng, 2015; Bostan et al., 2020; Kim and Klinger, 2018; Ghazi et al., 2015; Oberl\u00e4nder and Klinger, 2020, i.a.) , as clause classification (Gui et al., , 2016 Gao et al., 2017; Xia and Ding, 2019; Oberl\u00e4nder and Klinger, 2020, i.a.) , or as a classification task into a predefined inventory of relevant stimuli (Mohammad et al., 2014) .", "cite_spans": [ { "start": 181, "end": 265, "text": "(fear, anger, joy, sadness, surprise, disgust, anticipation, and trust, Ekman, 1992;", "ref_id": null }, { "start": 266, "end": 281, "text": "Plutchik, 1980)", "ref_id": "BIBREF36" }, { "start": 536, "end": 558, "text": "(Scherer et al., 2001)", "ref_id": "BIBREF42" }, { "start": 705, "end": 725, "text": "(Casel et al., 2021)", "ref_id": "BIBREF7" }, { "start": 786, "end": 810, "text": "(Buechel and Hahn, 2017)", "ref_id": "BIBREF6" }, { "start": 824, "end": 846, "text": "(Hofmann et al., 2020)", "ref_id": "BIBREF19" }, { "start": 926, "end": 941, "text": "(Scherer, 2005)", "ref_id": "BIBREF41" }, { "start": 1047, "end": 1068, "text": "(Song and Meng, 2015;", "ref_id": "BIBREF46" }, { "start": 1069, "end": 1089, "text": "Bostan et al., 2020;", "ref_id": "BIBREF4" }, { "start": 1090, "end": 1112, "text": "Kim and Klinger, 2018;", "ref_id": "BIBREF20" }, { "start": 1113, "end": 1132, "text": "Ghazi et al., 2015;", "ref_id": "BIBREF14" }, { "start": 1133, "end": 1168, "text": "Oberl\u00e4nder and Klinger, 2020, i.a.)", "ref_id": null }, { "start": 1196, "end": 1215, "text": "(Gui et al., , 2016", "ref_id": "BIBREF16" }, { "start": 1216, "end": 1233, "text": "Gao et al., 2017;", "ref_id": "BIBREF13" }, { "start": 1234, "end": 1253, "text": "Xia and Ding, 2019;", "ref_id": "BIBREF52" }, { "start": 1254, "end": 1289, "text": "Oberl\u00e4nder and Klinger, 2020, i.a.)", "ref_id": null }, { "start": 1368, "end": 1391, "text": "(Mohammad et al., 2014)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In social media, users express emotions including text and images. Most attention has been devoted to Twitter, due to its easy-to-use API and popularity (Mohammad, 2012; Schuff et al., 2017; . 
However, this platform has a tendency to be text-focused, and has therefore not triggered too much attention towards other modalities. Although text may be informative enough to recognize an emotion in many cases, images may modulate the meaning, or sometimes solely convey the emotion itself (see examples in Figure 1 ). The growing popularity of vision-centered platforms like TikTok or Instagram, and lack of research on multimodal social media constitute a research gap.", "cite_spans": [ { "start": 153, "end": 169, "text": "(Mohammad, 2012;", "ref_id": "BIBREF30" }, { "start": 170, "end": 190, "text": "Schuff et al., 2017;", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 503, "end": 511, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "With this paper, we study how users on social media make use of images and text jointly to communicate their emotion and the stimulus of that emotion. We assume that linking depictions of stimuli to the text supports emotion recognition across modalities. We study multimodal posts on the social media platform Reddit 1 , given its wide adoption, the frequently found use of images and text, and the available programming interfaces to access the data (Baumgartner et al., 2020a) . Our goal is to understand how users choose to use an image (a) joy/complementary/animal. https://www.reddit.com/r/happy/ comments/j76dog/my_everyday_joy_ is_to_see_my_adorable_cat_smiles/ Don't move to Australia unless you can handle these bad boys (b) fear/complementary/animal. https://www.reddit.com/r/WTF/ comments/k2es5l/dont_move_ to_australia_unless_you_can_ handle/ why didn't it fall (c) surprise/complementary/object. https://www.reddit.com/r/What/comments/ exh0ms/why_didnt_it_fall/ in addition to text, and the role of the relation, the emotion, and the stimulus for this decision. Further we analyze if the classification performance benefits from a joint model across modalities. Figure 1 shows examples for Reddit posts. In Figure 1a , both image and text would presumably allow to infer the correct emotion even when considered in isolation. In Figure 1b , additional knowledge of the complementary role of the picture depicting an animal can inform an emotion recognition model. In Figure 1c the image alone would not be sufficient to infer the emotion, but the text alone is.", "cite_spans": [ { "start": 452, "end": 479, "text": "(Baumgartner et al., 2020a)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1176, "end": 1184, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 1221, "end": 1230, "text": "Figure 1a", "ref_id": "FIGREF0" }, { "start": 1343, "end": 1352, "text": "Figure 1b", "ref_id": "FIGREF0" }, { "start": 1481, "end": 1490, "text": "Figure 1c", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We therefore contribute (1) a new corpus of multimodal emotional posts from Reddit, which is annotated for authors' emotions, image-text relations, and emotion stimuli. We (2) analyze the relations of the annotated classes and find that certain emotions are likely to appear with certain relations and emotion stimuli. Further, we (3) use a transformer-based language model (pretrained RoBERTa model, Liu et al., 2019 ) and a residual neural network (Resnet50, He et al., 2016) to create classification models for the prediction of each of the three classes mentioned above. 
We analyze for which classification tasks multimodal models show an improvement over unimodal models. Our corpus is publicly available at https://www.ims.uni-stuttgart.de/data/mmemo.", "cite_spans": [ { "start": 401, "end": 417, "text": "Liu et al., 2019", "ref_id": "BIBREF25" }, { "start": 461, "end": 477, "text": "He et al., 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Emotion Analysis. Emotion analysis has a rich history in various domains, such as fairy tales (Alm et al., 2005) , email writing (Liu et al., 2003) , news headlines (Strapparava and Mihalcea, 2007) , or blog posts (Mihalcea and Liu, 2006; Aman and Szpakowicz, 2008; Neviarouskaya et al., 2010) . The focus of our study is on emotion analysis in social media, which has also received considerable attention (Purver and Battersby, 2012; Colneri\u010d and Dem\u0161ar, 2018; Mohammad, 2012; Schuff et al., 2017, i.a.) . Twitter 2 is a popular social media platform for emotion analysis, in both natural language processing (NLP) and computer vision. We point the reader to recent shared tasks for an overview of the methods that lead to the current state-of-the-art performance .", "cite_spans": [ { "start": 94, "end": 112, "text": "(Alm et al., 2005)", "ref_id": "BIBREF0" }, { "start": 129, "end": 147, "text": "(Liu et al., 2003)", "ref_id": "BIBREF24" }, { "start": 165, "end": 197, "text": "(Strapparava and Mihalcea, 2007)", "ref_id": "BIBREF47" }, { "start": 214, "end": 238, "text": "(Mihalcea and Liu, 2006;", "ref_id": "BIBREF29" }, { "start": 239, "end": 265, "text": "Aman and Szpakowicz, 2008;", "ref_id": "BIBREF1" }, { "start": 266, "end": 293, "text": "Neviarouskaya et al., 2010)", "ref_id": "BIBREF33" }, { "start": 406, "end": 434, "text": "(Purver and Battersby, 2012;", "ref_id": "BIBREF38" }, { "start": 435, "end": 461, "text": "Colneri\u010d and Dem\u0161ar, 2018;", "ref_id": "BIBREF8" }, { "start": 462, "end": 477, "text": "Mohammad, 2012;", "ref_id": "BIBREF30" }, { "start": 478, "end": 504, "text": "Schuff et al., 2017, i.a.)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "One of the questions that needs to be answered when developing an emotion classification system is that of the appropriate set of emotions. There are two main theories regarding emotion models in psychology that found application in NLP: discrete sets of emotions and dimensional models. Psychological models that provide discrete sets of emotions include Ekman's model of basic emotions (anger, disgust, surprise, joy, sadness, and fear, Ekman, 1992 ) and Plutchik's wheel of emotions (adding trust and anticipation, Plutchik, 1980 Plutchik, , 2001 . Dimensional models define where emotions lie in a vector space in which the dimensions have another meaning, including affect (Russell, 1980; Bradley et al., 1992) and cognitive event appraisal (Scherer, 2005; Hofmann et al., 2020; Shaikh et al., 2009) . In our study, we use the eight emotions from the Plutchik's wheel of emotions. Multimodal Analyses. The area of emotion analysis also received attention from the computer vi-sion community. A common approach is to use transfer learning from general image classifiers (He and Ding, 2019) or the analysis of facial emotion expressions, with features of muscle movement (De Silva et al., 1997) or deep learning (Li and Deng, 2020) . Dellagiacoma et al. (2011) use texture and color features to analyze social media content. 
Other useful properties of images for emotion analysis include the occurrence of people, faces, shapes of objects, and color distributions . Such in-depth analyses are related to stimulus detection. Peng et al. (2016) detect emotion-eliciting image regions. They show, on a Flickr image dataset, that not only objects (Wu et al., 2020) and salient regions (Zheng et al., 2017) have an impact on elicited emotions, but also contextual background. Yang et al. (2018) , inter alia, show that it is beneficial for emotion classification to explicitly integrate visual information from emotion-eliciting regions. Similarly, Fan et al. (2018) study the relationship between emotion-eliciting image content and human visual attention.", "cite_spans": [ { "start": 439, "end": 450, "text": "Ekman, 1992", "ref_id": "BIBREF11" }, { "start": 518, "end": 532, "text": "Plutchik, 1980", "ref_id": "BIBREF36" }, { "start": 533, "end": 549, "text": "Plutchik, , 2001", "ref_id": "BIBREF37" }, { "start": 678, "end": 693, "text": "(Russell, 1980;", "ref_id": "BIBREF40" }, { "start": 694, "end": 715, "text": "Bradley et al., 1992)", "ref_id": "BIBREF5" }, { "start": 746, "end": 761, "text": "(Scherer, 2005;", "ref_id": "BIBREF41" }, { "start": 762, "end": 783, "text": "Hofmann et al., 2020;", "ref_id": "BIBREF19" }, { "start": 784, "end": 804, "text": "Shaikh et al., 2009)", "ref_id": "BIBREF44" }, { "start": 1074, "end": 1093, "text": "(He and Ding, 2019)", "ref_id": "BIBREF18" }, { "start": 1174, "end": 1197, "text": "(De Silva et al., 1997)", "ref_id": "BIBREF9" }, { "start": 1215, "end": 1234, "text": "(Li and Deng, 2020)", "ref_id": "BIBREF23" }, { "start": 1646, "end": 1663, "text": "(Wu et al., 2020)", "ref_id": "BIBREF51" }, { "start": 1684, "end": 1704, "text": "(Zheng et al., 2017)", "ref_id": "BIBREF56" }, { "start": 1774, "end": 1792, "text": "Yang et al. (2018)", "ref_id": "BIBREF53" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Image-Text Relation. A set of work aimed at understanding the relation between images and text. Marsh and White (2003) establish a taxonomy of 49 functions of illustrations relative to text in US government publications. The relations contain categories like \"elicit emotion\", \"motivate\", \"explains\", or \"compares\" and \"contrasts\". Martinec and Salway (2005) aim at understanding both the role of an image and of text.", "cite_spans": [ { "start": 332, "end": 358, "text": "Martinec and Salway (2005)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "In contrast to these studies which did not develop machine learning approaches, Zhang et al. (2018) develop automatic classification methods for detection of relations between the image and a slogan in advertisments. They detect if the image and the text make the same point, if one modality is unclear without the other, if the modalities, when considered separately, imply opposing ideas, and if one of the modalities is sufficient to convey the message. Weiland et al. (2018) focus on detecting if captions of images contain complementary information. Vempala and Preo\u0163iuc-Pietro (2019) infer relationship categories between the text and image of Twitter posts to see how the meaning of the entire tweet is composed. Kruk et al. 
(2019) focus on understanding the intent of the author of an Instagram post and develop a hierarchy of classes, namely advocative, promotive, exhibitionist, expressive, informative, entertainment, provoca-tive/discrimination, and provocative/controversial. They also analyze the relation between the modalities with the classes divergent, additive, or parallel. Our work is similar to the two previously mentioned papers, as the detection which emotion is expressed with a post is related to intent understanding.", "cite_spans": [ { "start": 80, "end": 99, "text": "Zhang et al. (2018)", "ref_id": "BIBREF54" }, { "start": 457, "end": 478, "text": "Weiland et al. (2018)", "ref_id": "BIBREF50" }, { "start": 555, "end": 589, "text": "Vempala and Preo\u0163iuc-Pietro (2019)", "ref_id": "BIBREF48" }, { "start": 720, "end": 738, "text": "Kruk et al. (2019)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "To study the roles of images in social media posts, we create an annotated Reddit dataset with labels of emotions, text-image relations, and emotion stimuli. We first discuss our label sets and then explain the data collection and annotation procedures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Corpus Creation", "sec_num": "3" }, { "text": "We define taxonomies for the emotion, relation, and stimulus tasks. Emotion Classification. To classify social media posts in terms of what emotion the author likely felt when creating the post, we use the Plutchik's wheel of emotions as the eight labels in our annotation scheme, namely anger, anticipation, joy, sadness, trust, surprise, fear, and disgust. Relation Classification. To develop a classification scheme of relations of emotion-eliciting imagetext pairs, we randomly sampled 200 posts, and created a simple annotation environment for preliminary annotation that displayed an image-text pair next to questions to be answered (see Figure 6 in the Appendix). Based on the preliminary annotation, we propose the following set of relation categories. 1. complementary: the image is necessary to understand the author's emotion; the text alone is not sufficient but when coupled with the image, the emotion is clear; 2. illustrative: the image illustrates the text but the text alone is enough to understand the emotion; the image does not communicate the emotion on its own; 3. opposite: the image and the text pull in different directions; they are contradicting when taken separately, but when together, the emotion is clear; 4. decorative: the image is used for aesthetic purposes; the emotion is primarily communicated with the text while the image may seem unrelated; 5. emotion is communicated with image only: the text is redundant for emotion communication.", "cite_spans": [], "ref_spans": [ { "start": 644, "end": 653, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Taxonomies", "sec_num": "3.1" }, { "text": "We show examples for the complementary and illustrative relations in Figure 2 . An example for the opposite relation could be an image with an ugly creature with a text \"isn't he the prettiest thing", "cite_spans": [], "ref_spans": [ { "start": 69, "end": 77, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Taxonomies", "sec_num": "3.1" }, { "text": "I drew this (a) Relation: complemen- tary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Taxonomies", "sec_num": "3.1" }, { "text": "https://www.reddit. 
com/r/sad/comments/ jxgoxj/i_drew_this/ This semester has kicked me in a way none other has. Never cleaned my room until today. Forgot how big it could actually be. It's the little things (b) Relation: illustrative. https://www.reddit.com/r/ happy/comments/jwje64/this_ semester_has_kicked_me_in_ a_way_none_other/ in the world\". Posts in which the text and the image are essentially unrelated fall into the decorative category. Posts where images have inspirational texts like \"No Happiness is Ever Wasted\" and the text contains the same words would fall into the last category (image-only). Stimulus Classification. Based on the preliminary annotation procedure described for the relation taxonomy, we further obtain the following categories for emotion stimuli in images of multimodal posts: person/people, animal, object, food, meme, screenshot/text in image, art/drawing, advertisement, event/situation, and place. We provide examples of all stimuli in the Appendix in Figure 5 .", "cite_spans": [], "ref_spans": [ { "start": 994, "end": 1002, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Taxonomies", "sec_num": "3.1" }, { "text": "We collect our multimodal data from Reddit, where posts are published under specific subreddits, usercreated areas of interest, and are usually related to the topic of the group. Our data comes from 15 subreddits which we found by searching for emotion names. These subreddits are \"happy\", \"happiness\", \"sad\", \"sadness\", \"anger\", \"angry\", \"fear\", \"disgusting\", \"surprise\", \"what\", \"WTF\", \"Cringetopia\", \"MadeMeSmile\", \"woahdude\", which we complement by \"r/all\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.2" }, { "text": "We collect the data from the Pushshift Reddit Dataset, a collection of posts and comments from Reddit from 2015 (Baumgartner et al., 2020b) , with the help of the Pushshift-API 3 . We only consider posts which have both text and an image. From the initial set of instances that we collected (5,363) we manually removed those with images of low quality, pornographic and sexually inappropriate content, spam, or in a language other than English.", "cite_spans": [ { "start": 112, "end": 139, "text": "(Baumgartner et al., 2020b)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Data Collection", "sec_num": "3.2" }, { "text": "We developed the annotation task with a subsample of 400 posts in a preliminary experiment. It was performed by two groups of three students and with a direct interaction with the authors of this paper, to obtain an understandable and unambiguous formulation of the questions that we used for the actual crowdsourcing annotation. The actual annotation of 1,380 randomly sampled posts was then performed with Amazon Mechanical Turk (AMT 4 ) in two phases. In the first phase, we identify posts which likely contain an emotion by asking 1. Does the author want to express an emotion with the post? In the second phase, we collect annotations for posts which contain an emotion (we accept a post if 1/3 of the annotators marked it as emotional) and ask 2. What emotion did the author likely feel when writing this post? 3. What is the relation between the image and the text regarding emotion communication? 4. What is it in the image that triggers the emotion? For both phases/experiments, we gather annotations by three annotators. All questions allow one single answer. 
We show the annotation interface on Amazon Mechanical Turk for the second phase in the Appendix in Figure 7 .", "cite_spans": [], "ref_spans": [ { "start": 1169, "end": 1177, "text": "Figure 7", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Data Annotation", "sec_num": "3.3" }, { "text": "For the modelling which we describe in Section 4, we use a union of all labels from all annotators, acknowledging the subjective nature of the annotation task. This leads to multi-label classification, despite the annotation being a single-label annotation task. Quality Assurance and Annotator Prescreening. Each potential annotator must reside in a predominantly English-speaking country (Australia, Canada, Ireland, New Zealand, United Kingdom, United States), and have an AMT approval rate of at least 90 %. Further, before admitting annotators to each annotation phase, we showed them five manually selected posts that we considered to be straightforward to annotate. For each phase, annotators needed to correctly answer 80 % of the questions associated with those posts. Phase 1 had a 100 % acceptance rate; in Phase 2 this qualifica- Table 1 : Corpus statistics for emotions, relations, and stimuli. \"\u2265 1\", \"\u2265 2\", \"= 3\" means that at least one, at least two, and all three annotators labeled the post with the respective emotion respectively. The overall number of posts that were annotated in Phase 1 is 1,380, and 1,054 for Phase 2. \u03ba refers to Fleiss' kappa.", "cite_spans": [], "ref_spans": [ { "start": 842, "end": 849, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Data Annotation", "sec_num": "3.3" }, { "text": "tion test had a 55 % acceptance rate. We summarize participation and qualification statistics in Tables 4 and 5 in the Appendix. Annotators and Payment. Altogether, 75 distinct annotators participated in Phase 1, and 38 annotators worked in Phase 2. We paid $0.02 for each post in Phase 1, and $0.08 for each post in Phase 2. The average time to annotate one post was 16 and 38 seconds in Phase 1 and 2, respectively. This leads to an average overall hourly wage of $7. Overall, we paid $337.44 to annotators and $105.06 for platform fees and taxes.", "cite_spans": [], "ref_spans": [ { "start": 97, "end": 112, "text": "Tables 4 and 5", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Data Annotation", "sec_num": "3.3" }, { "text": "In total, 1,380 posts were annotated via AMT (we do not discuss the preliminary annotations here).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "All results are summarized in Table 1 . Did the author want to express an emotion with the post? The total agreement of all three annotators (=3) was achieved in 47 % of the time (652 posts out of 1380). The overall inter-annotator agreement for this question is fair, with Fleiss \u03ba=.3.", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "We consider this value to be acceptable for a prefiltering step to remove clearly non-emotional posts for the actual annotation in the next phase. Of the 1,380 posts in Phase 1, 1,061 were labeled as \"emotion\", of which seven were flagged as being problematic by annotators (see Figure 7 in Appendix). Therefore, in total, 1,054 posts are considered for Phase 2. What emotion did the author likely feel when writing this post? 
Table 1 gives the individual counts of instances that received a particular emotion label by at least one, two, or all three annotators. Note that the overall number of instances can be greater than the number of instances in the case that annotators disagree. Joy, surprise and disgust are the more frequent classes, with 585, 435, and 268 posts that received this label by at least one annotator. The number of posts in which at least two annotators agreed is considerably higher for joy than for the other emotions, which is also reflected in the moderate overall inter-annotator agreement with Fleiss \u03ba=.47. For most classes, the agreement is moderate, with some exceptions (anger is often conflated with disgust as we will see below, and anticipation, and trust).", "cite_spans": [], "ref_spans": [ { "start": 279, "end": 287, "text": "Figure 7", "ref_id": "FIGREF3" }, { "start": 427, "end": 434, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "The agreement, however, can be considered to be similar to what has been achieved in other (crowdsourcing-based) annotation studies. As examples, Purver and Battersby (2012) report an agreement accuracy of 47 %. Schuff et al. (2017) report an agreement of less than 10 % when a set of 6 annotators needed to label an instance with the same emotion (but higher agreements for subsets of annotators). What is the relation between the image and the text regarding emotion communication? The most dominant relations in our dataset are complementary (1,042 instances in which one annotator decided for this label) and illustrative (476). There are fewer instances in which annotators marked the relation opposite (28), decorative (124) and that the text is not required to infer the emotion (142).", "cite_spans": [ { "start": 212, "end": 232, "text": "Schuff et al. (2017)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "The inter-annotator agreement is low, due to the skewness of the dataset and a therefore high expected agreement: overall, we only achieve \u03ba=.04. Note that this inbalanced corpus poses a challenge in the results described in Section 5. What is it in the image that triggers the emotion? The emotion stimuli categories are more balanced: Most frequently, people comment on what we classify as screenshots (528 out of 1054 received this label by at least one annotator), followed by depictions of people (260), objects (211), pieces of art (157), and depictions of animals (146). The agreement is moderate with an overall \u03ba=.53. The labels place and advertisement are underrepresented in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "Cooccurrences. We now turn to the question which of the variables of the emotion category, the relation, and the stimulus category cooccur. Figure 3 shows the results with absolute counts above the diagonal, and odds-ratio values for the cooccurrence of multiple emotions annotated by different annotators below the diagonal (details regarding the calculation can be found in Schuff et al., 2017) . The emotion combinations of joy-surprise (150 times), surprise-disgust (126), surprise-anger (63), and disgust-anger (62 times) are most often used. This is presumably an effect of the fact that people share information on social media that they find newsworthy. 
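For reference, the odds ratio for a pair of emotion labels can be derived from the 2x2 contingency table over posts. The snippet below is a minimal sketch of this standard formulation; the exact calculation behind Figure 3 follows Schuff et al. (2017) and may differ in details such as smoothing.

```python
# Minimal sketch of a pairwise odds-ratio computation over annotated posts.
# `post_labels` is a hypothetical list with one set of emotion labels per post.
def odds_ratio(post_labels, a, b, smoothing=0.5):
    n11 = n10 = n01 = n00 = 0
    for labels in post_labels:
        has_a, has_b = a in labels, b in labels
        if has_a and has_b:
            n11 += 1
        elif has_a:
            n10 += 1
        elif has_b:
            n01 += 1
        else:
            n00 += 1
    # Haldane-Anscombe correction avoids division by zero for rare pairs.
    n11, n10, n01, n00 = (n + smoothing for n in (n11, n10, n01, n00))
    return (n11 * n00) / (n10 * n01)

# Example: specificity of the disgust-anger combination.
# odds_ratio(posts, "disgust", "anger")
```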
Further, this shows the role of surprise in combination with both positive and negative emotions-as common in emotion annotations to limit ambiguity, we modelled the task in a singlelabel annotation setup. Therefore, this shows that different interpretations of the same post are possible.", "cite_spans": [ { "start": 376, "end": 396, "text": "Schuff et al., 2017)", "ref_id": "BIBREF43" } ], "ref_spans": [ { "start": 140, "end": 148, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "The odds-ratio values point out the specificity of the combination of disgust-anger. This could be explained with the difference of these emotions regarding their motivational component, namely to tackle a particular stimulus or to avoid it (known as the fight-or-flight response). The combination of sadness-fear can be explained with the importance of the confirmation status of a stimulus (future or past) which distinguishes these two emotions. This property might be ambiguous in depictions in social media. The combinations of fear-anticipation and fear-trust might be considered surprising. Such combinations of positive and negative emotions frequently occur in motivational text depictions, for instance \"don't be afraid of your fears\".", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "We show the cooccurrence counts and odds ratios for the stimulus and the emotion in Figure 4 . For the emotions anger, the stimuli of advertisments and screenshots are outstanding. Anticipation has the highest value for art. Disgust is particularly specific for food and advertisement. This shows the metaphoric use of the term (in the sense of repugnance) and a more concrete use (in the sense of revulsion). Interestingly, fear is spe- Figure 3: Emotion-Emotion cooccurrences. The values above the diagonal are absolute counts, while the numbers below the diagonal are odds ratios. I higher value denotes that the combination is particular specific.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 92, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "cific for stimuli of animals, art, and memes. Joy is the only emotion that has a high odds ratio with places, and persons, but also with animals. Sadness and trust have the highest value for memes. We do not discuss the relation category further, given the predominance of the complementary class and its limited inter-annotator agreement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistics of Annotated Dataset", "sec_num": "3.4" }, { "text": "In the following, we present the models that we used to predict (1) each variable (emotion, stimulus, relation) separately in each modality, and (2) across modalities with joint models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Methods", "sec_num": "4" }, { "text": "For the text-based model, we fine-tune the pretrained RoBERTa model 5 (Liu et al., 2019) . We perform multi-task learning for emotion, stimulus and relation by adding a fully connected layer (for each set of labels), on top of the last hidden layer. The model combines the loss for all three sets of labels and updates the weights accordingly during the training phase. 
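As a rough illustration of this multi-task setup, the sketch below places three independent classification heads on a shared pretrained RoBERTa encoder and sums their binary cross-entropy losses. It is an assumed implementation, not the authors' released code; module names, label counts, and the pooling choice are our own choices.

```python
# Minimal sketch of the multi-task text model (assumed implementation):
# one shared RoBERTa encoder, one fully connected head per label set.
import torch.nn as nn
from transformers import RobertaModel

class MultiTaskTextClassifier(nn.Module):
    def __init__(self, n_emotions=8, n_relations=5, n_stimuli=10):
        super().__init__()
        self.encoder = RobertaModel.from_pretrained("roberta-base")
        hidden = self.encoder.config.hidden_size
        self.emotion_head = nn.Linear(hidden, n_emotions)
        self.relation_head = nn.Linear(hidden, n_relations)
        self.stimulus_head = nn.Linear(hidden, n_stimuli)
        # BCEWithLogitsLoss = sigmoid + binary cross-entropy, for multi-label targets.
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, input_ids, attention_mask,
                emotion_y=None, relation_y=None, stimulus_y=None):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # first-token pooling (assumption)
        logits = {"emotion": self.emotion_head(pooled),
                  "relation": self.relation_head(pooled),
                  "stimulus": self.stimulus_head(pooled)}
        loss = None
        if emotion_y is not None:
            # Losses of the three label sets are combined into a single objective.
            loss = (self.loss_fn(logits["emotion"], emotion_y)
                    + self.loss_fn(logits["relation"], relation_y)
                    + self.loss_fn(logits["stimulus"], stimulus_y))
        return logits, loss
```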
6 We use a learning rate of 3 \u2022 10 \u22125 for all layers, except for the top three fully connected ones (3 \u2022 10 \u22123 ). We use the learning rate scheduler with a step size of 5 and train for maximally 20 epochs, but perform early stopping if the validation loss does not improve by more than 0.005%. ", "cite_spans": [ { "start": 70, "end": 88, "text": "(Liu et al., 2019)", "ref_id": "BIBREF25" }, { "start": 370, "end": 371, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text", "sec_num": "4.1" }, { "text": "We built the image-based model on top of a pretrained deep residual network model with 48 convolutional layers and 2 pooling layers (ResNet, He et al., 2016) . We use the ResNet50 that is provided by PyTorch 7 and was pretrained on 1,000 ImageNet categories (Russakovsky et al., 2015; Deng et al., 2009) . As with the text-based model, we add three fully connected layers on top of the fully connected layer of the ResNet50 model, with the sigmoid activation function. Unlike RoBERTa, we do not fine-tune the convolutional layers to prevent the pre-trained weights to change. 8", "cite_spans": [ { "start": 132, "end": 157, "text": "(ResNet, He et al., 2016)", "ref_id": null }, { "start": 258, "end": 284, "text": "(Russakovsky et al., 2015;", "ref_id": "BIBREF39" }, { "start": 285, "end": 303, "text": "Deng et al., 2009)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Images", "sec_num": "4.2" }, { "text": "We evaluate three simple multimodal methods which combine the information from the text and the image modality on the traditional three different stages: early, late, and model-based fusion (Snoek et al., 2005) . In early (feature-based) fusion, the features extracted from both modalities are fused at an early stage and passed through a classifier. As the input, our early-fusion model takes the tokenized text and preprocessed image (images are resized, converted to tensors, and normalized by the mean and standard deviation 9 ), and concatenates them into one vector to pass through the final classifier, that consists of several layers (three linear, dropout, and three fully connected layers) with the input size depending on the longest text in the training set and output size depending on the task. The activation function is, as in all our models, a sigmoid function.", "cite_spans": [ { "start": 190, "end": 210, "text": "(Snoek et al., 2005)", "ref_id": "BIBREF45" } ], "ref_spans": [], "eq_spans": [], "section": "Joint Models", "sec_num": "4.3" }, { "text": "In late (decision-based) fusion, classification scores are obtained for each modality separately. These scores are then fed into the joint model. In our late-fusion model, we pass the text and image through the text-based and image-based models respectively, and concatenate the output probabilities of these models. 10 We then pass this vector through a fully connected layer with twice the number of classes from the two models as input and output, and apply sigmoid for prediction. 
That is, for the emotion classification, the vectors of eight labels from RoBERTa and ResNet50, summing up to 16, are passed to the fully connected layer.", "cite_spans": [ { "start": 317, "end": 319, "text": "10", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Joint Models", "sec_num": "4.3" }, { "text": "For model-based fusion, we extract text and image features from our unimodal text and imagebased classifiers, respectively (from the last hidden layers before the fully conntected ones), and feed these to a final classifier. 11", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Models", "sec_num": "4.3" }, { "text": "We evaluate our models on predicting emotions, text-image relations, and emotion stimuli using unimodal and multimodal models, based on the F 1 measure. We use the dataset of 1054 instances in which we aggregate the labels from the three annotators by accepting a label if one annotator assigned it (this approach might be considered a \"high-recall\" aggregation of the labels, similar to Schuff et al. (2017) ). Despite being a single-label annotation task, this leads to a multi-label classification setup.", "cite_spans": [ { "start": 388, "end": 408, "text": "Schuff et al. (2017)", "ref_id": "BIBREF43" } ], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "In other words, the annotation process requires annotators to select a single label (for each set of labels), e.g. one emotion per post; however, the experiments are conducted using multiple labels per set, depending on how many labels are given by three annotators for each set of labels. The data is randomly split into 853 instances for training, 95 instances for validation, and 106 test instances. Table 2 summarizes the results, averaging across the values for each class variable. We observe that the emotions and the relations can be predicted with the highest F 1 with the text-based unimodal model. The discrepancy to the image-based model is substantial, with .53 to .41 for the emotions and .77 to .67 for the relations. The stimulus detection benefits from the multimodal information from both the image and the text-the highest performance, .63, is achieved with the model-based fusion approach. From the unimodal models, the image-based model is performing better than the text-based model. This is not surprising-in multimodal social media posts that express an emotion, the depictions predominantly correspond to a stimulus, or their identification is at least important. The corpus statistics show that: posts in which the image is purely used decoratively are the minority. Table 3 shows detailed per-label results. For the emotion classification task, we see that for three emotions, the text-only model leads to the best performance (disgust, joy, trust, while the latter is too low to draw a conclusion regarding the importance of the modalities). The other emotions benefit from a multimodal approach. Overall, still, the text- based model shows highest average performance, given the dominance of the emotion joy. For most stimulus categories, either the image or a multimodal model performs best. This is not surprising, given that the stimulus is often depicted in the visual part of a multimodal post. More complex depictions that could receive various evaluations, like art, events/situations, and memes require multimodal information. In those, the image information alone is not sufficient-the performance difference is between 22pp and 13pp in F 1 . 
For those stimuli, in which the text-based model outperforms the multimodal models, the difference is lower. The text-based model is never performing best, but shows acceptable performance for animals, memes, screenshots and person depictions.", "cite_spans": [], "ref_spans": [ { "start": 403, "end": 410, "text": "Table 2", "ref_id": "TABREF4" }, { "start": 1293, "end": 1300, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "Regarding the relations, the complementary class is predicted with the best performance; which is due to the frequency of this class. The label decorative can only be predicted with a (slightly) acceptable performance with the multimodal approach, while illustrative predictions based on text-only are nearly en par with a multimodal model. From the three multimodal fusion approaches, early fusion performs the worst, followed by late fusion. Model-based fusion most often leads to the best result. We show examples for instances in which the multimodal model performs better than unimodal models in Table 6 in the Appendix.", "cite_spans": [], "ref_spans": [ { "start": 601, "end": 608, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5" }, { "text": "With this paper, we presented the first study on how users in social media make use of text and images to communicate their emotions. We have seen that the number of multimodal posts in which the image does not contribute additional information over the text is in the minority, and, hence, interpretation of images in addition to the text is important. While the inter-annotator agreement for relation was not reliable enough to draw this conclusion, prediction of stimulus correlates with prediction of emotion due to the information that is present in the image but missing in the text, and thus makes images play a significant role in analysis of social media posts. This is also the first study on stimulus detection in multimodal posts, and we have seen that for the majority of stimulus categories, the information in the text is not sufficient. In contrast to most work on emotion stimulus and cause detection in NLP, we treated this task as a discrete classification task, similar to early work in targeted sentiment analysis. An interesting step in the future will be to join segment-based open domain stimulus detection, as it is common in text analysis, with region-based image analysis, and ground the textual references in the image. This will allow to go beyond predefined categories. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "Description Qualification test 1: emotional/nonemotional posts 5 posts presented to annotators to label the post emotional or non-emotional; passing score of 80% Qualification test 2: emotion, relation, stimulus identification 5 posts presented to annotators to label the post for emotions, relations, and stimuli; passing score of 80% Region Annotators must reside in either of the six English-speaking countries (Australia, Canada, Ireland, New Zealand, United Kingdom, United States) to force the task to be done by native speakers. Human Intelligence Task (HIT) approval rate The HIT approval rate represents the proportion of completed tasks that are approved by Requesters and ensures the quality of the job workers do on the platform. 
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Qualification", "sec_num": null }, { "text": "Participation Attempted Passed From previous task New Task 1 75 75 -75 Task 2 69 38 17 21 Table 5 : Statistics on participation for the two tasks. All numbers are the counts of workers. Qualification tests are described in Table 4 . Attempted are the number of workers that took the qualification test, while passed is the number of workers that answered at least 80% of the questions correctly. From previous task refers to the number of workers that participated in Phase 1 as well as Phase 2, while new are the participants that have not participated in the previous phase.", "cite_spans": [], "ref_spans": [ { "start": 54, "end": 107, "text": "Task 1 75 75 -75 Task 2 69 38 17 21 Table 5", "ref_id": "TABREF4" }, { "start": 233, "end": 240, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Qualification", "sec_num": null }, { "text": "https://www.reddit.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.twitter.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.github.com/pushshift/api", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://www.mturk.com/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://huggingface.co/transformers/model_doc/roberta.html 6 Our first choice of only one layer performed en par to multiple stacked layers.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://pytorch.org/hub/pytorch_vision_resnet/8 We performed experiments with unfreezing several top convolutional layers, however, it did not lead to better results. 9 https://pytorch.org/vision/stable/transforms.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Experiments with summed vectors did not improve results.11 Experiments with more complex models with multiple top layers did not improve results, thus, we chose a single-layeron-top model for the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Michela Dellagiacoma, Pamela Zontone, Giulia Boato, and Liliana Albertazzi. 2011. Emotion based classification of natural images. In Proceedings of the 2011 international workshop on DETecting and Exploiting Cultural diversiTy on the social web, DE-TECT '11, page 17-22, New York, NY, USA. Association for Computing Machinery.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was supported by Deutsche Forschungsgemeinschaft (project CEAT, KL 2869/1-2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "I lost my smile for a while. Just felt happy today first time in a long time. Table 6 : Examples in which the multimodal model-based model returns the correct result, but at least one unimodal model does not. 
\"-\" means that the model was not confident enough to predict any of the labels from the set.", "cite_spans": [], "ref_spans": [ { "start": 78, "end": 85, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "A Appendix", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emotions from text: Machine learning for text-based emotion prediction", "authors": [ { "first": "Cecilia", "middle": [], "last": "Ovesdotter Alm", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2005, "venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "579--586", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. In Proceed- ings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 579-586, Vancouver, British Columbia, Canada. Association for Compu- tational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Using Roget's thesaurus for fine-grained emotion recognition", "authors": [ { "first": "Saima", "middle": [], "last": "Aman", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Third International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saima Aman and Stan Szpakowicz. 2008. Using Ro- get's thesaurus for fine-grained emotion recognition. In Proceedings of the Third International Joint Con- ference on Natural Language Processing: Volume-I.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "The pushshift reddit dataset", "authors": [ { "first": "Jason", "middle": [], "last": "Baumgartner", "suffix": "" }, { "first": "Savvas", "middle": [], "last": "Zannettou", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Keegan", "suffix": "" }, { "first": "Megan", "middle": [], "last": "Squire", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Blackburn", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the International AAAI Conference on Web and Social Media", "volume": "14", "issue": "", "pages": "830--839", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020a. The pushshift reddit dataset. In Proceedings of the Inter- national AAAI Conference on Web and Social Media, volume 14, pages 830-839.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The pushshift reddit dataset", "authors": [ { "first": "Jason", "middle": [], "last": "Baumgartner", "suffix": "" }, { "first": "Savvas", "middle": [], "last": "Zannettou", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Keegan", "suffix": "" }, { "first": "Megan", "middle": [], "last": "Squire", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Blackburn", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the international AAAI conference on web and social media", "volume": "14", "issue": "", "pages": "830--839", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jason Baumgartner, Savvas Zannettou, Brian Keegan, Megan Squire, and Jeremy Blackburn. 2020b. 
The pushshift reddit dataset. In Proceedings of the inter- national AAAI conference on web and social media, volume 14, pages 830-839.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "GoodNewsEveryone: A corpus of news headlines annotated with emotions, semantic roles, and reader perception", "authors": [ { "first": "Laura", "middle": [ "Ana" ], "last": "", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Bostan", "suffix": "" }, { "first": "Evgeny", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2020, "venue": "Proceedings of The 12th Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "1554--1566", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Ana Maria Bostan, Evgeny Kim, and Roman Klinger. 2020. GoodNewsEveryone: A corpus of news headlines annotated with emotions, semantic roles, and reader perception. In Proceedings of The 12th Language Resources and Evaluation Con- ference, pages 1554-1566, Marseille, France. Euro- pean Language Resources Association.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Remembering pictures: pleasure and arousal in memory", "authors": [ { "first": "M", "middle": [], "last": "Margaret", "suffix": "" }, { "first": "", "middle": [], "last": "Bradley", "suffix": "" }, { "first": "K", "middle": [], "last": "Mark", "suffix": "" }, { "first": "Margaret", "middle": [ "C" ], "last": "Greenwald", "suffix": "" }, { "first": "Peter J", "middle": [], "last": "Petry", "suffix": "" }, { "first": "", "middle": [], "last": "Lang", "suffix": "" } ], "year": 1992, "venue": "Journal of experimental psychology: Learning, Memory, and Cognition", "volume": "18", "issue": "2", "pages": "", "other_ids": { "DOI": [ "10.1037//0278-7393.18.2.379" ] }, "num": null, "urls": [], "raw_text": "Margaret M Bradley, Mark K Greenwald, Margaret C Petry, and Peter J Lang. 1992. Remembering pic- tures: pleasure and arousal in memory. Journal of experimental psychology: Learning, Memory, and Cognition, 18(2):379.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "EmoBank: Studying the impact of annotation perspective and representation format on dimensional emotion analysis", "authors": [ { "first": "Sven", "middle": [], "last": "Buechel", "suffix": "" }, { "first": "Udo", "middle": [], "last": "Hahn", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 15th Conference of the European Chapter", "volume": "2", "issue": "", "pages": "578--585", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sven Buechel and Udo Hahn. 2017. EmoBank: Study- ing the impact of annotation perspective and repre- sentation format on dimensional emotion analysis. In Proceedings of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 578-585, Valencia, Spain. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Emotion recognition under consideration of the emotion component process model", "authors": [ { "first": "Felix", "middle": [], "last": "Casel", "suffix": "" }, { "first": "Amelie", "middle": [], "last": "Heindl", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021)", "volume": "", "issue": "", "pages": "49--61", "other_ids": {}, "num": null, "urls": [], "raw_text": "Felix Casel, Amelie Heindl, and Roman Klinger. 2021. Emotion recognition under consideration of the emo- tion component process model. In Proceedings of the 17th Conference on Natural Language Process- ing (KONVENS 2021), pages 49-61, D\u00fcsseldorf, Germany. KONVENS 2021 Organizers.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Emotion recognition on twitter: Comparative study and training a unison model", "authors": [ { "first": "Niko", "middle": [], "last": "Colneri\u010d", "suffix": "" }, { "first": "Janez", "middle": [], "last": "Dem\u0161ar", "suffix": "" } ], "year": 2018, "venue": "IEEE transactions on affective computing", "volume": "11", "issue": "3", "pages": "433--446", "other_ids": { "DOI": [ "10.1109/TAFFC.2018.2807817" ] }, "num": null, "urls": [], "raw_text": "Niko Colneri\u010d and Janez Dem\u0161ar. 2018. Emotion recognition on twitter: Comparative study and train- ing a unison model. IEEE transactions on affective computing, 11(3):433-446.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Facial emotion recognition using multi-modal information", "authors": [ { "first": "C", "middle": [ "De" ], "last": "Liyanage", "suffix": "" }, { "first": "Tsutomu", "middle": [], "last": "Silva", "suffix": "" }, { "first": "Ryohei", "middle": [], "last": "Miyasato", "suffix": "" }, { "first": "", "middle": [], "last": "Nakatsu", "suffix": "" } ], "year": 1997, "venue": "Proceedings of ICICS, 1997 International Conference on Information, Communications and Signal Processing", "volume": "1", "issue": "", "pages": "397--401", "other_ids": { "DOI": [ "10.1109/ICICS.1997.647126" ] }, "num": null, "urls": [], "raw_text": "Liyanage C. De Silva, Tsutomu Miyasato, and Ry- ohei Nakatsu. 1997. Facial emotion recognition using multi-modal information. In Proceedings of ICICS, 1997 International Conference on In- formation, Communications and Signal Processing. Theme: Trends in Information Systems Engineer- ing and Wireless Multimedia Communications (Cat., volume 1, pages 397-401 vol.1.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Imagenet: A large-scale hierarchical image database", "authors": [ { "first": "Jia", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Dong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" }, { "first": "Li-Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Li", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2009, "venue": "2009 IEEE Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "248--255", "other_ids": { "DOI": [ "10.1109/CVPR.2009.5206848" ] }, "num": null, "urls": [], "raw_text": "Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hier- archical image database. 
In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An argument for basic emotions", "authors": [ { "first": "Paul", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1992, "venue": "Cognition & emotion", "volume": "6", "issue": "3-4", "pages": "169--200", "other_ids": { "DOI": [ "10.1080/02699939208411068" ] }, "num": null, "urls": [], "raw_text": "Paul Ekman. 1992. An argument for basic emotions. Cognition & emotion, 6(3-4):169-200.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Emotional attention: A study of image sentiment and visual attention", "authors": [ { "first": "Zhiqi", "middle": [], "last": "Shaojing Fan", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Bryan", "middle": [ "L" ], "last": "Jiang", "suffix": "" }, { "first": "Juan", "middle": [], "last": "Koenig", "suffix": "" }, { "first": "Mohan", "middle": [ "S" ], "last": "Xu", "suffix": "" }, { "first": "Qi", "middle": [], "last": "Kankanhalli", "suffix": "" }, { "first": "", "middle": [], "last": "Zhao", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "7521--7531", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00785" ] }, "num": null, "urls": [], "raw_text": "Shaojing Fan, Zhiqi Shen, Ming Jiang, Bryan L. Koenig, Juan Xu, Mohan S. Kankanhalli, and Qi Zhao. 2018. Emotional attention: A study of image sentiment and visual attention. In 2018 IEEE/CVF Conference on Computer Vision and Pat- tern Recognition, pages 7521-7531.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Overview of NTCIR-13 ECA task", "authors": [ { "first": "Qinghong", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jiannan", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Gui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Kam-Fai", "middle": [], "last": "Wong", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 13th NTCIR Conference on Evaluation of Information Access Technologies", "volume": "", "issue": "", "pages": "361--366", "other_ids": {}, "num": null, "urls": [], "raw_text": "Qinghong Gao, Jiannan Hu, Ruifeng Xu, Gui Lin, Yulan He, Qin Lu, and Kam-Fai Wong. 2017. Overview of NTCIR-13 ECA task. In Proceed- ings of the 13th NTCIR Conference on Evaluation of Information Access Technologies, pages 361-366, Tokyo, Japan.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Detecting emotion stimuli in emotion-bearing sentences", "authors": [ { "first": "Diman", "middle": [], "last": "Ghazi", "suffix": "" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2015, "venue": "International Conference on Intelligent Text Processing and Computational Linguistics", "volume": "", "issue": "", "pages": "152--165", "other_ids": { "DOI": [ "10.1007/978-3-319-18117-2_12" ] }, "num": null, "urls": [], "raw_text": "Diman Ghazi, Diana Inkpen, and Stan Szpakowicz. 2015. Detecting emotion stimuli in emotion-bearing sentences. In International Conference on Intelli- gent Text Processing and Computational Linguistics, pages 152-165. 
Springer.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "A question answering approach for emotion cause extraction", "authors": [ { "first": "Lin", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Jiannan", "middle": [], "last": "Hu", "suffix": "" }, { "first": "Yulan", "middle": [], "last": "He", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Jiachen", "middle": [], "last": "Du", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1593--1602", "other_ids": { "DOI": [ "10.18653/v1/D17-1167" ] }, "num": null, "urls": [], "raw_text": "Lin Gui, Jiannan Hu, Yulan He, Ruifeng Xu, Qin Lu, and Jiachen Du. 2017. A question answering ap- proach for emotion cause extraction. In Proceed- ings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1593-1602, Copenhagen, Denmark. Association for Computa- tional Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Event-driven emotion cause extraction with corpus construction", "authors": [ { "first": "Lin", "middle": [], "last": "Gui", "suffix": "" }, { "first": "Dongyin", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Ruifeng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Qin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1639--1649", "other_ids": { "DOI": [ "10.18653/v1/D16-1170" ] }, "num": null, "urls": [], "raw_text": "Lin Gui, Dongyin Wu, Ruifeng Xu, Qin Lu, and Yu Zhou. 2016. Event-driven emotion cause extrac- tion with corpus construction. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1639-1649, Austin, Texas. Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Deep residual learning for image recognition", "authors": [ { "first": "Kaiming", "middle": [], "last": "He", "suffix": "" }, { "first": "Xiangyu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Shaoqing", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Sun", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the IEEE conference on computer vision and pattern recognition", "volume": "", "issue": "", "pages": "770--778", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep transfer learning for image emotion analysis: Reducing marginal and joint distribution discrepancies together", "authors": [ { "first": "Yuwei", "middle": [], "last": "He", "suffix": "" }, { "first": "Guiguang", "middle": [], "last": "Ding", "suffix": "" } ], "year": 2019, "venue": "Neural Processing Letters", "volume": "3", "issue": "", "pages": "2077--2086", "other_ids": { "DOI": [ "10.1007/s11063-019-10035-7" ] }, "num": null, "urls": [], "raw_text": "Yuwei He and Guiguang Ding. 2019. 
Deep trans- fer learning for image emotion analysis: Reduc- ing marginal and joint distribution discrepancies to- gether. Neural Processing Letters, 3:2077-2086.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Appraisal theories for emotion classification in text", "authors": [ { "first": "Jan", "middle": [], "last": "Hofmann", "suffix": "" }, { "first": "Enrica", "middle": [], "last": "Troiano", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Sassenberg", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 28th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "125--138", "other_ids": { "DOI": [ "10.18653/v1/2020.coling-main.11" ] }, "num": null, "urls": [], "raw_text": "Jan Hofmann, Enrica Troiano, Kai Sassenberg, and Ro- man Klinger. 2020. Appraisal theories for emotion classification in text. In Proceedings of the 28th In- ternational Conference on Computational Linguis- tics, pages 125-138, Barcelona, Spain (Online). In- ternational Committee on Computational Linguis- tics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Who feels what and why? Annotation of a literature corpus with semantic roles of emotions", "authors": [ { "first": "Evgeny", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1345--1359", "other_ids": {}, "num": null, "urls": [], "raw_text": "Evgeny Kim and Roman Klinger. 2018. Who feels what and why? Annotation of a literature corpus with semantic roles of emotions. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1345-1359. Association for Com- putational Linguistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "IEST: WASSA-2018 implicit emotions shared task", "authors": [ { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" }, { "first": "Orph\u00e9e", "middle": [], "last": "De Clercq", "suffix": "" }, { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Balahur", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "31--42", "other_ids": { "DOI": [ "10.18653/v1/W18-6206" ] }, "num": null, "urls": [], "raw_text": "Roman Klinger, Orph\u00e9e De Clercq, Saif Mohammad, and Alexandra Balahur. 2018. IEST: WASSA-2018 implicit emotions shared task. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 31-42, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Integrating text and image: Determining multimodal document intent in Instagram posts", "authors": [ { "first": "Julia", "middle": [], "last": "Kruk", "suffix": "" }, { "first": "Jonah", "middle": [], "last": "Lubin", "suffix": "" }, { "first": "Karan", "middle": [], "last": "Sikka", "suffix": "" }, { "first": "Xiao", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "Ajay", "middle": [], "last": "Divakaran", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "4622--4632", "other_ids": { "DOI": [ "10.18653/v1/D19-1469" ] }, "num": null, "urls": [], "raw_text": "Julia Kruk, Jonah Lubin, Karan Sikka, Xiao Lin, Dan Jurafsky, and Ajay Divakaran. 2019. Integrating text and image: Determining multimodal document intent in Instagram posts. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4622-4632, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Deep facial expression recognition: A survey", "authors": [ { "first": "Shan", "middle": [], "last": "Li", "suffix": "" }, { "first": "Weihong", "middle": [], "last": "Deng", "suffix": "" } ], "year": 2020, "venue": "IEEE Transactions on Affective Computing", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1109/TAFFC.2020.2981446" ] }, "num": null, "urls": [], "raw_text": "Shan Li and Weihong Deng. 2020. Deep facial expres- sion recognition: A survey. IEEE Transactions on Affective Computing.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A model of textual affect sensing using real-world knowledge", "authors": [ { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Henry", "middle": [], "last": "Lieberman", "suffix": "" }, { "first": "Ted", "middle": [], "last": "Selker", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI '03", "volume": "", "issue": "", "pages": "125--132", "other_ids": { "DOI": [ "10.1145/604045.604067" ] }, "num": null, "urls": [], "raw_text": "Hugo Liu, Henry Lieberman, and Ted Selker. 2003. A model of textual affect sensing using real-world knowledge. In Proceedings of the 8th International Conference on Intelligent User Interfaces, IUI '03, page 125-132, New York, NY, USA. 
Association for Computing Machinery.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Roberta: A robustly optimized bert pretraining approach", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Jingfei", "middle": [], "last": "Du", "suffix": "" }, { "first": "Mandar", "middle": [], "last": "Joshi", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.11692" ] }, "num": null, "urls": [], "raw_text": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "On shape and the computability of emotions", "authors": [ { "first": "Xin", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Poonam", "middle": [], "last": "Suryanarayan", "suffix": "" }, { "first": "Reginald", "middle": [ "B" ], "last": "Adams", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michelle", "middle": [ "G" ], "last": "Newman", "suffix": "" }, { "first": "James Z", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2012, "venue": "ACM International Multimedia Conference", "volume": "2012", "issue": "", "pages": "229--238", "other_ids": { "DOI": [ "10.1145/2393347.2393384" ] }, "num": null, "urls": [], "raw_text": "Xin Lu, Poonam Suryanarayan, Reginald B Adams, Jia Li, Michelle G Newman, and James Z Wang. 2012. On shape and the computability of emotions. In ACM International Multimedia Conference, volume 2012, pages 229-238.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A taxonomy of relationships between images and text", "authors": [ { "first": "E", "middle": [], "last": "Emily", "suffix": "" }, { "first": "Marilyn", "middle": [ "Domas" ], "last": "Marsh", "suffix": "" }, { "first": "", "middle": [], "last": "White", "suffix": "" } ], "year": 2003, "venue": "Journal of Documentation", "volume": "59", "issue": "6", "pages": "647--672", "other_ids": { "DOI": [ "10.1108/00220410310506303" ] }, "num": null, "urls": [], "raw_text": "Emily E Marsh and Marilyn Domas White. 2003. A taxonomy of relationships between images and text. Journal of Documentation, 59(6):647-672.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A system for image-text relations in new (and old) media. Visual communication", "authors": [ { "first": "Radan", "middle": [], "last": "Martinec", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Salway", "suffix": "" } ], "year": 2005, "venue": "", "volume": "4", "issue": "", "pages": "337--371", "other_ids": { "DOI": [ "10.1177/1470357205055928" ] }, "num": null, "urls": [], "raw_text": "Radan Martinec and Andrew Salway. 2005. A system for image-text relations in new (and old) media. 
Vi- sual communication, 4(3):337-371.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A corpus-based approach to finding happiness", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2006, "venue": "AAAI Spring Symposium: Computational Approaches to Analyzing Weblogs", "volume": "", "issue": "", "pages": "139--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Hugo Liu. 2006. A corpus-based approach to finding happiness. In AAAI Spring Sym- posium: Computational Approaches to Analyzing Weblogs, pages 139-144.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "#Emotional Tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" } ], "year": 2012, "venue": "*SEM 2012: The First Joint Conference on Lexical and Computational Semantics", "volume": "1", "issue": "", "pages": "246--255", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saif Mohammad. 2012. #Emotional Tweets. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics -Volume 1: Proceedings of the main conference and the shared task, and Vol- ume 2: Proceedings of the Sixth International Work- shop on Semantic Evaluation (SemEval 2012), pages 246-255, Montr\u00e9al, Canada. Association for Com- putational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "SemEval-2018 task 1: Affect in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Felipe", "middle": [], "last": "Bravo-Marquez", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Salameh", "suffix": "" }, { "first": "Svetlana", "middle": [], "last": "Kiritchenko", "suffix": "" } ], "year": 2018, "venue": "Proceedings of The 12th International Workshop on Semantic Evaluation", "volume": "", "issue": "", "pages": "1--17", "other_ids": { "DOI": [ "10.18653/v1/S18-1001" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval- 2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Eval- uation, pages 1-17, New Orleans, Louisiana. Asso- ciation for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Semantic role labeling of emotions in tweets", "authors": [ { "first": "Saif", "middle": [], "last": "Mohammad", "suffix": "" }, { "first": "Xiaodan", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Martin", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "32--41", "other_ids": { "DOI": [ "10.3115/v1/W14-2607" ] }, "num": null, "urls": [], "raw_text": "Saif Mohammad, Xiaodan Zhu, and Joel Martin. 2014. Semantic role labeling of emotions in tweets. In Pro- ceedings of the 5th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 32-41, Baltimore, Maryland. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Recognition of Fine-Grained Emotions from Text: An Approach Based on the Compositionality Principle", "authors": [ { "first": "Alena", "middle": [], "last": "Neviarouskaya", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "179--207", "other_ids": { "DOI": [ "10.1007/978-3-642-12604-8_9" ] }, "num": null, "urls": [], "raw_text": "Alena Neviarouskaya, Helmut Prendinger, and Mitsuru Ishizuka. 2010. Recognition of Fine-Grained Emo- tions from Text: An Approach Based on the Composi- tionality Principle, pages 179-207. Springer Berlin Heidelberg, Berlin, Heidelberg.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Token sequence labeling vs. clause classification for English emotion stimulus detection", "authors": [ { "first": "Laura", "middle": [ "Ana" ], "last": "", "suffix": "" }, { "first": "Maria", "middle": [], "last": "Oberl\u00e4nder", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics", "volume": "", "issue": "", "pages": "58--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laura Ana Maria Oberl\u00e4nder and Roman Klinger. 2020. Token sequence labeling vs. clause classification for English emotion stimulus detection. In Proceed- ings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 58-70, Barcelona, Spain (Online). Association for Computational Lin- guistics.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Where do emotions come from? Predicting the emotion stimuli map", "authors": [ { "first": "", "middle": [], "last": "Kuan-Chuan", "suffix": "" }, { "first": "Amir", "middle": [], "last": "Peng", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Sadovnik", "suffix": "" }, { "first": "Tsuhan", "middle": [], "last": "Gallagher", "suffix": "" }, { "first": "", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2016, "venue": "2016 IEEE International Conference on Image Processing (ICIP)", "volume": "", "issue": "", "pages": "614--618", "other_ids": { "DOI": [ "10.1109/ICIP.2016.7532430" ] }, "num": null, "urls": [], "raw_text": "Kuan-Chuan Peng, Amir Sadovnik, Andrew Gallagher, and Tsuhan Chen. 2016. Where do emotions come from? Predicting the emotion stimuli map. In 2016 IEEE International Conference on Image Process- ing (ICIP), pages 614-618.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A general psychoevolutionary theory of emotion", "authors": [ { "first": "Robert", "middle": [], "last": "Plutchik", "suffix": "" } ], "year": 1980, "venue": "Theories of emotion", "volume": "", "issue": "", "pages": "3--33", "other_ids": { "DOI": [ "10.1016/B978-0-12-558701-3.50007-7" ] }, "num": null, "urls": [], "raw_text": "Robert Plutchik. 1980. A general psychoevolutionary theory of emotion. In Theories of emotion, pages 3-33. 
Elsevier.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "The nature of emotions: Human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice", "authors": [ { "first": "Robert", "middle": [], "last": "Plutchik", "suffix": "" } ], "year": 2001, "venue": "American scientist", "volume": "89", "issue": "4", "pages": "344--350", "other_ids": {}, "num": null, "urls": [], "raw_text": "Robert Plutchik. 2001. The nature of emotions: Hu- man emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. American scientist, 89(4):344- 350.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Experimenting with distant supervision for emotion classification", "authors": [ { "first": "Matthew", "middle": [], "last": "Purver", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Battersby", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "482--491", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Purver and Stuart Battersby. 2012. Experi- menting with distant supervision for emotion classi- fication. In Proceedings of the 13th Conference of the European Chapter of the Association for Com- putational Linguistics, pages 482-491, Avignon, France. Association for Computational Linguistics.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Ima-geNet Large Scale Visual Recognition Challenge", "authors": [ { "first": "Olga", "middle": [], "last": "Russakovsky", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Krause", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Satheesh", "suffix": "" }, { "first": "Sean", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zhiheng", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Andrej", "middle": [], "last": "Karpathy", "suffix": "" }, { "first": "Aditya", "middle": [], "last": "Khosla", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Bernstein", "suffix": "" }, { "first": "Alexander", "middle": [ "C" ], "last": "Berg", "suffix": "" }, { "first": "Li", "middle": [], "last": "Fei-Fei", "suffix": "" } ], "year": 2015, "venue": "International Journal of Computer Vision (IJCV)", "volume": "115", "issue": "3", "pages": "211--252", "other_ids": { "DOI": [ "10.1007/s11263-015-0816-y" ] }, "num": null, "urls": [], "raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, An- drej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. 2015. Ima- geNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211-252.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "A circumplex model of affect", "authors": [ { "first": "A", "middle": [], "last": "James", "suffix": "" }, { "first": "", "middle": [], "last": "Russell", "suffix": "" } ], "year": 1980, "venue": "Journal of personality and social psychology", "volume": "39", "issue": "6", "pages": "", "other_ids": { "DOI": [ "10.1037/h0077714" ] }, "num": null, "urls": [], "raw_text": "James A Russell. 1980. A circumplex model of af- fect. 
Journal of personality and social psychology, 39(6):1161.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "What are emotions? And how can they be measured?", "authors": [ { "first": "R", "middle": [], "last": "Klaus", "suffix": "" }, { "first": "", "middle": [], "last": "Scherer", "suffix": "" } ], "year": 2005, "venue": "Social Science Information", "volume": "44", "issue": "4", "pages": "695--729", "other_ids": { "DOI": [ "10.1177/0539018405058216" ] }, "num": null, "urls": [], "raw_text": "Klaus R. Scherer. 2005. What are emotions? And how can they be measured? Social Science Information, 44(4):695-729.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Appraisal processes in emotion: Theory, methods, research", "authors": [ { "first": "Klaus", "middle": [ "R" ], "last": "Scherer", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Schorr", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Johnstone", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klaus R. Scherer, Angela Schorr, and Tom Johnstone. 2001. Appraisal processes in emotion: Theory, methods, research. Oxford University Press.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus", "authors": [ { "first": "Hendrik", "middle": [], "last": "Schuff", "suffix": "" }, { "first": "Jeremy", "middle": [], "last": "Barnes", "suffix": "" }, { "first": "Julian", "middle": [], "last": "Mohme", "suffix": "" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "Roman", "middle": [], "last": "Klinger", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 8th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis", "volume": "", "issue": "", "pages": "13--23", "other_ids": { "DOI": [ "10.18653/v1/W17-5203" ] }, "num": null, "urls": [], "raw_text": "Hendrik Schuff, Jeremy Barnes, Julian Mohme, Sebas- tian Pad\u00f3, and Roman Klinger. 2017. Annotation, modelling and analysis of fine-grained emotions on a stance and sentiment detection corpus. In Pro- ceedings of the 8th Workshop on Computational Ap- proaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 13-23, Copenhagen, Denmark. Association for Computational Linguistics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "A linguistic interpretation of the occ emotion model for affect sensing from text", "authors": [ { "first": "Mostafa Al Masum", "middle": [], "last": "Shaikh", "suffix": "" }, { "first": "Helmut", "middle": [], "last": "Prendinger", "suffix": "" }, { "first": "Mitsuru", "middle": [], "last": "Ishizuka", "suffix": "" } ], "year": 2009, "venue": "Affective Information Processing", "volume": "", "issue": "", "pages": "45--73", "other_ids": { "DOI": [ "10.1007/978-1-84800-306-4_4" ] }, "num": null, "urls": [], "raw_text": "Mostafa Al Masum Shaikh, Helmut Prendinger, and Mitsuru Ishizuka. 2009. A linguistic interpretation of the occ emotion model for affect sensing from text. 
Affective Information Processing, pages 45-73.", "links": null }, "BIBREF45": { "ref_id": "b45", "title": "Early versus late fusion in semantic video analysis", "authors": [ { "first": "G", "middle": [ "M" ], "last": "Cees", "suffix": "" }, { "first": "Marcel", "middle": [], "last": "Snoek", "suffix": "" }, { "first": "Arnold Wm", "middle": [], "last": "Worring", "suffix": "" }, { "first": "", "middle": [], "last": "Smeulders", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 13th annual ACM international conference on Multimedia", "volume": "", "issue": "", "pages": "399--402", "other_ids": { "DOI": [ "10.1145/1101149.1101236" ] }, "num": null, "urls": [], "raw_text": "Cees GM Snoek, Marcel Worring, and Arnold WM Smeulders. 2005. Early versus late fusion in seman- tic video analysis. In Proceedings of the 13th annual ACM international conference on Multimedia, pages 399-402.", "links": null }, "BIBREF46": { "ref_id": "b46", "title": "Detecting concept-level emotion cause in microblogging", "authors": [ { "first": "Shuangyong", "middle": [], "last": "Song", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Meng", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion", "volume": "", "issue": "", "pages": "119--120", "other_ids": { "DOI": [ "10.1145/2740908.2742710" ] }, "num": null, "urls": [], "raw_text": "Shuangyong Song and Yao Meng. 2015. Detecting concept-level emotion cause in microblogging. In Proceedings of the 24th International Conference on World Wide Web, WWW '15 Companion, page 119-120, New York, NY, USA. Association for Computing Machinery.", "links": null }, "BIBREF47": { "ref_id": "b47", "title": "SemEval-2007 task 14: Affective text", "authors": [ { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007)", "volume": "", "issue": "", "pages": "70--74", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. SemEval- 2007 task 14: Affective text. In Proceedings of the Fourth International Workshop on Semantic Evalua- tions (SemEval-2007), pages 70-74, Prague, Czech Republic. Association for Computational Linguis- tics.", "links": null }, "BIBREF48": { "ref_id": "b48", "title": "Categorizing and inferring the relationship between the text and image of Twitter posts", "authors": [ { "first": "Alakananda", "middle": [], "last": "Vempala", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Preo\u0163iuc-Pietro", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "2830--2840", "other_ids": { "DOI": [ "10.18653/v1/P19-1272" ] }, "num": null, "urls": [], "raw_text": "Alakananda Vempala and Daniel Preo\u0163iuc-Pietro. 2019. Categorizing and inferring the relationship between the text and image of Twitter posts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2830-2840, Flo- rence, Italy. 
Association for Computational Linguis- tics.", "links": null }, "BIBREF49": { "ref_id": "b49", "title": "Harnessing Twitter \"Big Data\" for automatic emotion identification", "authors": [ { "first": "Wenbo", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Krishnaprasad", "middle": [], "last": "Thirunarayan", "suffix": "" }, { "first": "Amit", "middle": [ "P" ], "last": "Sheth", "suffix": "" } ], "year": 2012, "venue": "2012 International Conference on Privacy", "volume": "", "issue": "", "pages": "587--592", "other_ids": { "DOI": [ "10.1109/SocialCom-PASSAT.2012.119" ] }, "num": null, "urls": [], "raw_text": "Wenbo Wang, Lu Chen, Krishnaprasad Thirunarayan, and Amit P. Sheth. 2012. Harnessing Twitter \"Big Data\" for automatic emotion identification. In 2012 International Conference on Privacy, Security, Risk and Trust and 2012 International Confernece on So- cial Computing, pages 587-592. IEEE.", "links": null }, "BIBREF50": { "ref_id": "b50", "title": "Knowledge-rich image gist understanding beyond literal meaning", "authors": [ { "first": "Lydia", "middle": [], "last": "Weiland", "suffix": "" }, { "first": "Ioana", "middle": [], "last": "Hulpu\u015f", "suffix": "" }, { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Wolfgang", "middle": [], "last": "Effelsberg", "suffix": "" }, { "first": "Laura", "middle": [], "last": "Dietz", "suffix": "" } ], "year": 2018, "venue": "Data & Knowledge Engineering", "volume": "117", "issue": "", "pages": "114--132", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lydia Weiland, Ioana Hulpu\u015f, Simone Paolo Ponzetto, Wolfgang Effelsberg, and Laura Dietz. 2018. Knowledge-rich image gist understanding beyond literal meaning. Data & Knowledge Engineering, 117:114-132.", "links": null }, "BIBREF51": { "ref_id": "b51", "title": "Visual sentiment analysis by combining global and local information", "authors": [ { "first": "Lifang", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Mingchao", "middle": [], "last": "Qi", "suffix": "" }, { "first": "Meng", "middle": [], "last": "Jian", "suffix": "" }, { "first": "Heng", "middle": [], "last": "Zhang", "suffix": "" } ], "year": 2020, "venue": "Neural Process. Lett", "volume": "51", "issue": "3", "pages": "2063--2075", "other_ids": { "DOI": [ "10.1007/s11063-019-10027-7" ] }, "num": null, "urls": [], "raw_text": "Lifang Wu, Mingchao Qi, Meng Jian, and Heng Zhang. 2020. Visual sentiment analysis by combining global and local information. Neural Process. Lett., 51(3):2063-2075.", "links": null }, "BIBREF52": { "ref_id": "b52", "title": "Emotion-cause pair extraction: A new task to emotion analysis in texts", "authors": [ { "first": "Rui", "middle": [], "last": "Xia", "suffix": "" }, { "first": "Zixiang", "middle": [], "last": "Ding", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1003--1012", "other_ids": { "DOI": [ "10.18653/v1/P19-1096" ] }, "num": null, "urls": [], "raw_text": "Rui Xia and Zixiang Ding. 2019. Emotion-cause pair extraction: A new task to emotion analysis in texts. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1003-1012, Florence, Italy. 
Association for Compu- tational Linguistics.", "links": null }, "BIBREF53": { "ref_id": "b53", "title": "Weakly supervised coupled networks for visual sentiment analysis", "authors": [ { "first": "Jufeng", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Dongyu", "middle": [], "last": "She", "suffix": "" }, { "first": "Yu-Kun", "middle": [], "last": "Lai", "suffix": "" }, { "first": "Paul", "middle": [ "L" ], "last": "Rosin", "suffix": "" }, { "first": "Ming-Hsuan", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2018, "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition", "volume": "", "issue": "", "pages": "7584--7592", "other_ids": { "DOI": [ "10.1109/CVPR.2018.00791" ] }, "num": null, "urls": [], "raw_text": "Jufeng Yang, Dongyu She, Yu-Kun Lai, Paul L. Rosin, and Ming-Hsuan Yang. 2018. Weakly supervised coupled networks for visual sentiment analysis. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7584-7592.", "links": null }, "BIBREF54": { "ref_id": "b54", "title": "Equal but not the same: Understanding the implicit relationship between persuasive images and text", "authors": [ { "first": "Mingda", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Hwa", "suffix": "" }, { "first": "Adriana", "middle": [], "last": "Kovashka", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the British Machine Vision Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingda Zhang, Rebecca Hwa, and Adriana Kovashka. 2018. Equal but not the same: Understanding the implicit relationship between persuasive images and text. In Proceedings of the British Machine Vision Conference.", "links": null }, "BIBREF55": { "ref_id": "b55", "title": "Affective image content analysis: a comprehensive survey", "authors": [ { "first": "Sicheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Guiguang", "middle": [], "last": "Ding", "suffix": "" }, { "first": "Qingming", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Tat-Seng", "middle": [], "last": "Chua", "suffix": "" }, { "first": "W", "middle": [], "last": "Bj\u00f6rn", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Schuller", "suffix": "" }, { "first": "", "middle": [], "last": "Keutzer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence Survey track", "volume": "", "issue": "", "pages": "5534--5541", "other_ids": { "DOI": [ "10.24963/ijcai.2018/780" ] }, "num": null, "urls": [], "raw_text": "Sicheng Zhao, Guiguang Ding, Qingming Huang, Tat- Seng Chua, Bj\u00f6rn W Schuller, and Kurt Keutzer. 2018. Affective image content analysis: a com- prehensive survey. 
In Proceedings of the Twenty- Seventh International Joint Conference on Artificial Intelligence Survey track, pages 5534-5541.", "links": null }, "BIBREF56": { "ref_id": "b56", "title": "When saliency meets sentiment: Understanding how image content invokes emotion and sentiment", "authors": [ { "first": "Honglin", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Tianlang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Quanzeng", "middle": [], "last": "You", "suffix": "" }, { "first": "Jiebo", "middle": [], "last": "Luo", "suffix": "" } ], "year": 2017, "venue": "2017 IEEE International Conference on Image Processing (ICIP)", "volume": "", "issue": "", "pages": "630--634", "other_ids": { "DOI": [ "10.1109/ICIP.2017.8296357" ] }, "num": null, "urls": [], "raw_text": "Honglin Zheng, Tianlang Chen, Quanzeng You, and Jiebo Luo. 2017. When saliency meets sentiment: Understanding how image content invokes emotion and sentiment. In 2017 IEEE International Confer- ence on Image Processing (ICIP), pages 630-634.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "Example of posts from Reddit (annotation are emotion/relation/stimulus category).", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "Example of image-text relationships in posts.", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "Emotion-Stimulus Cooccurrences.", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "Annotation Environment on Amazon Mechanical Turk", "uris": null, "type_str": "figure" }, "TABREF2": { "content": "
Emotion-stimulus co-occurrence matrix. Rows (emotions): Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise, Trust. Columns (stimulus categories): Ad, Animal, Art, Event, Food, Meme, Object, Person, Place, Screen. Panels: (a) Counts, (b) Odds Ratio.
", "type_str": "table", "html": null, "text": "Ad Animal Art Event Food Meme Object Person Place Screen", "num": null }, "TABREF4": { "content": "
Experimental results in predicting emotions, relations, and stimuli using unimodal and multimodal models. The results are presented as weighted F1 scores. Boldface indicates the highest value in each column/task.
", "type_str": "table", "html": null, "text": "", "num": null }, "TABREF6": { "content": "", "type_str": "table", "html": null, "text": "Experimental results for all labels in predicting emotions, relations, and stimuli using the text-based and image-based unimodal models, and fusion models. The results are presented in F1 score.", "num": null }, "TABREF7": { "content": "
", "type_str": "table", "html": null, "text": "Qualifications used on AMT for data annotation.", "num": null } } } }