{ "paper_id": "W10-0201", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:07:41.264381Z" }, "title": "Emotion Analysis Using Latent Affective Folding and Embedding", "authors": [ { "first": "Jerome", "middle": [ "R" ], "last": "Bellegarda", "suffix": "", "affiliation": { "laboratory": "", "institution": "Speech & Language Technologies Apple Inc. Cupertino", "location": { "postCode": "95014", "region": "California", "country": "USA" } }, "email": "jerome@apple.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Though data-driven in nature, emotion analysis based on latent semantic analysis still relies on some measure of expert knowledge in order to isolate the emotional keywords or keysets necessary to the construction of affective categories. This makes it vulnerable to any discrepancy between the ensuing taxonomy of affective states and the underlying domain of discourse. This paper proposes a more general strategy which leverages two distinct semantic levels, one that encapsulates the foundations of the domain considered, and one that specifically accounts for the overall affective fabric of the language. Exposing the emergent relationship between these two levels advantageously informs the emotion classification process. Empirical evidence suggests that this is a promising solution for automatic emotion detection in text.", "pdf_parse": { "paper_id": "W10-0201", "_pdf_hash": "", "abstract": [ { "text": "Though data-driven in nature, emotion analysis based on latent semantic analysis still relies on some measure of expert knowledge in order to isolate the emotional keywords or keysets necessary to the construction of affective categories. This makes it vulnerable to any discrepancy between the ensuing taxonomy of affective states and the underlying domain of discourse. 
This paper proposes a more general strategy which leverages two distinct semantic levels, one that encapsulates the foundations of the domain considered, and one that specifically accounts for the overall affective fabric of the language. Exposing the emergent relationship between these two levels advantageously informs the emotion classification process. Empirical evidence suggests that this is a promising solution for automatic emotion detection in text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The automatic detection of emotions in text is a necessary pre-processing step in many different fields touching on affective computing (Picard, 1997) , such as natural language interfaces (Cosatto et al., 2003) , e-learning environments (Ryan et al., 2000) , educational or entertainment games (Pivec and Kearney, 2007) , opinion mining and sentiment analysis (Pang and Lee, 2008) , humor recognition (Mihalcea and Strapparava, 2006) , and security informatics (Abbasi, 2007) . In the latter case, for example, it can be used for monitoring levels of hateful or violent rhetoric (perhaps in multilingual settings). More generally, emotion detection is of great interest in human-computer interaction: if a system determines that a user is upset or annoyed, for instance, it could switch to a different mode of interaction (Liscombe et al., 2005) . 
And of course, it plays a critical role in the generation of expressive synthetic speech (Schr\u00f6der, 2006) .", "cite_spans": [ { "start": 136, "end": 150, "text": "(Picard, 1997)", "ref_id": "BIBREF17" }, { "start": 189, "end": 211, "text": "(Cosatto et al., 2003)", "ref_id": "BIBREF5" }, { "start": 238, "end": 257, "text": "(Ryan et al., 2000)", "ref_id": "BIBREF20" }, { "start": 295, "end": 320, "text": "(Pivec and Kearney, 2007)", "ref_id": "BIBREF18" }, { "start": 361, "end": 381, "text": "(Pang and Lee, 2008)", "ref_id": "BIBREF16" }, { "start": 402, "end": 434, "text": "(Mihalcea and Strapparava, 2006)", "ref_id": "BIBREF14" }, { "start": 462, "end": 476, "text": "(Abbasi, 2007)", "ref_id": "BIBREF0" }, { "start": 823, "end": 846, "text": "(Liscombe et al., 2005)", "ref_id": "BIBREF11" }, { "start": 938, "end": 954, "text": "(Schr\u00f6der, 2006)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Emphasis has traditionally been placed on the set of six \"universal\" emotions (Ekman, 1993) : ANGER, DISGUST, FEAR, JOY, SADNESS, and SURPRISE (Alm et al., 2005; Liu et al., 2003; Subasic and Huettner, 2001 ). Emotion analysis is typically carried out using a simplified description of emotional states in a low-dimensional space, which normally comprises dimensions such as valence (positive/negative evaluation), activation (stimulation of activity), and/or control (dominant/submissive power) (Mehrabian, 1995; Russell, 1980; Strapparava and Mihalcea, 2008) . Classification proceeds based on an underlying emotional knowledge base, which strives to provide adequate distinctions between different emotions. This affective information can either be built entirely upon manually selected vocabulary as in (Whissell, 1989) , or derived automatically from data based on expert knowledge of the most relevant features that can be extracted from the input text (Alm et al., 2005) . 
In both cases, the resulting system tends to rely, for the most part, on a few thousand annotated \"emotional keywords,\" the presence of which triggers the associated emotional label(s).", "cite_spans": [ { "start": 78, "end": 91, "text": "(Ekman, 1993)", "ref_id": "BIBREF7" }, { "start": 143, "end": 161, "text": "(Alm et al., 2005;", "ref_id": "BIBREF1" }, { "start": 162, "end": 179, "text": "Liu et al., 2003;", "ref_id": "BIBREF12" }, { "start": 180, "end": 206, "text": "Subasic and Huettner, 2001", "ref_id": "BIBREF26" }, { "start": 495, "end": 512, "text": "(Mehrabian, 1995;", "ref_id": "BIBREF13" }, { "start": 513, "end": 527, "text": "Russell, 1980;", "ref_id": "BIBREF19" }, { "start": 528, "end": 559, "text": "Strapparava and Mihalcea, 2008)", "ref_id": "BIBREF24" }, { "start": 806, "end": 822, "text": "(Whissell, 1989)", "ref_id": "BIBREF27" }, { "start": 958, "end": 976, "text": "(Alm et al., 2005)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The drawback of such confined lexical affinity is that the analysis tends to be hampered by the bias inherent in the underlying taxonomy of emotional states. Because this taxonomy only supports simplified relationships between affective words and emotional categories, it often fails to meaningfully generalize beyond the relatively few core terms explicitly considered in its construction. This has sparked interest in data-driven approaches based on latent semantic analysis (LSA), a paradigm originally developed for information retrieval (Deerwester et al., 1990) . Upon suitable training using a large corpus of texts, LSA allows a similarity score to be computed between generic terms and affective categories . This way, every word can automatically be assigned some fractional affective influence. 
Still, the affective categories themselves are usually specified with the help of a reference lexical database like WordNet (Fellbaum, 1998) .", "cite_spans": [ { "start": 543, "end": 568, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF6" }, { "start": 931, "end": 947, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The purpose of this paper is to more broadly leverage the principle of latent semantics in emotion analysis. We cast the problem as a general application of latent semantic mapping (LSM), an extrapolation of LSA for modeling global relationships implicit in large volumes of data (Bellegarda, 2005; Bellegarda, 2008) . More specifically, we use the LSM framework to describe two distinct semantic levels: one that encapsulates the foundations of the domain considered (e.g., broadcast news, email messages, SMS conversations, etc.), and one that specifically accounts for the overall affective fabric of the language. Then, we leverage these two descriptions to appropriately relate domain and affective levels, and thereby inform the emotion classification process. This de facto bypasses the need for any explicit external knowledge.", "cite_spans": [ { "start": 280, "end": 298, "text": "(Bellegarda, 2005;", "ref_id": "BIBREF2" }, { "start": 299, "end": 316, "text": "Bellegarda, 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The paper is organized as follows. The next section provides some motivation for, and gives an overview of, the proposed latent affective framework. In Sections 3 and 4, we describe the two main alternatives considered, latent folding and latent embedding. In Section 5, we discuss the mechanics of emotion detection based on such latent affective processing. 
Finally, Section 6 reports the outcome of experimental evaluations conducted on the \"Affective Text\" portion of the SemEval-2007 corpus (Strapparava and Mihalcea, 2007) .", "cite_spans": [ { "start": 496, "end": 528, "text": "(Strapparava and Mihalcea, 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "As alluded to above, lexical affinity alone fails to provide sufficient distinction between different emotions, in large part because only relatively few words have inherently clear, unambiguous emotional meaning. For example, happy and sad encapsulate JOY and SADNESS, respectively, in all conceivable scenarios. But is thrilling a marker of JOY or SURPRISE? Does awful capture SADNESS or DIS-GUST? It largely depends on contextual information: thrilling as a synonym for uplifting conveys JOY (as in a thrilling speech), while thrilling as a synonym for amazing may well mark SURPRISE (as in a thrilling waterfall ride); similarly, awful as a synonym for grave reflects SADNESS (as in an awful car accident), while awful as a synonym for foul is closer to DISGUST (as in an awful smell). The vast majority of words likewise carry multiple potential emotional connotations, with the degree of affective polysemy tightly linked to the granularity selected for the underlying taxonomy of emotions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "Data-driven approaches based on LSA purport to \"individuate\" such indirect affective words via inference mechanisms automatically derived in an unsupervised way from a large corpus of texts, such as the British National Corpus . By looking at document-level cooccurrences, contextual information is exploited to encapsulate semantic information into a relatively low dimensional vector space. 
Suitable affective categories are then constructed in that space by \"folding in\" either the specific word denoting the emotion, or its associated synset (say, from WordNet), or even the entire set of words in all synsets that can be labelled with that emotion (Strapparava and Mihalcea, 2008) . This is typically done by placing the relevant word(s) into a \"pseudo-document,\" and mapping it into the space as if it were a real one (Deerwester et al., 1990) . Finally, the global emotional affinity of a given input text is determined by computing similarities between all pseudo-documents. The resulting framework is depicted in Fig. 1 . 2", "cite_spans": [ { "start": 653, "end": 685, "text": "(Strapparava and Mihalcea, 2008)", "ref_id": "BIBREF24" }, { "start": 820, "end": 845, "text": "(Deerwester et al., 1990)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 1018, "end": 1024, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "This solution is attractive, if for no other reason than it allows every word to automatically be assigned some fractional affective influence. However, it suffers from two limitations which may well prove deleterious in practical situations. First, the inherent lack of supervision routinely leads to a latent semantic space which is not particularly representative of the underlying domain of discourse. And second, the construction of the affective categories still relies heavily on pre-defined lexical affinity, potentially resulting in an unwarranted bias in the taxonomy of affective states.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "The first limitation impinges on the effectiveness of any LSA-based approach, which is known to vary substantially based on the size and quality of the training data (Bellegarda, 2008; Mohler and Mihalcea, 2009) . 
In the present case, any discrepancy between latent semantic space and domain of discourse may distort the position of certain words in the space, which could in turn lead to subsequent sub-optimal affective weight assignment. For instance, in the examples above, the word smell is considerably more critical to the resolution of awful as a marker of DISGUST than the word car. But that fact may never be uncovered if the only pertinent documents in the training corpus happen to be about expensive fragrances and automobiles. Thus, it is highly desirable to derive the latent semantic space using data representative of the application considered. This points to a modicum of supervision.", "cite_spans": [ { "start": 166, "end": 184, "text": "(Bellegarda, 2008;", "ref_id": "BIBREF3" }, { "start": 185, "end": 211, "text": "Mohler and Mihalcea, 2009)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "The second limitation is tied to the difficulty of coming up with an a priori affective description that will work universally. Stipulating the affective categories using only the specific word denoting the emotion is likely to be less robust than using the set of words in all synsets labelled with that emotion. On the other hand, the latter may well expose some inherent ambiguities resulting from affective polysemy. This is compounded by the relatively small number of words for which an affective distribution is even available. For example, the well-known General Inquirer content analysis system (Stone, 1997) lists only about 2000 words with positive outlook and 2000 words with negative outlook. There are exactly 1281 words inventoried in the affective extension of WordNet (Strapparava and Mihalcea, 2008) , and the affective word list from (Johnson-Laird and Oatley, 1989) comprises less than 1000 words. 
This considerably complicates the construction of reliable affective categories in the latent space.", "cite_spans": [ { "start": 604, "end": 617, "text": "(Stone, 1997)", "ref_id": "BIBREF22" }, { "start": 785, "end": 817, "text": "(Strapparava and Mihalcea, 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "To address the two limitations above, we propose to more broadly leverage the LSM paradigm (Bellegarda, 2005; Bellegarda, 2008) , following the overall framework depicted in Fig. 2 . Compared to Fig. 1 , we inject some supervision at two separate levels: not only regarding the particular domain considered, but also how the affective categories themselves are defined. The first task is to exploit a suitable training collection to encapsulate into a (domain) latent semantic space the general foundations of the domain at hand. Next, we leverage a separate affective corpus, such as mood-annotated blog entries from LiveJournal.com (Strapparava and Mihalcea, 2008) , to serve as a descriptive blueprint for the construction of affective categories.", "cite_spans": [ { "start": 91, "end": 109, "text": "(Bellegarda, 2005;", "ref_id": "BIBREF2" }, { "start": 110, "end": 127, "text": "Bellegarda, 2008)", "ref_id": "BIBREF3" }, { "start": 634, "end": 666, "text": "(Strapparava and Mihalcea, 2008)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 174, "end": 180, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 195, "end": 201, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "This blueprint is then folded into the domain space in one of two ways. The easiest approach, called latent affective folding, is simply to superimpose affective anchors inferred in the space for every affective category. This is largely analogous to what happens in Fig. 
1 , with a crucial difference regarding the representation of affective categories: in latent affective folding, it is derived from a corpus of texts as opposed to a pre-specified keyword or keyset. This is likely to help make the categories more robust, but may not satisfactorily resolve subtle distinctions between emotional connotations. This technique is described in detail in the next section.", "cite_spans": [], "ref_spans": [ { "start": 267, "end": 273, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "The second approach, called latent affective embedding, is to extract a distinct LSM representation from the affective corpus, to encapsulate all prior affective information into a separate (affective) latent semantic space. In this space, affective anchors can be computed directly, instead of inferred after folding, presumably leading to a more accurate positioning. Domain and affective LSM spaces can then be related to each other via a mapping derived from words that are common to both. This way, the affective anchors can be precisely embedded into the domain space. This technique is described in detail in Section 4.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "In both cases, the input text is mapped into the domain space as before. Emotion classification then follows from assessing how closely it aligns with each affective anchor.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation and Overview", "sec_num": "2" }, { "text": "Expanding the basic framework of Fig. 2 to take into account the two separate phases of training and analysis, latent affective folding proceeds as illustrated in Fig. 3 .", "cite_spans": [], "ref_spans": [ { "start": 33, "end": 39, "text": "Fig. 2", "ref_id": "FIGREF1" }, { "start": 163, "end": 169, "text": "Fig. 
3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "Let T 1 , |T 1 | = N 1 , be a collection of training texts (be they sentences, paragraphs, or documents) reflecting the domain of interest, and V 1 , |V 1 | = M 1 , the associated set of all words (possibly augmented with some strategic word pairs, triplets, etc., as appropriate) observed in this collection. Generally, M 1 is on the order of several tens of thousands, while N 1 may be as high as a million.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "We first construct a (M 1 \u00d7 N 1 ) matrix W 1 , whose elements w ij suitably reflect the extent to which each word w i \u2208 V 1 appeared in each text t j \u2208 T 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "From (Bellegarda, 2008) , a reasonable expression for w ij is:", "cite_spans": [ { "start": 5, "end": 23, "text": "(Bellegarda, 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w i,j = (1 \u2212 \u03b5 i ) c i,j n j ,", "eq_num": "(1)" } ], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "where c i,j is the number of times w i occurs in text t j , n j is the total number of words present in this text, and \u03b5 i is the normalized entropy of w i in V 1 . The global weighting implied by 1 \u2212 \u03b5 i reflects the fact that two words appearing with the same count in a particular text do not necessarily convey the same amount of information; this is subordinated to the distribution of words in the entire set V 1 . 
We then perform a singular value decomposition (SVD) of W 1 as (Bellegarda, 2008) :", "cite_spans": [ { "start": 484, "end": 502, "text": "(Bellegarda, 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W 1 = U 1 S 1 V T 1 ,", "eq_num": "(2)" } ], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "U 1 is the (M 1 \u00d7 R 1 ) left singular matrix with row vectors u 1,i (1 \u2264 i \u2264 M 1 ), S 1 is the (R 1 \u00d7 R 1 ) diagonal matrix of singular values s 1,1 \u2265 s 1,2 \u2265 . . . \u2265 s 1,R 1 > 0, V 1 is the (N 1 \u00d7 R 1 ) right singular matrix with row vectors v 1,j (1 \u2264 j \u2264 N 1 ), R 1 \u226a M 1 , N 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "is the order of the decomposition, and T denotes matrix transposition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "As is well known, both left and right singular matrices U 1 and V 1 are column-orthonormal, i.e., U T", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "1 U 1 = V T 1 V 1 = I R 1 (the identity matrix of order R 1 ). Thus, the column vectors of U 1 and V 1 each define an orthonormal basis for the space of dimension R 1 spanned by the u 1,i 's and v 1,j 's. We refer to this space as the latent semantic space L 1 . 
The (rank-R 1 ) decomposition (2) encapsulates a mapping between the set of words w i and texts t j and (after appropriate scaling by the singular values) the set of", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "R 1 -dimensional vectors y 1,i = u 1,i S 1 and z 1,j = v 1,j S 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "The basic idea behind (2) is that the rank-R 1 decomposition captures the major structural associations in W 1 and ignores higher order effects. Hence, the relative positions of the input words in the space L 1 reflect a parsimonious encoding of the semantic concepts used in the domain considered. This means that any new text mapped onto a vector \"close\" (in some suitable metric) to a particular set of words can be expected to be closely related to the concept encapsulated by this set. If each of these words is then scored in terms of their affective affinity, this offers a way to automatically predict the overall emotional affinity of the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "In order to do so, we need to isolate regions in that space which are representative of the underlying taxonomy of emotions considered. The centroid of each such region is the affective anchor associated with that basic emotion. 
Affective anchors are superimposed onto the space L 1 on the basis of the affective corpus available.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "Let T 2 , |T 2 | = N 2 , represent a separate collection of mood-annotated texts (again they could be sentences, paragraphs, or documents), representative of the desired categories of emotions (such as JOY and SADNESS), and V 2 , |V 2 | = M 2 , the associated set of words or expressions observed in this collection. As such affective data may be more difficult to gather than regular texts (especially in annotated form), in practice N 2 < N 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "Further let V 12 , |V 12 | = M 12 , represent the intersection between V 1 and V 2 . We will denote the representations of these words in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "L 1 by \u03bb 1,k (1 \u2264 k \u2264 M 12 ).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "Clearly, it is possible to form, for each 1 \u2264 \u2264 L, where L is the number of distinct emotions considered, each subset V ( ) 12 of all entries from V 12 which is aligned with a particular emotion. 
1 We can then compute:\u1e91", "cite_spans": [ { "start": 120, "end": 123, "text": "( )", "ref_id": null }, { "start": 196, "end": 197, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "1, = 1 |V ( ) 12 | V ( ) 12 \u03bb 1,k ,", "eq_num": "(3)" } ], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "as the affective anchor of emotion (1 \u2264 \u2264 L) in the domain space. The notation\u1e91 1, is chosen to underscore the connection with z 1,j : in essence,\u1e91 1, represents the (fictitious) text in the domain space that would be perfectly aligned with emotion , had it been seen in the training collection T 1 . Comparing the representation of an input text to each of these anchors therefore leads to a quantitative assessment for the overall emotional affinity of the text. A potential drawback of this approach is that (3) is patently sensitive to the distribution of words within T 2 , which may be quite different from the distribution of words within T 1 . In such a case, \"folding in\" the affective anchors as described above may well introduce a bias in the position of the anchors in the domain space. This could in turn lead to an inability to satisfactorily resolve subtle distinctions between emotional connotations. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Folding", "sec_num": "3" }, { "text": "To remedy this situation, a natural solution is to build a separate LSM space from the affective training data. Referring back to the basic framework of Fig. 2 and taking into account the two separate phases of training and analysis as in Fig. 3 , latent affective embedding proceeds as illustrated in Fig. 4 .", "cite_spans": [], "ref_spans": [ { "start": 153, "end": 159, "text": "Fig. 
2", "ref_id": "FIGREF1" }, { "start": 239, "end": 245, "text": "Fig. 3", "ref_id": "FIGREF2" }, { "start": 302, "end": 308, "text": "Fig. 4", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "The first task is to group all N 2 documents present in T 2 into L bins, one for each of the emotions considered. Then we can construct a (M 2 \u00d7 L) matrix W 2 , whose elements w k, suitably reflect the extent to which each word or expression w k \u2208 V 2 appeared in each affective category c , 1 \u2264 \u2264 L. This leads to:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "w k, = (1 \u2212 \u03b5 k ) c k, n ,", "eq_num": "(4)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "with c k, , n , and \u03b5 k following definitions analogous to (1), albeit with domain texts replaced by affective categories.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "We then perform the SVD of W 2 in a similar vein as (2):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "W 2 = U 2 S 2 V T 2 ,", "eq_num": "(5)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "where all definitions are analogous. As before, both left and right singular matrices U 2 and V 2 are column-orthonormal, and their column vectors each define an orthonormal basis for the space of dimension R 2 spanned by the u 2,k 's and v 2, 's. We refer to this space as the latent affective space L 2 . 
The (rank-R 2 ) decomposition (5) encapsulates a mapping between the set of words w k and categories c and (after appropriate scaling by the singular values) the set of R 2 -dimensional vectors y 2,k = u 2,k S 2 and z 2, = v 2, S 2 . Thus, each vector z 2, can be viewed as the centroid of an emotion in L 2 , or, said another way, an affective anchor in the affective space. Since their relative positions reflect a parsimonious encoding of the affective annotations observed in the emotion corpus, these affective anchors now properly take into account any accidental skew in the distribution of words which contribute to them. All that remains to do is map them back to the domain space. This is done on the basis of words that are common to both the affective space and the domain space, i.e., the words in V 12 . Since these words were denoted by \u03bb 1,k in L 1 , we similarly denote them by", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "\u03bb 2,k (1 \u2264 k \u2264 M 12 ) in L 2 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "Now let \u03bc 1 , \u03bc 2 and \u03a3 1 , \u03a3 2 denote the mean vector and covariance matrix for all observations \u03bb 1,k and \u03bb 2,k in the two spaces, respectively. 
We first transform each feature vector as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb 1,k = \u03a3 \u22121/2 1 (\u03bb 1,k \u2212 \u03bc 1 ) ,", "eq_num": "(6)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03bb 2,k = \u03a3 \u22121/2 2 (\u03bb 2,k \u2212 \u03bc 2 ) ,", "eq_num": "(7)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "so that the resulting sets {\u03bb 1,k } and {\u03bb 2,k } each have zero mean and identity covariance matrix. For this purpose, the inverse square root of each covariance matrix can be obtained as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a3 \u22121/2 = Q\u0394 \u22121/2 Q T ,", "eq_num": "(8)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "where Q is the eigenvector matrix of the covariance matrix \u03a3, and \u0394 is the diagonal matrix of corresponding eigenvalues. This applies to both domain and affective data. We next relate each vector\u03bb 2,k in the affective space to the corresponding vector\u03bb 1,k in the domain space. For a relative measure of how the two spaces are correlated with each other, as accumulated on a common word basis, we first project\u03bb 1,k into the unit sphere of same dimension as\u03bb 2,k , i.e., R 2 = min(R 1 , R 2 ). 
We then compute the (normalized) cross-covariance matrix between the two unit sphere representations, specified as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K 12 = M 12 k=1 P\u03bb 1,k P T\u03bb T 2,k ,", "eq_num": "(9)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "where P is the R 1 to R 2 projection matrix. Note that K 12 is typically full rank as long as M 12 > R 2 2 . Performing the SVD of K 12 yields the expression:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "K 12 = \u03a6 \u03a9 \u03a8 T ,", "eq_num": "(10)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "where as before \u03a9 is the diagonal matrix of singular values, and \u03a6 and \u03a8 are both unitary in the unit sphere of dimension R 2 . This in turn leads to the definition:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u0393 = \u03a6\u03a8 T ,", "eq_num": "(11)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "which can be shown (cf. (Bellegarda et al., 1994)) to represent the least squares rotation that must be applied (in that unit sphere) to\u03bb 2,k to obtain an estimate of P\u03bb 1,k P T . Now what is needed is to apply this transformation to the centroids z 2, (1 \u2264 \u2264 L) of the affective categories in the affective space, so as to map them to the domain space. 
We first project each vector into the unit sphere, resulting in:", "cite_spans": [ { "start": 24, "end": 50, "text": "(Bellegarda et al., 1994))", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z 2, = \u03a3 \u22121/2 2 (z 2, \u2212 \u03bc 2 ) ,", "eq_num": "(12)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "as prescribed in (7). We then synthesize fromz 2, a unit sphere vector corresponding to the estimate in the projected domain space. From the foregoing, this estimate is given by:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z 1, = \u0393z 2, .", "eq_num": "(13)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "Finally, we restore the resulting contribution at the appropriate place in the domain space, by reversing the transformation (6):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z 1, = \u03a3 1/2 1\u1e91 1, + \u03bc 1 .", "eq_num": "(14)" } ], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "Combining the three steps (12)-(14) together, the overall mapping can be written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Latent Affective Embedding", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "z 1, = (\u03a3 1/2 1 
\Gamma \Sigma_2^{-1/2}) z_{2,\ell} + (\mu_1 - \Sigma_1^{1/2} \Gamma \Sigma_2^{-1/2} \mu_2).
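The whitening, projection, rotation, and un-whitening steps (6)-(15) can be sketched in numpy as follows. This is a minimal illustration, not the paper's implementation: shared-word vectors are stacked as rows, plain coordinate truncation stands in for the projection P, and zero-padding is used to return from dimension R_2 to R_1 (both assumptions, since the paper does not fully specify these operators).

```python
import numpy as np

def whitening_transform(X):
    """Mean, inverse square root, and square root of the covariance of the
    rows of X, via eigendecomposition as in (8)."""
    mu = X.mean(axis=0)
    evals, Q = np.linalg.eigh(np.cov(X, rowvar=False))
    return mu, Q @ np.diag(evals ** -0.5) @ Q.T, Q @ np.diag(evals ** 0.5) @ Q.T

def embed_anchors(L1_common, L2_common, anchors_2):
    """Map affective anchors from the affective space into the domain space.

    L1_common: (M12, R1) domain-space vectors of the shared words
    L2_common: (M12, R2) affective-space vectors of the same words
    anchors_2: (L, R2) affective anchors z_{2,l}
    """
    mu1, S1_inv, S1_sqrt = whitening_transform(L1_common)
    mu2, S2_inv, _ = whitening_transform(L2_common)
    # Whiten both sets of shared-word vectors, as in (6)-(7)
    X1 = (L1_common - mu1) @ S1_inv
    X2 = (L2_common - mu2) @ S2_inv
    # Project domain vectors down to R2 = min(R1, R2); truncation is an
    # illustrative stand-in for the projection matrix P
    R2 = min(X1.shape[1], X2.shape[1])
    X1p = X1[:, :R2]
    # Cross-covariance (9) and its SVD (10) give the least-squares rotation (11)
    K12 = X1p.T @ X2[:, :R2]
    Phi, _, PsiT = np.linalg.svd(K12)
    Gamma = Phi @ PsiT
    # Whiten each anchor in L2 (12), rotate (13), then restore mean and
    # covariance in the domain space (14); zero-padding recovers dimension R1
    Z2 = (anchors_2 - mu2) @ S2_inv
    Z1 = np.zeros((anchors_2.shape[0], X1.shape[1]))
    Z1[:, :R2] = Z2[:, :R2] @ Gamma.T
    return Z1 @ S1_sqrt + mu1
```

The rotation Γ = ΦΨᵀ is the classical orthogonal Procrustes solution, which is why it minimizes the least-squares misalignment between the two whitened representations.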
Assuming the matrices U 1 and S 1 do not change appreciably, the SVD expansion (2) therefore implies:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Classification", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "t p = U 1 S 1 v T 1,p ,", "eq_num": "(16)" } ], "section": "Emotion Classification", "sec_num": "5" }, { "text": "where the R 1 -dimensional vector v T 1,p acts as an additional column of the matrix V T 1 . Thus, the represention of the new text in the domain space can be obtained from z 1,p = v 1,p S 1 .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Classification", "sec_num": "5" }, { "text": "All is needed now is a suitable closeness measure to compare this representation to each affective anchor\u1e91 1, (1 \u2264 \u2264 L). From (Bellegarda, 2008) , a natural metric to consider is the cosine of the angle between them. This yields:", "cite_spans": [ { "start": 126, "end": 144, "text": "(Bellegarda, 2008)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Emotion Classification", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C(z 1,p ,\u1e91 1, ) = z 1,p\u1e91 T 1, z 1,p \u1e91 1, ,", "eq_num": "(17)" } ], "section": "Emotion Classification", "sec_num": "5" }, { "text": "for any 1 \u2264 \u2264 L. Using 17, it is a simple matter to directly compute the relevance of the input text to each emotional category. 
It is important to note that word weighting is now implicitly taken into account by the LSM formalism.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Emotion Classification", "sec_num": "5" }, { "text": "In order to evaluate the latent affective framework described above, we used the data set that was developed for the SemEval 2007 task on \"Affective Text\" (Strapparava and Mihalcea, 2007) . This task was focused on the emotion classification of news headlines. Headlines typically consist of a few words and are often written by creative people with the intention to \"provoke\" emotions, and consequently attract the readers' attention. These characteristics make this kind of data particularly suitable for use in an automatic emotion recognition setting, as the affective/emotional features (if present) are guaranteed to appear in these short sentences. The test data accordingly consisted of 1,250 short news headlines 2 extracted from news web sites (such as Google news, CNN) and/or newspapers, and annotated along L = 6 emotions (ANGER, DISGUST, FEAR, JOY, SADNESS, and SURPRISE) by different evaluators.", "cite_spans": [ { "start": 155, "end": 187, "text": "(Strapparava and Mihalcea, 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "6" }, { "text": "For baseline purposes, we considered the following approaches: (i) a simple word accumulation system, which annotates the emotions in a text based on the presence of words from the WordNet-Affect lexicon; and (ii) three LSA-based systems implemented as in Fig. 1 , which only differ in the way each emotion is represented in the LSA space: either based on a specific word only (e.g., JOY), or the word plus its WordNet synset, or the word plus all Word-Net synsets labelled with that emotion in WordNet-Affect (cf. (Strapparava and Mihalcea, 2007) ). 
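Folding a new text into the domain space and scoring it against the embedded anchors, per (16)-(17), might look like the following sketch. The function name and the vector layout are illustrative assumptions; the weighted count vector d_p computed as in (1) is taken as given.

```python
import numpy as np

def classify_emotion(d_p, U1, S1, anchors_1, labels):
    """Score a new text against each affective anchor in the domain space.

    d_p: (M1,) weighted word-count vector for the new text, as in (1)
    U1, S1: left singular vectors and singular values from the SVD (2)
    anchors_1: (L, R1) embedded affective anchors
    labels: list of L emotion names
    """
    # Fold in: (16) implies v_{1,p} = d_p^T U1 S1^{-1}, then z_{1,p} = v_{1,p} S1
    v_1p = d_p @ U1 / S1
    z_1p = v_1p * S1
    # Cosine closeness (17) between the text and each anchor
    scores = {}
    for label, anchor in zip(labels, anchors_1):
        scores[label] = (z_1p @ anchor) / (
            np.linalg.norm(z_1p) * np.linalg.norm(anchor))
    return scores
```

The highest-scoring label (or all labels above a per-emotion threshold, for multi-label output) then gives the predicted emotional affinity of the text.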
In all three cases, the large corpus used for LSA processing was the Wall Street Journal text collection (Graff et al., 1995) , comprising about 86,000 articles.", "cite_spans": [ { "start": 515, "end": 547, "text": "(Strapparava and Mihalcea, 2007)", "ref_id": "BIBREF23" }, { "start": 656, "end": 676, "text": "(Graff et al., 1995)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 256, "end": 262, "text": "Fig. 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Experimental Evaluation", "sec_num": "6" }, { "text": "For the latent affective framework, we needed to select two separate training corpora. For the \"domain\" corpus, we selected a collection of about N 1 = 8, 500 relatively short English sentences (with a vocabulary of roughly M 1 = 12, 000 words) originally compiled for the purpose of a building a concatenative text-to-speech voice. Though not completely congruent with news headlines, we felt that the type and range of topics covered was close enough to serve as a good proxy for the domain. For the \"affective\" corpus, we relied on about N 2 = 5, 000 mood-annotated blog entries from LiveJournal.com, with a filtered 3 vocabulary of about M 2 = 20, 000 words. The indication of mood being explicitly specified when posting on LiveJournal, without particular coercion from the interface, moodannotated posts are likely to reflect the true mood of the blog authors (Strapparava and Mihalcea, 2008) . The moods were then mapped to the L = 6 emotions considered in the classification. Next, we formed the domain and affective matrices W 1 and W 2 and processed them as in (2) and (5). We used R 1 = 100 for the dimension of the domain space L 1 and R 2 = L = 6 for the dimension of the affective space L 2 . We then compared latent affective folding and embedding to the above systems. 
The results are summarized in Table I.
We address (i) by leveraging the latent topicality of two distinct corpora, as uncovered by a global LSM analysis of domain-oriented and emotion-oriented training documents. The two descriptions are then superimposed to produce the desired connection between all terms and emotional categories. Because this connection automatically takes into account the influence of the entire training corpora, it is more encompassing than that based on the relatively few affective terms typically considered in conventional processing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Empirical evidence gathered on the \"Affective Text\" portion of the SemEval-2007 corpus (Strapparava and Mihalcea, 2007) shows the effectiveness of the proposed strategy. Classification performance with latent affective embedding is slightly better than with latent affective folding, presumably because of its ability to more richly describe the affective space. Both techniques outperform standard LSA-based approaches, as well as affectively weighted word accumulation. This bodes well for the general deployability of latent affective processing across a wide range of applications.", "cite_spans": [ { "start": 67, "end": 103, "text": "SemEval-2007 corpus (Strapparava and", "ref_id": null }, { "start": 104, "end": 119, "text": "Mihalcea, 2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Future efforts will concentrate on characterizing the influence of the parameters R 1 and R 2 on the vector spaces L 1 and L 2 , and the corresponding trade-off between modeling power and generalization properties. 
It is also of interest to investigate how incorporating higher-level units (such as common lexical compounds) into the LSM procedure might further increase performance.
Human Language Technology and Empirical Methods in NLP, Vancouver, BC, 579-586.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Latent Semantic Mapping: A Data-Driven Framework for Modeling Global Relationships Implicit in Large Volumes of Data", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Bellegarda", "suffix": "" } ], "year": 2005, "venue": "IEEE Signal Processing Magazine", "volume": "22", "issue": "5", "pages": "70--80", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Bellegarda (2005), \"Latent Semantic Mapping: A Data-Driven Framework for Modeling Global Rela- tionships Implicit in Large Volumes of Data,\" IEEE Signal Processing Magazine, 22(5):70-80.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Latent Semantic Mapping: Principles & Applications", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Bellegarda", "suffix": "" } ], "year": 2008, "venue": "Synthesis Lectures on Speech and Audio Processing Series", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. Bellegarda (2008), Latent Semantic Mapping: Prin- ciples & Applications, Synthesis Lectures on Speech and Audio Processing Series, Fort Collins, CO: Mor- gan & Claypool.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Bellegarda", "suffix": "" }, { "first": "P", "middle": [ "V" ], "last": "Souza", "suffix": "" }, { "first": "A", "middle": [], "last": "Nadas", "suffix": "" }, { "first": "D", "middle": [], "last": "Nahamoo", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Picheny", "suffix": "" }, { "first": "L", "middle": [ "R" ], "last": "Bahl", "suffix": "" } ], "year": 1994, "venue": "IEEE Trans. Speech and Audio Processing", "volume": "2", "issue": "3", "pages": "413--420", "other_ids": {}, "num": null, "urls": [], "raw_text": "J.R. 
Bellegarda, P.V. de Souza, A. Nadas, D. Nahamoo, M.A. Picheny, and L.R. Bahl (1994), \"The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation,\" IEEE Trans. Speech and Audio Processing, 2(3):413-420.
Information Science, 41:391-407.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Facial Expression and Emotion", "authors": [ { "first": "P", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1993, "venue": "American Psychologist", "volume": "48", "issue": "4", "pages": "384--392", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Ekman (1993), \"Facial Expression and Emotion\", American Psychologist, 48(4), 384-392.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Fellbaum, Ed., (1998), WordNet: An Electronic Lexi- cal Database, Cambridge, MA: MIT Press.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "CSR-III Text", "authors": [ { "first": "R", "middle": [], "last": "Graff", "suffix": "" }, { "first": "D", "middle": [], "last": "Rosenfeld", "suffix": "" }, { "first": "", "middle": [], "last": "Paul", "suffix": "" } ], "year": 1995, "venue": "Linguistic Data Consortium, #LDC95T6", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Graff, R. Rosenfeld, and D. Paul (1995), \"CSR-III Text,\" Linguistic Data Consortium, #LDC95T6.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Language of Emotions: An Analysis of a Semantic Field", "authors": [ { "first": "P", "middle": [], "last": "Johnson-Laird", "suffix": "" }, { "first": "K", "middle": [], "last": "Oatley", "suffix": "" } ], "year": 1989, "venue": "Cognition and Emotion", "volume": "3", "issue": "", "pages": "81--123", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Johnson-Laird and K. 
Oatley (1989), \"The Language of Emotions: An Analysis of a Semantic Field,\" Cognition and Emotion, 3:81-123.
Mehrabian (1995), \"Framework for a Comprehensive Description and Measurement of Emotional States,\" Genetic, Social, and General Psychology Monographs, 121(3):339-361.
Lee (2008), \"Opinion Mining and Sentiment Analysis,\" in Foundations and Trends in Information Retrieval, 2(1-2):1-135.
Patel (2000), The Virtual University: The Internet and Resource-based Learning, London, UK: Kogan Page.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Expressing Degree of Activation in Synthetic Speech", "authors": [ { "first": "M", "middle": [], "last": "Schr\u00f6der", "suffix": "" } ], "year": 2006, "venue": "IEEE Trans. Audio, Speech, and Language Processing", "volume": "14", "issue": "4", "pages": "1128--1136", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Schr\u00f6der (2006), \"Expressing Degree of Activation in Synthetic Speech,\" IEEE Trans. Audio, Speech, and Language Processing, 14(4):1128-1136.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Thematic Text Analysis: New agendas for Analyzing Text Content", "authors": [ { "first": "P", "middle": [ "J" ], "last": "Stone", "suffix": "" } ], "year": 1997, "venue": "Text Analysis for the Social Sciences: Methods for Drawing Statistical Inferences from Texts and Transcripts", "volume": "", "issue": "", "pages": "35--54", "other_ids": {}, "num": null, "urls": [], "raw_text": "P.J. Stone (1997), \"Thematic Text Analysis: New agen- das for Analyzing Text Content,\" in Text Analysis for the Social Sciences: Methods for Drawing Statistical Inferences from Texts and Transcripts, C.W. Roberts, Ed., Mahwah, NJ: Lawrence Erlbaum Assoc. Publish- ers, 35-54.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "SemEval-2007 Task 14: Affective Text", "authors": [ { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2007, "venue": "Proc. 4th Int. Workshop on Semantic Evaluations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Strapparava and R. Mihalcea (2007), \"SemEval-2007 Task 14: Affective Text,\" in Proc. 4th Int. 
Workshop on Semantic Evaluations (SemEval 2007), Prague, Czech Republic.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning to Identify Emotions in Text", "authors": [ { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2008, "venue": "Proc. 2008 ACM Symposium on Applied Computing", "volume": "", "issue": "", "pages": "1556--1560", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Strapparava and R. Mihalcea (2008), \"Learning to Identify Emotions in Text,\" in Proc. 2008 ACM Sym- posium on Applied Computing, New York, NY, 1556- 1560.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "The Affective Weight of Lexicon", "authors": [ { "first": "C", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "A", "middle": [], "last": "Valitutti", "suffix": "" }, { "first": "O", "middle": [], "last": "Stock", "suffix": "" } ], "year": 2006, "venue": "Proc. 5th Int. Conf. Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Strapparava, A. Valitutti, and O. Stock (2006), \"The Affective Weight of Lexicon,\" in Proc. 5th Int. Conf. Language Resources and Evaluation (LREC), Lisbon, Portugal.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Affect Analysis of Text Using Fuzzy Semantic Typing", "authors": [ { "first": "P", "middle": [], "last": "Subasic", "suffix": "" }, { "first": "A", "middle": [], "last": "Huettner", "suffix": "" } ], "year": 2001, "venue": "IEEE Trans. Fuzzy Systems", "volume": "9", "issue": "4", "pages": "483--496", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Subasic and A. Huettner (2001), \"Affect Analysis of Text Using Fuzzy Semantic Typing,\" IEEE Trans. 
Fuzzy Systems, 9(4):483-496.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Dictionary of Affect in Language", "authors": [ { "first": "C", "middle": [ "M" ], "last": "Whissell", "suffix": "" } ], "year": 1989, "venue": "", "volume": "", "issue": "", "pages": "13--131", "other_ids": {}, "num": null, "urls": [], "raw_text": "C.M. Whissell (1989), \"The Dictionary of Affect in Lan- guage,\" in Emotion: Theory, Research, and Experi- ence, R. Plutchik and H. Kellerman, Eds., New York, NY: Academic Press, 13-131.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "uris": null, "text": "Typical LSA-Based Emotion Analysis.", "type_str": "figure" }, "FIGREF1": { "num": null, "uris": null, "text": "Proposed Latent Affective Framework.", "type_str": "figure" }, "FIGREF2": { "num": null, "uris": null, "text": "Emotion Analysis Using Latent Folding.", "type_str": "figure" }, "FIGREF3": { "num": null, "uris": null, "text": "Emotion Analysis Using Latent Embedding.", "type_str": "figure" }, "FIGREF4": { "num": null, "uris": null, "text": ".", "type_str": "figure" }, "TABREF0": { "content": "
Approach Considered | Precision (%) | Recall (%) | F-Measure (%) |
Baseline Word Accumulation | 44.7 | 2.4 | 4.6 |
LSA (Specific Word Only) | 11.5 | 65.8 | 19.6 |
LSA (With WordNet Synset) | 12.2 | 77.5 | 21.1 |
LSA (With All WordNet Synsets) | 11.4 | 89.6 | 20.3 |
Latent Affective Folding | 18.8 | 90.1 | 31.1 |
Latent Affective Embedding | 20.9 | 91.7 | 34.0 |