{ "paper_id": "W11-0145", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:44:56.805500Z" }, "title": "Extracting Contextual Evaluativity", "authors": [ { "first": "Kevin", "middle": [], "last": "Reschke", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Santa Cruz" } }, "email": "kreschke@ucsc.edu" }, { "first": "Pranav", "middle": [], "last": "Anand", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of California", "location": { "settlement": "Santa Cruz" } }, "email": "panand@ucsc.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent work on evaluativity or sentiment in the language sciences has focused on the contributions that lexical items provide. In this paper, we discuss contextual evaluativity, stance that is inferred from lexical meaning and pragmatic environments. Focusing on assessor-grounding claims like We liked him because he so clearly disliked Margaret Thatcher, we build a corpus and construct a system employing compositional principles of evaluativity calculation to derive that we dislikes Margaret Thatcher. The resulting system has an F-score of 0.90 on our dataset, outperforming reasonable baselines, and indicating the viability of inferencing in the evaluative domain.", "pdf_parse": { "paper_id": "W11-0145", "_pdf_hash": "", "abstract": [ { "text": "Recent work on evaluativity or sentiment in the language sciences has focused on the contributions that lexical items provide. In this paper, we discuss contextual evaluativity, stance that is inferred from lexical meaning and pragmatic environments. Focusing on assessor-grounding claims like We liked him because he so clearly disliked Margaret Thatcher, we build a corpus and construct a system employing compositional principles of evaluativity calculation to derive that we dislikes Margaret Thatcher. The resulting system has an F-score of 0.90 on our dataset, outperforming reasonable baselines, and indicating the viability of inferencing in the evaluative domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A central aim of contemporary research on sentiment or evaluative language is the extraction of evaluative triples: evaluator, target, evaluation . To date, both formal (e.g., Martin and White 2005, Potts 2005 ) and computational approaches (e.g., Pang and Lee 2008) have focused on how such triples are lexically encoded (e.g., the negative affect of scoundrel or dislike). While lexical properties are a key source of evaluative information, word-based considerations alone can miss pragmatic inferences resulting from context. 
(1), for example, communicates that the referent of we bears not only positive stance towards the referent of him, but also negative stance towards Margaret Thatcher:", "cite_spans": [ { "start": 176, "end": 186, "text": "Martin and", "ref_id": "BIBREF6" }, { "start": 187, "end": 209, "text": "White 2005, Potts 2005", "ref_id": null }, { "start": 248, "end": 266, "text": "Pang and Lee 2008)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Contextual Evaluativity", "sec_num": "1" }, { "text": "(1) We liked him because he so clearly disliked Margaret Thatcher.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contextual Evaluativity", "sec_num": "1" }, { "text": "LEXICAL EVALUATIVITY: \u27e8we, him, +\u27e9; \u27e8he, M.T., -\u27e9. CONTEXTUAL EVALUATIVITY: \u27e8we, M.T., -\u27e9. This paper argues for a compositional approach to contextual evaluativity similar to the compositional methods adopted for lexical evaluativity in Moilanen and Pulman (2007) and Nasukawa and Yi (2003). At the heart of the approach is the treatment of verbal predicates (dislike in (1)) as evaluativity functors which relate argument/entity-level evaluativity to event-level evaluativity. As discussed in \u00a72, the utility of such a model surfaces in cases where the event-level evaluativity is known from context, and thus new information about the contextual evaluativity of the event participants (e.g., Margaret Thatcher) can be inferred. Consequently, the empirical focus of this paper is on structures like (1), where the second clause provides grounds for the sentiment encoded in the first, and hence has a predictable event-level evaluation from the first clause's evaluator. In \u00a73 we describe the collection and annotation of a corpus of such assessment-grounding configurations from large-scale web data. This annotated corpus serves as a test bed for experimental evaluation of various implementations of the proposed compositional approach. The results of these experiments (\u00a74) strongly support a compositional approach to contextual evaluativity inference. A simple compositional algorithm based on a small, manually created evaluativity functor lexicon demonstrated significantly better precision than non-compositional baselines. Moreover, a method for automatically expanding coverage to novel predicates based on similarity with the manually created lexicon is shown to increase recall dramatically with modest reduction in precision.", "cite_spans": [ { "start": 229, "end": 255, "text": "Moilanen and Pulman (2007)", "ref_id": "BIBREF7" }, { "start": 260, "end": 282, "text": "Nasukawa and Yi (2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Contextual Evaluativity", "sec_num": "1" }, { "text": "Evaluativity is concerned with determining private states (e.g., judgment or emotion) that a particular evaluator bears towards a target entity, event, or proposition. This may be represented as a three-place relation, R \u2286 D_e \u00d7 D_\u03b1 \u00d7 D_E, where \u03b1 is of variable type and E is the type of evaluative stance, assumed here to be binary. Lexical approaches to evaluativity (see Pang and Lee 2008 for a review) have focused on those relations that are determinable from word-internal meaning alone. For example, describing an event e as coddling gives rise to two triples: \u27e8AGENT(e), PATIENT(e), +\u27e9 and \u27e8SPEAKER, e, \u2212\u27e9. 1 These lexical inferences then become part of the feature set for classifying phrasal stance (e.g., the author's overall evaluativity in a sentence). A contrasting line of research (Moilanen and Pulman 2007, Nasukawa and Yi 2003) analyzes phrasal stance as a compositional product of the polarities toward event participants.
For example, the evaluative polarity of the speaker toward the event in (2a) is negatively correlated with the polarity toward the subject, and positively so in (2b).", "cite_spans": [ { "start": 485, "end": 498, "text": "(Moilanen and", "ref_id": "BIBREF7" }, { "start": 499, "end": 524, "text": "Pulman 2007, Nasukawa and", "ref_id": null }, { "start": 525, "end": 533, "text": "Yi 2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "(2) a. My {ally, enemy} was deprived of shelter. b. My {ally, enemy} was spared a dangerous mission.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "Compositional proposals rely on mapping each n-ary predicate P to an n-ary evaluativity functor E_P : D_E^n \u2192 D_E. Anand and Reschke (2011) argue that evaluativity functors largely group into classes, depending on whether the predicates in question entail final states of possession and/or affectedness. For example, the functors for predicates of withholding, including deprive and spare, are partial cases of the functor for lack (partiality reflects lexical idiosyncrasies about, e.g., deprivation and positive objects), as shown in Table 1.", "cite_spans": [ { "start": 93, "end": 117, "text": "Anand and Reschke (2011)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 513, "end": 520, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "While compositional systems are designed to compute phrasal stances bottom-up, their calculi straightforwardly allow inference to participant polarities as well, assuming knowledge of the event polarity and all but one participant. Consider the sentence He disliked Margaret Thatcher. By the evaluativity conditions in Table 1, E_dislike is positive iff the evaluator has negative evaluation of Thatcher. Thus, given knowledge of the event polarity, we can infer the evaluator's stance with respect to Thatcher. In (1), this information is provided by the preceding assessing clause (+, from E_like). As the second clause serves as grounds for the assessment in the first clause, the event described in the second clause is predictably also assessed as + by the evaluator we. In our experiments we exploited this construction in particular, but the general procedure does not require it (thus, for example, evaluative adverbs such as fortunately and regrettably could provide an additional construction type).
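In code, this reverse inference amounts to solving E(x, y) = e for the one unknown argument, given the event polarity e and the polarity of the other participant. The following Python fragment is a minimal sketch under our own naming conventions, not the paper's implementation:
```python
# Minimal sketch of reverse compositional inference (illustrative, not the
# paper's code). A functor maps (subject polarity, object polarity) to event
# polarity; '#' cells from Table 1 are simply absent from the mapping.
E_DISLIKE = {('+', '+'): '-', ('+', '-'): '+', ('-', '+'): '-', ('-', '-'): '+'}

def infer_object_polarity(functor, subj, event):
    """Solve functor(subj, obj) = event for obj; None if no unique solution."""
    solutions = [o for (s, o), e in functor.items() if s == subj and e == event]
    return solutions[0] if len(solutions) == 1 else None

# He disliked Margaret Thatcher: the evaluator holds subj = '+' (toward he)
# and assesses the dislike event as '+', so the object must be '-'.
assert infer_object_polarity(E_DISLIKE, '+', '+') == '-'
```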
This procedure is sketched for (1) below:", "cite_spans": [], "ref_spans": [ { "start": 319, "end": 326, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "(3) We liked [e_like] him because he so clearly disliked [e_dislike] Margaret Thatcher.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "LEXICAL EVALUATIVITY: \u27e8we, him, +\u27e9; \u27e8he, M.T., -\u27e9. PRAGMATIC INFERENCE: \u27e8we, e_dislike, +\u27e9 (e_dislike justifies \u27e8we, him, +\u27e9). COMPOSITIONAL INFERENCE: E_dislike(+, y) = + iff y = -; therefore, y is regarded as -, i.e., \u27e8we, M.T., -\u27e9. Note that for this application, we may simplify the compositional picture and treat functors as either preservers or reversers of the polarity of the object of interest, as is done in Moilanen and Pulman (2007) and Nasukawa and Yi (2003): preservers (such as verbs of liking) match the object polarity with the event polarity, and reversers negate it.", "cite_spans": [ { "start": 404, "end": 430, "text": "Moilanen and Pulman (2007)", "ref_id": "BIBREF7" }, { "start": 435, "end": 457, "text": "Nasukawa and Yi (2003)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "When the assessing clause evaluator is not affiliated with the speaker, this procedure can produce markedly different results from lexical markers (which often show speaker evaluativity). Thus, in (4), the speaker's assessment of Obama's cuts (indicated by the lexical much-needed) stands in sharp contrast with NASA's (determined by inference):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "(4) NASA got angry at Obama because he imposed some much-needed cuts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "LEXICAL EVALUATIVITY: \u27e8NASA, Obama, -\u27e9; \u27e8SPEAKER, some much-needed cuts, +\u27e9. CONTEXTUAL EVALUATIVITY: \u27e8NASA, some much-needed cuts, -\u27e9", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "The assessment-grounding configuration in (1) and (4) is highly productive. Behaviorally, implicit causality predicates (including predicates of assessment, as well as praise and scolding) are frequently understood by experimental subjects as describing an event involving the assessment target, especially when followed by because (Garvey and Caramazza, 1974; Koornneef and van Berkum, 2006). In addition, Somasundaran and Wiebe (2009) exploited a similar construction to gather reasons for people's product assessments from online reviews.
These together suggest that such constructions could be simultaneously high-precision sources for evaluativity inference and easily obtainable from large corpora.", "cite_spans": [ { "start": 332, "end": 359, "text": "(Garvey and Caramazza, 1974;", "ref_id": "BIBREF3" }, { "start": 360, "end": 391, "text": "Koornneef and van Berkum, 2006)", "ref_id": "BIBREF5" }, { "start": 407, "end": 436, "text": "Somasundaran and Wiebe (2009)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "A Framework For Inferring Contextual Polarity", "sec_num": "2" }, { "text": "We developed a corpus of assessment-grounding excerpts from documents across the web to evaluate the potential of the framework in \u00a72. 73 positive and 120 negative assessment predicates (like, adore, hate, loathe, etc.) were selected from the MPQA subjectivity lexicon (Wilson et al., 2005). These were expanded across inflectional variants to produce 826 assessment templates, half with explicit because (e.g., terrified by X because he), half without. These templates were filled with personal pronouns and the names of 26 prominent political figures and issued as web search queries to the Yahoo! Search API. 2 A total of 440,000 web document results were downloaded and processed using an 1152-core Sun Microsystems blade cluster. The relevant sentences from each document were extracted, and those under 80 characters in length were parsed using the Stanford Dependency Parser. 3 This produced 60,000 parsed assessment-grounding sentences, 6,000 of which (excluding duplicates) passed the additional criterion that the grounding clause should contain a verb with a direct object. This restriction ensured that each item in our corpus had a target for contextual polarity inference. An additional 3,300 cases were excluded because the target in the grounding clause shared possible coreference with the experiencer (subject) of the assessment clause. We avoided these coreferring cases because, from the perspective of a potential application, inferences about an experiencer's stance towards himself are less valuable than inferences about his stance towards others. Finally, the list was manually shortened to include only those sentences marked as assessment-grounding configurations according to two annotators (\u03ba = 0.82); the classification task of whether this pragmatic connection occurs is beyond the scope of this paper. 57% of the data was removed in this pass, 14% from tokens with because and 43% from tokens without. Implicit causality verbs not followed by because have been shown experimentally to give rise to a much weaker preference for justification (Au, 1986), and this is confirmed in our corpus search. The result of this procedure was a final corpus size of 1,160.", "cite_spans": [ { "start": 269, "end": 290, "text": "(Wilson et al., 2005)", "ref_id": "BIBREF13" }, { "start": 882, "end": 883, "text": "3", "ref_id": null }, { "start": 2072, "end": 2082, "text": "(Au, 1986)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Data Gathering and Annotation", "sec_num": "3" }
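, { "text": "As an illustration of this template-expansion step, a script along the following lines could generate the queries. The predicate and target lists here are tiny stand-ins for the MPQA predicates and 26 political figures, and the exact template shapes are our guesses rather than the paper's:
```python
# Illustrative sketch of assessment-template expansion (ours, schematic).
from itertools import product

ASSESSMENT_PREDICATES = ["liked", "adored", "hated", "loathed"]  # + inflections
TARGETS = ["him", "her", "Margaret Thatcher"]                    # pronouns + figures

def expand_templates(predicates, targets):
    queries = []
    for pred, target in product(predicates, targets):
        queries.append(f'"{pred} {target} because he"')  # with explicit because
        queries.append(f'"{pred} {target}" "he"')        # without
    return queries

queries = expand_templates(ASSESSMENT_PREDICATES, TARGETS)
print(len(queries), queries[0])  # 24 "liked him because he"
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Gathering and Annotation", "sec_num": "3" }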
, { "text": "The corpus was annotated for inferred contextual polarity. One of the authors and another annotator coded sentences for evaluator stance toward the object (+, -, unknown); agreement was high: \u03ba = 0.90. The 48 unresolved cases were adjudicated by a third annotator. 27 cases were uniformly judged unknown, involving predicates of change, disclosure (reveal, expose), and understanding (know). These were removed from the corpus, leaving 1,133 sentences for training and testing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Gathering and Annotation", "sec_num": "3" }, { "text": "Restricting ourselves to the assessment-grounding configuration discussed above, we treat contextual polarity inference as a binary classification problem with two inputs: the event-level polarity derived from the assessment clause (henceforth the INPUT EVENT polarity) and the main verb of the grounding clause (henceforth the FUNCTOR VERB). The goal of the classifier is to correctly predict the polarity of the target NP (the direct object of the functor verb) given these inputs. As mentioned in \u00a72, we may categorize the functor verbs in our lexicon into preservers and reversers. Two sources populate our lexicon. First, positively subjective verbs from the MPQA subjectivity lexicon were marked as preservers and negatively subjective verbs were marked as reversers (1249 verbs total). For example, E_dislike is a reverser. Second, 487 verbs were culled from FrameNet (Ruppenhofer et al., 2005) based on their membership in six entailment classes: verbs of injury, destruction, lacking, benefit, creation, and having. Class membership was determined by identifying 124 FrameNet frames aligning with one or more classes, then manually selecting from these frames verbs whose class membership was unambiguous. Verbs of benefit, creation, and having were marked as preservers. Verbs of injury, destruction, and lacking were marked as reversers (Table 2). Our system (SYS) classifies objects in context as follows: If the functor verb is a preserver, the target NP is assigned the same polarity as the input event polarity. If the functor verb is a reverser, the target NP is assigned the opposite of the input event polarity. This procedure is modulated by the presence of negation, as detected by a neg relation in the dependency parse. Under negation, a preserver acts like a reverser, and vice versa.
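 The decision rule is compact enough to state as code; the following Python fragment is our own rendering, with a toy stand-in for the MPQA- and FrameNet-derived lexicon:
```python
# Sketch of the SYS decision rule (our rendering, not the released system).
PRESERVERS = {"like", "reward", "tutor", "create", "have"}   # stand-in lexicon
REVERSERS = {"dislike", "hamper", "batter", "destroy", "lack"}
FLIP = {"+": "-", "-": "+"}

def classify_target(input_event_polarity, functor_verb, negated=False):
    """Predict the polarity of the functor verb's direct object."""
    if functor_verb in PRESERVERS:
        preserver = True
    elif functor_verb in REVERSERS:
        preserver = False
    else:
        return None                     # out of vocabulary: leave unclassified
    if negated:                         # a neg dependency flips the functor class
        preserver = not preserver
    return input_event_polarity if preserver else FLIP[input_event_polarity]

# (1): '+' input event from the assessment clause; dislike is a reverser,
# so the target (Margaret Thatcher) is classified '-'.
assert classify_target("+", "dislike") == "-"
```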
", "cite_spans": [ { "start": 843, "end": 869, "text": "(Ruppenhofer et al., 2005)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 1316, "end": 1325, "text": "(Table 2)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "System and Experimental Results", "sec_num": "4" }, { "text": "We tested the performance of this system (SYS) on our annotated corpus against three baselines. The first baseline (B-Functor) attempts to determine the importance of the input event to the calculation. It thus ignores the preceding context, and attempts to classify the target object from the functor verb directly, based on the verb's polarity in the MPQA subjectivity lexicon. It has poor precision and recall, 4 reflecting both the importance of the assessment context for object polarity and the fact that the functor verbs are often not lexically sentiment bearing (e.g., predicates of possession). The second baseline (B-Input), conversely, ignores the functor verb and uses the input event polarity as listed in the MPQA lexicon (modulo negation) for object classification. The purpose of this baseline is to approximate a classifier that predicts target polarity solely from the global/contextual polarity of the preceding clause. This has sharply increased precision, indicating contextual information's importance. The third baseline (B-MLE) picked the majority object class (+), and had the highest precision, indicating the general bias in our corpus for positive objects. Table 3 shows the performance (precision vs. recall) of our system compared to the three baselines. Its precision is significantly higher, but its F-score is limited by the lower coverage of our manually constructed lexicon. SYS-MPQA and SYS-Frame show the performance of the system when the functor lexicon is limited to the MPQA and FrameNet predicates, respectively. Both are high-precision sources of functor prediction, and pick out somewhat distinct predicates (given the recall gain of combining them). SYS+Maj and SYS+Sim are attempts to handle the low recall of SYS caused by functor verbs in the test data which aren't in the system's lexicon. SYS+Maj simply assigns these out-of-vocabulary verbs to the majority class: preservers. SYS+Sim classifies out-of-vocabulary verbs as preservers or reversers based on their relative similarity to the known preservers and reversers selected from FrameNet: an unknown verb is categorized as a preserver if its average similarity to preservers is greater than its average similarity to reversers. Similarity was determined according to the Jiang-Conrath distance measure (Jiang and Conrath, 1997), which is based on links in WordNet (Fellbaum, 1998). (Note: this process cannot apply to words not found in WordNet, e.g., misspellings; hence the less-than-perfect recall.) These two systems outperform all baselines, but have indistinguishable F-scores (if misspellings are excluded, SYS+Sim has a recall of 0.99 and F-score of 0.91).", "cite_spans": [ { "start": 2308, "end": 2333, "text": "(Jiang and Conrath, 1997)", "ref_id": "BIBREF4" }, { "start": 2368, "end": 2384, "text": "(Fellbaum, 1998)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 1186, "end": 1193, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "System and Experimental Results", "sec_num": "4" }, { "text": "Most of the precision errors incurred by our systems were syntactic: incorrect parsing, incorrect extraction of the object, or faulty negation handling (e.g., negative quantifiers or verbs). 26% of errors were due to word-sense ambiguity. The verbs spoil and own each have positive and negative uses (own can mean defeat), but only one sense was registered in the lexicon, leading to errors. The lion's share of these errors (22%) was due to the use of hate and similar expressions to convey jealousy (e.g., I was mad at him because he had both Boardwalk and Park Place). In these scenarios, although the assessment is negative, the event-level polarity of the grounding clause event type is positive (because it is desired), a fact which our current system cannot handle. One way forward would be to apply WSD techniques to distinguish jealous from non-jealous uses of predicates of dislike.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System and Experimental Results", "sec_num": "4" }
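, { "text": "To make the SYS+Sim step concrete, the sketch below shows one way to implement the preserver/reverser assignment for out-of-vocabulary verbs using NLTK's WordNet interface. The seed lists and the best-synset-pair averaging are our own simplifications; the paper specifies Jiang-Conrath similarity over WordNet but not this code:
```python
# Sketch of SYS+Sim-style out-of-vocabulary handling (our approximation).
# Requires the NLTK data packages 'wordnet' and 'wordnet_ic'.
from nltk.corpus import wordnet as wn, wordnet_ic

brown_ic = wordnet_ic.ic('ic-brown.dat')
KNOWN_PRESERVERS = ["reward", "tutor", "create"]   # stand-ins for FrameNet lists
KNOWN_REVERSERS = ["batter", "destroy", "deprive"]

def avg_similarity(verb, seeds):
    """Average, over seeds, of the best Jiang-Conrath score across synset pairs."""
    scores = []
    for seed in seeds:
        best = 0.0
        for s1 in wn.synsets(verb, pos=wn.VERB):
            for s2 in wn.synsets(seed, pos=wn.VERB):
                try:
                    best = max(best, s1.jcn_similarity(s2, brown_ic))
                except Exception:   # some synset pairs are incomparable
                    pass
        scores.append(best)
    return sum(scores) / len(scores)

def classify_unknown_verb(verb):
    if not wn.synsets(verb, pos=wn.VERB):
        return None                 # e.g., misspellings: leave unclassified
    p = avg_similarity(verb, KNOWN_PRESERVERS)
    r = avg_similarity(verb, KNOWN_REVERSERS)
    return \"preserver\" if p > r else \"reverser\"
```
", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "System and Experimental Results", "sec_num": "4" }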
, { "text": "We have described a system for the extraction of what we termed contextual evaluativity: evaluations of objects that arise from pragmatic inference in context. This system, once we incorporate procedures to automatically infer evaluativity functor class, significantly outperforms reasonable baselines on a corpus of assessor-grounding extracts from web documents. The system operates by running a compositional approach to phrasal evaluativity in reverse, and thus illustrates the potential computational value of such treatments of evaluativity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Here, we simplify regarding potential evaluators outside of the speaker.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "http://developer.yahoo.com/search/ 3 http://nlp.stanford.edu/software/lex-parser.shtml", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Low recall occurs when items are left unclassified due to out-of-vocabulary functor verbs. Low precision occurs when a + item is classified as - or vice versa.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Verb classes as evaluativity functor classes", "authors": [ { "first": "P", "middle": [], "last": "Anand", "suffix": "" }, { "first": "K", "middle": [], "last": "Reschke", "suffix": "" } ], "year": 2011, "venue": "Interdisciplinary Workshop on Verbs", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anand, P. and K. Reschke (2011). Verb classes as evaluativity functor classes. In Interdisciplinary Workshop on Verbs. To appear.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A verb is worth a thousand words", "authors": [ { "first": "T", "middle": [ "K" ], "last": "Au", "suffix": "" } ], "year": 1986, "venue": "Journal of Memory and Language", "volume": "25", "issue": "", "pages": "104--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Au, T. K. (1986). A verb is worth a thousand words. Journal of Memory and Language 25, 104-122.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "WordNet: An Electronic Lexical Database", "authors": [ { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fellbaum, C. (1998). WordNet: An Electronic Lexical Database. MIT Press.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Implicit causality in verbs", "authors": [ { "first": "C", "middle": [], "last": "Garvey", "suffix": "" }, { "first": "A", "middle": [], "last": "Caramazza", "suffix": "" } ], "year": 1974, "venue": "Linguistic Inquiry", "volume": "5", "issue": "", "pages": "459--484", "other_ids": {}, "num": null, "urls": [], "raw_text": "Garvey, C. and A. Caramazza (1974). Implicit causality in verbs. Linguistic Inquiry 5, 459-484.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Semantic similarity based on corpus statistics and lexical taxonomy", "authors": [ { "first": "J", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "C", "middle": [], "last": "Conrath", "suffix": "" } ], "year": 1997, "venue": "Proceedings of International Conference on Research in Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiang, J. and C. Conrath (1997). Semantic similarity based on corpus statistics and lexical taxonomy.
In Proceedings of the International Conference on Research in Computational Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "On the use of verb-based implicit causality in sentence comprehension", "authors": [ { "first": "A", "middle": [ "W" ], "last": "Koornneef", "suffix": "" }, { "first": "J", "middle": [ "J A" ], "last": "Van Berkum", "suffix": "" } ], "year": 2006, "venue": "Journal of Memory and Language", "volume": "54", "issue": "", "pages": "445--465", "other_ids": {}, "num": null, "urls": [], "raw_text": "Koornneef, A. W. and J. J. A. van Berkum (2006). On the use of verb-based implicit causality in sentence comprehension. Journal of Memory and Language 54, 445-465.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "The Language of Evaluation: Appraisal in English", "authors": [ { "first": "J", "middle": [ "R" ], "last": "Martin", "suffix": "" }, { "first": "P", "middle": [ "R R" ], "last": "White", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin, J. R. and P. R. R. White (2005). The Language of Evaluation: Appraisal in English. Palgrave Macmillan.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sentiment composition", "authors": [ { "first": "K", "middle": [], "last": "Moilanen", "suffix": "" }, { "first": "S", "middle": [], "last": "Pulman", "suffix": "" } ], "year": 2007, "venue": "Proceedings of RANLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Moilanen, K. and S. Pulman (2007). Sentiment composition. In Proceedings of RANLP 2007.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Sentiment analysis: Capturing favorability using natural language processing", "authors": [ { "first": "T", "middle": [], "last": "Nasukawa", "suffix": "" }, { "first": "J", "middle": [], "last": "Yi", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 2nd international conference on Knowledge capture", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nasukawa, T. and J. Yi (2003). Sentiment analysis: Capturing favorability using natural language processing. In Proceedings of the 2nd International Conference on Knowledge Capture.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Opinion mining and sentiment analysis", "authors": [ { "first": "B", "middle": [], "last": "Pang", "suffix": "" }, { "first": "L", "middle": [], "last": "Lee", "suffix": "" } ], "year": 2008, "venue": "Foundations and Trends in Information Retrieval", "volume": "2", "issue": "1-2", "pages": "1--135", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pang, B. and L. Lee (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval 2(1-2), 1-135.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "The Logic of Conventional Implicature", "authors": [ { "first": "C", "middle": [], "last": "Potts", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Potts, C. (2005). The Logic of Conventional Implicature.
Oxford University Press.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "FrameNet II: Extended theory and practice", "authors": [ { "first": "J", "middle": [], "last": "Ruppenhofer", "suffix": "" }, { "first": "M", "middle": [], "last": "Ellsworth", "suffix": "" }, { "first": "M", "middle": [ "R L" ], "last": "Petruck", "suffix": "" }, { "first": "C", "middle": [ "R" ], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ruppenhofer, J., M. Ellsworth, M. R. L. Petruck, and C. R. Johnson (2005). FrameNet II: Extended theory and practice. Technical report, ICSI.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Recognizing stances in online debates", "authors": [ { "first": "S", "middle": [], "last": "Somasundaran", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL-47", "volume": "", "issue": "", "pages": "226--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Somasundaran, S. and J. Wiebe (2009). Recognizing stances in online debates. In Proceedings of ACL-47, pp. 226-234.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Recognizing contextual polarity in phrase-level sentiment analysis", "authors": [ { "first": "T", "middle": [], "last": "Wilson", "suffix": "" }, { "first": "J", "middle": [], "last": "Wiebe", "suffix": "" }, { "first": "P", "middle": [], "last": "Hoffmann", "suffix": "" } ], "year": 2005, "venue": "Proceedings of EMNLP-05", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wilson, T., J. Wiebe, and P. Hoffmann (2005). Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of EMNLP-05.", "links": null } }, "ref_entries": { "TABREF0": { "text": "Evaluativity functors for verbs of having, withholding, disliking, and liking", "content": "
x | y | E_have | E_lack | E_wthhld | E_dprv | E_spr | E_dislike | E_like
+ | + |   +    |   -    |    -     |   -    |   #   |     -     |   +
+ | - |   -    |   +    |    +     |   #    |   +   |     +     |   -
- | + |   -    |   +    |    +     |   +    |   #   |     -     |   +
- | - |   +    |   -    |    -     |   #    |   -   |     +     |   -
(x have/lack y; a withhold/deprive/spare x of y; x dislike/like y)
", "html": null, "num": null, "type_str": "table" }, "TABREF1": { "text": "Examples of verbs marked as preserver/reverser and their sources", "content": "
EXAMPLE | CLASS               | SOURCE
reward  | preserver           | MPQA subj. lex.
hamper  | reverser            | MPQA subj. lex.
tutor   | preserver (benefit) | FrameNet
batter  | reverser (injury)   | FrameNet
", "html": null, "num": null, "type_str": "table" }, "TABREF2": { "text": "Performance of systems and baselines for contextual evaluativity classification", "content": "
SYSTEM    | PREC. | RECALL | F-SCORE
B-Functor | 0.39  | 0.24   | 0.30
B-Input   | 0.69  | 1.0    | 0.82
B-MLE     | 0.75  | 1.0    | 0.86
SYS       | 0.88  | 0.57   | 0.69
SYS-MPQA  | 0.88  | 0.24   | 0.38
SYS-Frame | 0.89  | 0.41   | 0.56
SYS+Maj   | 0.82  | 1.0    | 0.90
SYS+Sim   | 0.84  | 0.97   | 0.90
", "html": null, "num": null, "type_str": "table" } } } }