{ "paper_id": "W10-0205", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T05:02:24.037474Z" }, "title": "A Corpus-based Method for Extracting Paraphrases of Emotion Terms", "authors": [ { "first": "Fazel", "middle": [], "last": "Keshtkar", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ottawa Ottawa", "location": { "postCode": "K1N 6N5", "region": "ON", "country": "Canada" } }, "email": "akeshtka@site.uottawa.ca" }, { "first": "Diana", "middle": [], "last": "Inkpen", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Ottawa Ottawa", "location": { "postCode": "K1N 6N5", "region": "ON", "country": "Canada" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Since paraphrasing is one of the crucial tasks in natural language understanding and generation, this paper introduces a novel technique to extract paraphrases for emotion terms, from non-parallel corpora. We present a bootstrapping technique for identifying paraphrases, starting with a small number of seeds. Word-Net Affect emotion words are used as seeds. The bootstrapping approach learns extraction patterns for six classes of emotions. We use annotated blogs and other datasets as texts from which to extract paraphrases, based on the highest-scoring extraction patterns. The results include lexical and morpho-syntactic paraphrases, that we evaluate with human judges.", "pdf_parse": { "paper_id": "W10-0205", "_pdf_hash": "", "abstract": [ { "text": "Since paraphrasing is one of the crucial tasks in natural language understanding and generation, this paper introduces a novel technique to extract paraphrases for emotion terms, from non-parallel corpora. We present a bootstrapping technique for identifying paraphrases, starting with a small number of seeds. Word-Net Affect emotion words are used as seeds. The bootstrapping approach learns extraction patterns for six classes of emotions. 
We use annotated blogs and other datasets as texts from which to extract paraphrases, based on the highest-scoring extraction patterns. The results include lexical and morpho-syntactic paraphrases, which we evaluate with human judges.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Paraphrases are different ways to express the same information. Algorithms to extract and automatically identify paraphrases are of interest from both linguistic and practical points of view. Many major challenges in Natural Language Processing applications, for example multi-document summarization, need to avoid repetitive information from the input documents. In Natural Language Generation, paraphrasing is employed to create more varied and natural text. In our research, we extract paraphrases for emotions, with the goal of using them to automatically generate emotional texts (such as friendly or hostile texts) for conversations between intelligent agents and characters in educational games. Paraphrasing is applied to generate text with more variety. To our knowledge, most current applications manually collect paraphrases for specific applications, or they use lexical resources such as WordNet (Miller et al., 1993) to identify paraphrases.", "cite_spans": [ { "start": 909, "end": 930, "text": "(Miller et al., 1993)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper introduces a novel method for extracting paraphrases for emotions from texts. We focus on the six basic emotions proposed by Ekman (1992) : happiness, sadness, anger, disgust, surprise, and fear.", "cite_spans": [ { "start": 136, "end": 148, "text": "Ekman (1992)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We describe the construction of the paraphrase extractor. 
We also propose a k-window algorithm for selecting contexts that are used in the paraphrase extraction method. We automatically learn patterns that are able to extract the emotion paraphrases from corpora, starting with a set of seed words. We use datasets such as blogs and other annotated corpora, in which the emotions are marked. We use a large collection of non-parallel corpora, which are described in Section 3. These corpora contain many instances of paraphrases: different words used to express the same emotion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An example of sentence fragments for one emotion class, happiness, is shown in Table 1 . From them, the paraphrase pair that our method will extract is: \"so happy to see\" and \"very glad to visit\".", "cite_spans": [], "ref_spans": [ { "start": 79, "end": 86, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the following sections, we give an overview of related work on paraphrasing in Section 2. In Section 3 we describe the datasets used in this work. We explain the details of our paraphrase extraction method in Section 4. 
We present and discuss our evaluation results in Section 5, and finally in Section 6 we present the conclusions and future work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "his little boy was so happy to see him; princess and she were very glad to visit him. Table 1 : Two sentence fragments (candidate contexts) from the emotion class happy, from the blog corpus.", "cite_spans": [], "ref_spans": [ { "start": 84, "end": 91, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Three main approaches for collecting paraphrases were proposed in the literature: manual collection, utilization of existing lexical resources, and corpus-based extraction of expressions that occur in similar contexts (Barzilay and McKeown, 2001) . Manually collected paraphrases were used in natural language generation (NLG) (Iordanskaja et al., 1991) . Langkilde et al. (1998) used lexical resources in statistical sentence generation, summarization, and question answering. Barzilay and McKeown (2001) used a corpus-based method to identify paraphrases from a corpus of multiple English translations of the same source text. Our method is similar to this method, but it extracts paraphrases only for a particular emotion, and it needs only a regular corpus, not a parallel corpus of multiple translations.", "cite_spans": [ { "start": 217, "end": 245, "text": "(Barzilay and McKeown, 2001)", "ref_id": "BIBREF2" }, { "start": 325, "end": 351, "text": "(Iordanskaja et al., 1991)", "ref_id": "BIBREF8" }, { "start": 354, "end": 377, "text": "Langkilde et al. 
(1998)", "ref_id": "BIBREF10" }, { "start": 475, "end": 502, "text": "Barzilay and McKeown (2001)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Some research has been done in paraphrase extraction for natural language processing and generation for different applications. Das and Smith (2009) presented an approach to decide whether two sentences hold a paraphrase relationship. They applied a generative model that generates a paraphrase of a given sentence, then used probabilistic inference to reason about whether two sentences share the paraphrase relationship. In other research, Wang et al. (2009) studied the problem of extracting technical paraphrases from a parallel software corpus. Their aim was to report duplicate bugs. In their method for paraphrase extraction, they used sentence selection and global context-based and co-occurrence-based scoring. Also, some studies have been done in paraphrase generation in NLG (Zhao et al., 2009) , (Chevelu et al., 2009) . Bootstrapping methods have been applied to various natural language applications, for example to word sense disambiguation (Yarowsky, 1995) , lexicon construction for information extraction (Riloff and Jones, 1999) , and named entity classification (Collins and Singer, 1999) . In our research, we use the bootstrapping approach to learn paraphrases for emotions.", "cite_spans": [ { "start": 128, "end": 148, "text": "Das and Smith (2009)", "ref_id": "BIBREF6" }, { "start": 443, "end": 461, "text": "Wang et al. (2009)", "ref_id": "BIBREF21" }, { "start": 784, "end": 803, "text": "(Zhao et al., 2009)", "ref_id": "BIBREF23" }, { "start": 806, "end": 828, "text": "(Chevelu et al., 2009)", "ref_id": "BIBREF4" }, { "start": 954, "end": 970, "text": "(Yarowsky, 1995)", "ref_id": "BIBREF22" }, { "start": 1021, "end": 1045, "text": "(Riloff and Jones, 1999)", "ref_id": "BIBREF15" }, { "start": 1080, "end": 1106, "text": "(Collins and Singer, 1999)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "The text data from which we will extract paraphrases is composed of four concatenated datasets. They contain sentences annotated with the six basic emotions. The number of sentences in each dataset is presented in Table 2 . We briefly describe the datasets, as follows.", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 221, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3" }, { "text": "We used the blog corpus that Mishne collected for his research (Mishne, 2005) . The corpus contains 815,494 blog posts from Livejournal 1 , a free weblog service used by millions of people to create weblogs. In Livejournal, users are able to optionally specify their current emotion or mood. To select their emotion/mood, users can choose from a list of 132 provided moods. So, the data is annotated by the user who created the blog. We selected only the texts corresponding to the six emotions that we mentioned.", "cite_spans": [ { "start": 63, "end": 77, "text": "(Mishne, 2005)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "LiveJournal blog dataset", "sec_num": "3.1" }, { "text": "This dataset (Strapparava and Mihalcea, 2007) consists of newspaper headlines that were used in the SemEval-2007 Task 14. It includes a development dataset of 250 annotated headlines, and a test dataset of 1000 news headlines. We use all of them. 
The annotations were made with the six basic emotions on intensity scales of [-100, 100] ; therefore, a threshold is used to choose the main emotion of each sentence.", "cite_spans": [ { "start": 13, "end": 45, "text": "(Strapparava and Mihalcea, 2007)", "ref_id": "BIBREF17" }, { "start": 324, "end": 335, "text": "[-100, 100]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Text Affect Dataset", "sec_num": "3.2" }, { "text": "This dataset consists of 1580 annotated sentences (Alm et al., 2005) , from tales by the Grimm brothers, H.C. Andersen, and B. Potter. The annotations used the extended set of nine basic emotions of Izard (1971) . We selected only those marked with the six emotions that we focus on.", "cite_spans": [ { "start": 50, "end": 68, "text": "(Alm et al., 2005)", "ref_id": "BIBREF0" }, { "start": 199, "end": 211, "text": "Izard (1971)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Fairy Tales Dataset", "sec_num": "3.3" }, { "text": "We also used the dataset provided by Aman and Szpakowicz (2007) . Emotion-rich sentences were selected from personal blogs, and annotated with the six emotions (as well as a non-emotion class, which we ignore here). They worked with blog posts collected directly from the Web. First, they prepared a list of seed words for six basic emotion categories proposed by Ekman (1992) . Then, they took words commonly used in the context of a particular emotion. Table 2 : The number of emotion-annotated sentences in each dataset. LiveJournal 7705 1698 4758 1191 1191 3996; TextAffect 334 214 175 28 131 166; Fairy tales 445 264 216 217 113 165; Annotated blog dataset 536 173 115 115 172 179. 
Finally, they used the seed words for each category, and retrieved blog posts containing one or more of those words for the annotation process.", "cite_spans": [ { "start": 37, "end": 63, "text": "Aman and Szpakowicz (2007)", "ref_id": "BIBREF1" }, { "start": 624, "end": 636, "text": "Ekman (1992)", "ref_id": "BIBREF7" } ], "ref_spans": [ { "start": 301, "end": 496, "text": "LiveJournal 7705 1698 4758 1191 1191 3996 TextAffect 334 214 175 28 131 166 Fairy tales 445 264 216 217 113 165 Annotated blog dataset 536 173 115 115 172 179 Table 2", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Annotated Blog Dataset", "sec_num": "3.4" }, { "text": "For each of the six emotions, we run our method on the set of sentences marked with the corresponding emotion from the concatenated corpus. We start with a set of seed words from WordNet Affect (Strapparava and Valitutti, 2004) , for each emotion of interest. The number of seed words is the following: for happiness 395, for surprise 68, for fear 140, for disgust 50, for anger 250, and for sadness 200. Table 3 shows some of the seeds for each category of emotion.", "cite_spans": [ { "start": 194, "end": 227, "text": "(Strapparava and Valitutti, 2004)", "ref_id": null } ], "ref_spans": [ { "start": 405, "end": 412, "text": "Table 3", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Method for Paraphrase Extraction", "sec_num": "4" }, { "text": "Since sentences are different in our datasets and they are not aligned as parallel sentences as in (Barzilay and McKeown, 2001) , our algorithm constructs pairs of similar sentences, based on the local context. We assume that, if the contexts surrounding two seeds look similar, then these contexts are likely to help in extracting new paraphrases. Figure 1 illustrates the high-level architecture of our paraphrase extraction method. The input to the method is a text corpus for an emotion category and a manually defined list of seed words. 
Before bootstrapping starts, we run the k-window algorithm on every sentence in the corpus, in order to construct candidate contexts. In Section 4.5 we explain how the bootstrapping algorithm processes and selects the paraphrases based on strong surrounding contexts. As shown in Figure 1 , our method has several stages: extracting candidate contexts, using them to extract patterns, selecting the best patterns, extracting potential paraphrases, and filtering them to obtain the final paraphrases.", "cite_spans": [ { "start": 99, "end": 127, "text": "(Barzilay and McKeown, 2001)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 368, "end": 376, "text": "Figure 1", "ref_id": "FIGREF0" }, { "start": 847, "end": 855, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Method for Paraphrase Extraction", "sec_num": "4" }, { "text": "During preprocessing, HTML and XML tags are eliminated from the blog data and the other datasets; then the text is tokenized and annotated with part-of-speech tags. We use the Stanford part-of-speech tagger and chunker (Toutanova et al., 2003) to identify noun and verb phrases in the sentences. In the next step, we use a sliding window based on the k-window approach, to identify candidate contexts that contain the target seeds.", "cite_spans": [ { "start": 216, "end": 240, "text": "(Toutanova et al., 2003)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Preprocessing", "sec_num": "4.1" }, { "text": "We use the k-window algorithm introduced by Bostad (2003) in order to identify all the tokens surrounding a specific term in a window of size \u00b1k. Here, we use this approach to extract candidate patterns for each seed, from the sentences. We start with one seed and truncate all contexts around the seed within a window of \u00b1k words before and \u00b1k words after the seed, until all the seeds are processed. For these experiments, we set the value of k to \u00b15. 
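As an illustrative sketch (our own simplified Python, not the authors' code; the function and variable names are ours, and the PoS tagging and chunking steps are omitted), the k-window extraction of candidate contexts could look like this:

```python
# Simplified sketch of k-window candidate-context extraction (illustrative
# only; tagging/chunking from the paper's preprocessing step is omitted).
def k_window_contexts(tokens, seeds, k=5):
    # For every occurrence of a seed word, keep up to k tokens on each side.
    contexts = []
    for i, tok in enumerate(tokens):
        if tok.lower() in seeds:
            left = tuple(tokens[max(0, i - k):i])
            right = tuple(tokens[i + 1:i + 1 + k])
            contexts.append((left, tok, right))
    return contexts

tokens = 'his little boy was so happy to see him'.split()
print(k_window_contexts(tokens, {'happy'}))
```

Running this on the first sentence fragment of Table 1 yields the left context, the seed, and the right context of the single occurrence of "happy".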
Therefore, the longest candidate contexts will have the form", "cite_spans": [ { "start": 44, "end": 57, "text": "Bostad (2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "The k-window Algorithm", "sec_num": "4.2" }, { "text": "w1, w2, w3, w4, w5, seed, w6, w7, w8, w9, w10, w11.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The k-window Algorithm", "sec_num": "4.2" }, { "text": "Table 3 : Sample seed words for each category of emotion. Happiness: avidness, glad, warmheartedness, exalt, enjoy, comforting, joviality, amorous, joyful, like, cheer, adoring, fascinating, happy, impress, great, satisfaction, cheerful, charmed, romantic, joy, pleased, inspire, good, fulfill, gladness, merry. Sadness: poor, sorry, woeful, guilty, miserable, glooming, bad, grim, tearful, glum, mourning, joyless, sadness, blue, rueful, hamed, regret, hapless, regretful, dismay, dismal, misery, godforsaken, oppression, harass, dark, sadly, attrition. Anger: belligerence, envious, aggravate, resentful, abominate, murderously, greedy, hatred, disdain, envy, annoy, mad, jealousy, huffiness, sore, anger, harass, bother, enraged, hateful, irritating, hostile, outrage, devil, irritate, angry. Disgust: nauseous, sicken, foul, disgust, nausea, revolt, hideous, horror, detestable, wicked, repel, offensive, repulse, yucky, repulsive, queasy, obscene, noisome. Surprise: wondrous, amaze, gravel, marvel, fantastic, wonderful, surprising, marvelous, wonderment, astonish, wonder, admiration, terrific, dumfounded, trounce. Fear: fearful, apprehensively, anxiously, presage, horrified, hysterical, timidity, horrible, timid, fright, hesitance, affright, trepid, horrific, unassertive, apprehensiveness, hideous, scarey, cruel, panic, scared, terror, awful, dire, fear, dread, crawl, anxious, distrust, diffidence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The k-window Algorithm", "sec_num": "4.2" }, { "text": "In the next subsection, we explain what features we extract from each candidate context, to allow us to determine similar contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The k-window Algorithm", "sec_num": "4.2" }, { "text": "Previous research on word sense disambiguation and contextual analysis has acknowledged several local and topical features as good indicators of word properties. These include surrounding words and their part-of-speech tags, collocations, and keywords in contexts (Mihalcea, 2004) . More recently, other features have been proposed: bigrams, named entities, syntactic features, and semantic relations with other words in the context.", "cite_spans": [ { "start": 259, "end": 275, "text": "(Mihalcea, 2004)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.3" }, { "text": "We transfer the candidate phrases extracted by the sliding k-window into the vector space of features. 
We consider features that include both lexical and syntactic descriptions of the paraphrases for all pairs of candidates. The lexical features include the sequence of tokens for each phrase in the paraphrase pair; the syntactic feature consists of a sequence of part-of-speech (PoS) tags where equal words and words with the same root and PoS are marked. For example, the value of the syntactic feature for the pair ''so glad to see'' and ''very happy to visit'' is \"RB1 JJ1 TO VB1\" and \"RB1 JJ2 TO VB2\", where indices indicate word equalities. However, based on the above evidence and our previous research, we also investigate other features that are well suited for our goal. Table 5 lists the features that we used for paraphrase extraction. They include some term frequency features. As an example, in Table 4 we show extracted features from a relevant context.", "cite_spans": [], "ref_spans": [ { "start": 799, "end": 806, "text": "Table 5", "ref_id": "TABREF3" }, { "start": 927, "end": 934, "text": "Table 4", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.3" }, { "text": "From each candidate context, we extract the features described above. Then we learn extraction patterns, in which some words might be substituted by their part-of-speech. We use the seeds to build initial patterns. Two candidate contexts that contain the same seed create one positive example. By using each initial seed, we can extract all contexts surrounding these positive examples. Then we select the strongest ones. We used the Collins and Singer method (Collins and Singer, 1999) to compute the strength of each example. 
If we consider x as a context, its strength as a positive example is defined as:", "cite_spans": [ { "start": 460, "end": 486, "text": "(Collins and Singer, 1999)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Extracting Patterns", "sec_num": "4.4" }, { "text": "Strength(x) = count(x+)/count(x) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Patterns", "sec_num": "4.4" }, { "text": "In Equation 1, count(x+) is the number of times context x surrounded a seed in a positive example and count(x) is the frequency of the context x. This allows us to score the potential patterns.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extracting Patterns", "sec_num": "4.4" }, { "text": "Table 5 : The features used for paraphrase extraction. F1: Sequence of part-of-speech tags. F2: Length of sequence in bytes. F3: Number of tokens. F4: Sequence of PoS between the seed and the first verb before the seed. F5: Sequence of PoS between the seed and the first noun before the seed. F6: First verb before the seed. F7: First noun before the seed. F8: Token before the seed. F9: Seed. F10: Token after the seed. F11: First verb after the seed. F12: First noun after the seed. F13: Sequence of PoS between the seed and the first verb after the seed. F14: Sequence of PoS between the seed and the first noun after the seed. F15: Number", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Feature Extraction", "sec_num": "4.3" }, { "text": "Our bootstrapping algorithm is summarized in Figure 2. It starts with a set of seeds, which are considered initial paraphrases. A set of extraction patterns is initially empty. The algorithm generates candidate patterns from the aligned similar contexts. The candidate patterns are scored by how many paraphrases they can extract. Those with the highest scores are added to the set of extraction patterns. Using the extended set of extraction patterns, more paraphrase pairs are extracted and added to the set of paraphrases. Using the enlarged set of paraphrases, more extraction patterns are extracted. The process keeps iterating until no new patterns or no new paraphrases are learned.", "cite_spans": [], "ref_spans": [ { "start": 45, "end": 51, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Bootstrapping Algorithm for Paraphrase Extraction", "sec_num": "4.5" }, { "text": "Our method is able to accumulate a large lexicon of emotion phrases by bootstrapping from the manually initialized list of seed words. In each iteration, the paraphrase set is expanded with related phrases found in the corpus, which are filtered by using a measure of strong surrounding context similarity. The bootstrapping process starts by selecting a subset of the extraction patterns that aim to extract the paraphrases. 
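A minimal sketch of the strength score of Equation 1, in our own hypothetical Python (the function and variable names are ours, not the paper's): count(x+) counts how often a context surrounds a seed word, and count(x) counts all occurrences of that context.

```python
from collections import Counter

# Sketch of Equation 1: Strength(x) = count(x+)/count(x). Illustrative only;
# contexts are (left, word, right) triples as produced by a k-window.
def context_strengths(contexts, seeds):
    total = Counter()     # count(x): every occurrence of context x
    positive = Counter()  # count(x+): occurrences of x around a seed word
    for left, word, right in contexts:
        x = (left, right)
        total[x] += 1
        if word in seeds:
            positive[x] += 1
    return {x: positive[x] / total[x] for x in total}

contexts = [
    (('so',), 'happy', ('to', 'see')),
    (('so',), 'tired', ('to', 'see')),
    (('very',), 'glad', ('to', 'visit')),
]
print(context_strengths(contexts, {'happy', 'glad'}))
```

In this toy example, the context ('so', _, 'to see') surrounds a seed in one of its two occurrences (strength 0.5), while ('very', _, 'to visit') always surrounds a seed (strength 1.0).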
We call this set the pattern pool. The phrases extracted by these patterns become candidate paraphrases. They are filtered based on how many patterns select them, in order to produce the final paraphrases from the set of candidate paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Bootstrapping Algorithm for Paraphrase Extraction", "sec_num": "4.5" }, { "text": "The result of our algorithm is a set of extraction patterns and a set of pairs of paraphrases. Some of the paraphrases extracted by our system are shown in Table 6 . The paraphrases that are considered correct are shown under Correct paraphrases. As explained in the next section, two human judges agreed that these are acceptable paraphrases. The results considered incorrect by the two judges are shown under Incorrect paraphrases.", "cite_spans": [], "ref_spans": [ { "start": 156, "end": 163, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results and Evaluation", "sec_num": "5" }, { "text": "Algorithm 1: Bootstrapping Algorithm. For each seed for an emotion, loop until no more paraphrases or no more contexts are learned: 1- Locate the seeds in each sentence. 2- Find similar contexts surrounding a pair of two seeds. 3- Analyze all contexts surrounding the two seeds to extract the strongest patterns. 4- Use the new patterns to learn more paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Evaluation", "sec_num": "5" }, { "text": "Our algorithm learned 196 extraction patterns and produced 5926 pairs of paraphrases. Table 7 shows the number of extraction patterns and the number of paraphrase pairs that were produced by our algorithm for each class of emotions. For the evaluation of our algorithm, we use two techniques. The first uses human judges to assess whether a sample of paraphrases extracted by our method is correct; we also measure the agreement between the judges (See Section 5.1). The second estimates the recall and the precision of our method (See Section 5.2). 
In the following subsections, we describe these evaluations.", "cite_spans": [], "ref_spans": [ { "start": 400, "end": 407, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Results and Evaluation", "sec_num": "5" }, { "text": "We evaluate the correctness of the extracted paraphrase pairs, using the same method as Barzilay and McKeown (2001) . We randomly selected 600 paraphrase pairs from the lexical paraphrases produced by our algorithm: for each class of emotion we selected 100 paraphrase pairs. We evaluated their correctness with two human judges. They judged whether the two expressions are good paraphrases or not. We provided a page of guidelines for the judges. We defined paraphrase as \"approximate conceptual equivalence\", the same definition used in (Barzilay and McKeown, 2001 ). Each human judge had to choose a \"Yes\" or \"No\" answer for each pair of paraphrases under test. We did not include example sentences containing these paraphrases. A similar Machine Translation evaluation task for word-to-word translation was done in (Melamed, 2001) . Figure 3 presents the results of the evaluation: the correctness for each class of emotion according to judge A, and according to judge B. The judges were graduate students in computational linguistics, native speakers of English.", "cite_spans": [ { "start": 88, "end": 115, "text": "Barzilay and McKeown (2001)", "ref_id": null }, { "start": 539, "end": 566, "text": "(Barzilay and McKeown, 2001", "ref_id": "BIBREF2" }, { "start": 819, "end": 834, "text": "(Melamed, 2001)", "ref_id": "BIBREF11" } ], "ref_spans": [ { "start": 837, "end": 845, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Evaluating Correctness with Human Judges", "sec_num": "5.1" }, { "text": "We also measured the agreement between the two judges and the Kappa coefficient (Siegel and Castellan, 1988 ). 
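As a sketch of this computation (our own minimal implementation, not the authors' tooling), the observed agreement and Cohen's kappa for two judges' Yes/No decisions can be obtained as follows:

```python
# Sketch: observed agreement and Cohen's kappa for two judges' binary labels.
# Kappa discounts the agreement expected by chance from the judges' marginals.
def agreement_and_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_a = labels_a.count('Yes') / n   # judge A's marginal 'Yes' rate
    p_b = labels_b.count('Yes') / n   # judge B's marginal 'Yes' rate
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    kappa = (observed - expected) / (1 - expected)
    return observed, kappa

a = ['Yes', 'Yes', 'No', 'Yes', 'No', 'No']  # hypothetical toy labels
b = ['Yes', 'Yes', 'No', 'No', 'No', 'Yes']
obs, kappa = agreement_and_kappa(a, b)
print(obs, kappa)
```

The labels here are a hypothetical toy sample; the paper's 81.72% agreement and 74.41% kappa come from the 600 judged pairs.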
If there is complete agreement between the two judges, Kappa is 1, and if there is no agreement between the judges, then Kappa = 0. The Kappa values and the agreement values for our judges are presented in Figure 4 .", "cite_spans": [ { "start": 80, "end": 107, "text": "(Siegel and Castellan, 1988", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 311, "end": 319, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Evaluating Correctness with Human Judges", "sec_num": "5.1" }, { "text": "The inter-judge agreement over all the paraphrases for the six classes of emotions is 81.72%, which is 490 out of the 600 paraphrase pairs in our sample. Note that agreement includes the cases in which both judges accepted a pair and the cases in which both rejected it; that is why the numbers in Figure 4 are higher than the correctness numbers from Figure 3 . The Kappa coefficient compensates for the chance agreement. The Kappa value over all the paraphrase pairs is 74.41%, which shows significant agreement. ", "cite_spans": [], "ref_spans": [ { "start": 299, "end": 307, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 353, "end": 361, "text": "Figure 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Evaluating Correctness with Human Judges", "sec_num": "5.1" }, { "text": "Evaluating the Recall of our algorithm is difficult for the following reasons. Our algorithm is not able to cover all the English words; it can only detect paraphrasing relations with words which appeared in our corpus. Table 7 : The number of paraphrase pairs and extraction patterns produced by the algorithm for each class of emotion: Disgust 1125 12; Fear 1004 31; Anger 670 47; Happiness 1095 68; Sadness 1308 25; Surprise 724 13; Total 5926 196. Moreover, to compare directly with an electronic thesaurus such as WordNet is not feasible, because WordNet contains mostly synonym sets between words, and only a few multi-word expressions. 
We decided to estimate recall manually, by asking a human judge to extract paraphrases by hand from a sample of text. We randomly selected 60 texts (10 for each emotion class) and asked the judge to extract paraphrases from these sentences. For each emotion class, the judge extracted expressions that reflect the emotion, and then made pairs that were conceptually equivalent. It was not feasible to ask a second judge to do the same task, because the process is time-consuming and tedious. In Information Retrieval, Precision and Recall are defined in terms of a set of retrieved documents and a set of relevant documents 2 . In the following sections we describe how we compute the Precision and Recall for our algorithm compared to the manually extracted paraphrases. From the paraphrases that were extracted by the algorithm from the same texts, we counted how many of them were also extracted by the human judge. Equation 2 defines the Precision. On average, from 89 paraphrases extracted by the algorithm, 74 were identified as paraphrases by the human judge (84.23%). See Table 8 for the values for all the classes. P = (# correctly retrieved paraphrases by the algorithm) / (all paraphrases retrieved by the algorithm) (2)", "cite_spans": [], "ref_spans": [ { "start": 155, "end": 291, "text": "Disgust 1125 12 Fear 1004 31 Anger 670 47 Happiness 1095 68 Sadness 1308 25 Surprise 724 13 Total 5926 196 Table 7", "ref_id": "TABREF0" }, { "start": 1702, "end": 1709, "text": "Table 8", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Estimating Recall", "sec_num": "5.2" }, { "text": "For computing the Recall, we count how many of the paraphrases extracted by the human judge were correctly extracted by the algorithm (Equation 3). 
R = (# correctly retrieved paraphrases by the algorithm) / (all paraphrases retrieved by the human judge) (3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Class of Emotion # Paraphrases Pairs # Extraction Patterns", "sec_num": null }, { "text": "To the best of our knowledge, no similar research has been done on extracting paraphrases for emotion terms from corpora. However, Barzilay and McKeown (2001) did similar work on corpus-based identification of general paraphrases from multiple English translations of the same source text. We can compare the pros and cons of our method with theirs. The advantages are:", "cite_spans": [ { "start": 131, "end": 158, "text": "Barzilay and McKeown (2001)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Comparison to Related Work", "sec_num": "5.3" }, { "text": "\u2022 In our method, there is no requirement for the corpus to be parallel. Our algorithm uses the entire corpus together to construct its bootstrapping method, while in (Barzilay and McKeown, 2001) a parallel corpus is needed in order to detect positive contexts.", "cite_spans": [ { "start": 166, "end": 193, "text": "(Barzilay and McKeown, 2001", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Comparison to Related Work", "sec_num": "5.3" }, { "text": "\u2022 Since we construct the candidate contexts based on the k-window approach, there is no need for sentences to be aligned in our method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Comparison to Related Work", "sec_num": "5.3" }, { "text": "In (Barzilay and McKeown, 2001) sentence alignment is essential in order to recognize identical words and positive contexts.", "cite_spans": [ { "start": 3, "end": 30, "text": "(Barzilay and McKeown, 2001", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Comparison to Related Work", "sec_num": "5.3" }, {
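Equations 2 and 3 amount to set operations over the extracted paraphrase pairs. The sketch below is an assumed implementation (the function name and the order-insensitive pair normalization are our illustration, not the paper's code):

```python
def precision_recall(algorithm_pairs, judge_pairs):
    """Precision (Eq. 2) and Recall (Eq. 3) of the algorithm's paraphrase
    pairs, measured against the pairs extracted by the human judge."""
    # Normalize so that (a, b) and (b, a) count as the same paraphrase pair.
    algo = {frozenset(pair) for pair in algorithm_pairs}
    gold = {frozenset(pair) for pair in judge_pairs}
    correct = algo & gold  # pairs retrieved by both the algorithm and the judge
    precision = len(correct) / len(algo) if algo else 0.0
    recall = len(correct) / len(gold) if gold else 0.0
    return precision, recall
```

Both scores lie in [0, 1]; the normalization means a pair such as too depressing::so sad is matched regardless of the order in which the two expressions were listed.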
"text": "\u2022 The algorithm in (Barzilay and McKeown, 2001) has to find positive contexts first, then it looks for appropriate patterns to extract paraphrases. Therefore, if identical words do not occur in the aligned sentences, the algorithm fails to find positive contexts. But, our algorithm starts with given seeds that allow us to detect positive context with the k-window method.", "cite_spans": [ { "start": 19, "end": 47, "text": "(Barzilay and McKeown, 2001)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Discussion and Comparison to Related Work", "sec_num": "5.3" }, { "text": "A limitation of our method is the need for the initial seed words. However, obtaining these seed words is not a problem nowadays. They can be found in on line dictionaries, WordNet, and other lexical recourses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Discussion and Comparison to Related Work", "sec_num": "5.3" }, { "text": "In this paper, we introduced a method for corpusbased extraction of paraphrases for emotion terms. We showed a method that used a bootstrapping technique based on contextual and lexical features and is able to successfully extract paraphrases using a non-parallel corpus. We showed that a bootstrapping algorithm based on contextual surrounding context features of paraphrases achieves significant performance on our data set. In future work, we will extend this techniques to extract paraphrases from more corpora and for more types of emotions. In terms of evaluation, we will use the extracted paraphrases as features in machine learning classifiers that classify candidate sentences into classes of emotions. 
If the results of the classification are good, this means that the extracted paraphrases are of good quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "6" }, { "text": "http://www.livejournalinc.com", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Correct paraphrases: being a wicked::getting of evil; been rather sick::feeling rather nauseated; feels somewhat queasy::felt kind of sick; damn being sick::am getting sick Incorrect paraphrases: disgusting and vile::appealing and nauseated; get so sick::some truly disgusting Fear Correct paraphrases: was freaking scared::was quite frightened; just very afraid::just so scared; tears of fright::full of terror; freaking scary::intense fear; Incorrect paraphrases: serious panic attack::easily scared; not necessarily fear::despite your fear Anger Correct paraphrases: upset and angry::angry and pissed; am royally pissed::feeling pretty angry; made me mad::see me angry; do to torment::just to spite Incorrect paraphrases: very pretty annoying::very very angry; bitter and spite::tired and angry Happiness Correct paraphrases: the love of::the joy of; in great mood::in good condition; the joy of::the glad of; good feeling::good mood Incorrect paraphrases: as much eagerness::as many gladness; feeling smart::feel happy Sadness Correct paraphrases: too depressing::so sad; quite miserable::quite sorrowful; strangely unhappy::so misery; been really down::feel really sad Incorrect paraphrases: out of pity::out of misery; akward and depressing::terrible and gloomy Surprise Correct paraphrases: amazement at::surprised by; always wonder::always surprised; still astounded::still amazed; unexpected surprise::got shocked Incorrect paraphrases: passion and tremendous::serious and amazing; tremendous stress::huge shock Table 6 : Examples of paraphrases extracted by our algorithm (correctly and incorrectly).", 
"cite_spans": [], "ref_spans": [ { "start": 1521, "end": 1528, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Disgust", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emotions from text: machine learning for textbased emotion prediction", "authors": [ { "first": "Cecilia", "middle": [], "last": "Ovesdotter Alm", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Sproat", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Human Language Technology Conference Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: machine learning for text- based emotion prediction. In Proceedings of the Hu- man Language Technology Conference Conference on Empirical Methods in Natural Language Processing (HLT/EMNLP 2005).", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Identifying expressions of emotion in text", "authors": [ { "first": "Saima", "middle": [], "last": "Aman", "suffix": "" }, { "first": "Stan", "middle": [], "last": "Szpakowicz", "suffix": "" } ], "year": 2007, "venue": "TSD", "volume": "", "issue": "", "pages": "196--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saima Aman and Stan Szpakowicz. 2007. Identifying expressions of emotion in text. In TSD, pages 196- 205.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Extracting paraphrases from a parallel corpus", "authors": [ { "first": "Regina", "middle": [], "last": "Barzilay", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "Mckeown", "suffix": "" } ], "year": 2001, "venue": "Proceeding of ACL/EACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Regina Barzilay and Kathleen McKeown. 2001. 
Extracting paraphrases from a parallel corpus. In Proceedings of ACL/EACL, 2001, Toulouse.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Sentence Based Automatic Sentiment Classification", "authors": [ { "first": "Thorstein", "middle": [], "last": "Bostad", "suffix": "" } ], "year": 2003, "venue": "Computer Speech Text and Internet Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thorstein Bostad. 2003. Sentence Based Automatic Sentiment Classification. Ph.D. thesis, University of Cambridge, Computer Speech Text and Internet Technologies (CSTIT), Computer Laboratory, Jan.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Introduction of a new paraphrase generation tool based on Monte-Carlo sampling", "authors": [ { "first": "Jonathan", "middle": [], "last": "Chevelu", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Lavergne", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Lepage", "suffix": "" }, { "first": "Thierry", "middle": [], "last": "Moudenc", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL-IJCNLP 2009", "volume": "", "issue": "", "pages": "249--274", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan Chevelu, Thomas Lavergne, Yves Lepage, and Thierry Moudenc. 2009. Introduction of a new paraphrase generation tool based on Monte-Carlo sampling. 
In Proceedings of ACL-IJCNLP 2009, Singapore, pages 249-25.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Unsupervised models for named entity classification", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Yoram Singer. 1999. Unsupervised models for named entity classification. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Paraphrase identification as probabilistic quasi-synchronous recognition", "authors": [ { "first": "Dipanjan", "middle": [], "last": "Das", "suffix": "" }, { "first": "Noah", "middle": [ "A" ], "last": "Smith", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL-IJCNLP 2009", "volume": "", "issue": "", "pages": "468--476", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dipanjan Das and Noah A. Smith. 2009. Paraphrase identification as probabilistic quasi-synchronous recognition. In Proceedings of ACL-IJCNLP 2009, Singapore, pages 468-476.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "An argument for basic emotions", "authors": [ { "first": "Paul", "middle": [], "last": "Ekman", "suffix": "" } ], "year": 1992, "venue": "Cognition and Emotion", "volume": "6", "issue": "", "pages": "169--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Ekman. 1992. An argument for basic emotions. 
Cognition and Emotion, 6:169-200.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Natural Language Generation in Artificial Intelligence and Computational Linguistics", "authors": [ { "first": "L", "middle": [], "last": "Iordanskaja", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Kittredget", "suffix": "" }, { "first": "Alain", "middle": [], "last": "Polguere", "suffix": "" } ], "year": 1991, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. Iordanskaja, Richard Kittredget, and Alain Polguere. 1991. Natural Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "The Face of Emotion", "authors": [ { "first": "Carroll", "middle": [ "E" ], "last": "Izard", "suffix": "" } ], "year": 1971, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carroll E. Izard. 1971. The Face of Emotion. Appleton-Century-Crofts, New York.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Generation that exploits corpus-based statistical knowledge", "authors": [ { "first": "Irene", "middle": [], "last": "Langkilde", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 1998, "venue": "COLING-ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Irene Langkilde and Kevin Knight. 1998. Generation that exploits corpus-based statistical knowledge. In COLING-ACL.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Empirical Methods for Exploiting Parallel Texts", "authors": [ { "first": "Ilya Dan", "middle": [], "last": "Melamed", "suffix": "" } ], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ilya Dan Melamed. 2001. Empirical Methods for Exploiting Parallel Texts. 
MIT Press.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Co-training and self-training for word sense disambiguation", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2004, "venue": "Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea. 2004. Co-training and self-training for word sense disambiguation. In Natural Language Learning (CoNLL 2004), Boston, May.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Introduction to Wordnet: An On-Line Lexical Database", "authors": [ { "first": "George", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Beckwith", "suffix": "" }, { "first": "Christiane", "middle": [], "last": "Fellbaum", "suffix": "" }, { "first": "Derek", "middle": [], "last": "Gross", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Miller", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine Miller. 1993. Introduction to Wordnet: An On-Line Lexical Database. Cognitive Science Laboratory, Princeton University, August.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Experiments with mood classification in blog posts", "authors": [ { "first": "Gilad", "middle": [], "last": "Mishne", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gilad Mishne. 2005. Experiments with mood classification in blog posts. 
ACM SIGIR.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning dictionaries for information extraction by multi-level bootstrapping", "authors": [ { "first": "Ellen", "middle": [], "last": "Riloff", "suffix": "" }, { "first": "Rosie", "middle": [], "last": "Jones", "suffix": "" } ], "year": 1999, "venue": "Proceedings of the Sixteenth National Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the Sixteenth National Conference on Artificial Intelligence, pages 1044-1049. The AAAI Press/MIT Press.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Non Parametric Statistics for Behavioral Sciences", "authors": [ { "first": "Sidney", "middle": [], "last": "Siegel", "suffix": "" }, { "first": "John", "middle": [], "last": "Castellan", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sidney Siegel and John Castellan. 1988. Non Parametric Statistics for Behavioral Sciences. McGraw-Hill.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Semeval-2007 task 14: Affective text", "authors": [ { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 4th International Workshop on the Semantic Evaluations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlo Strapparava and Rada Mihalcea. 2007. Semeval-2007 task 14: Affective text. 
In Proceedings of the 4th International Workshop on the Semantic Evaluations (SemEval 2007), Prague, Czech Republic, June 2007.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Wordnet-affect: an affective extension of wordnet", "authors": [ { "first": "Carlo", "middle": [], "last": "Strapparava", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Valitutti", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004)", "volume": "", "issue": "", "pages": "1083--1086", "other_ids": {}, "num": null, "urls": [], "raw_text": "Carlo Strapparava and Alessandro Valitutti. 2004. Wordnet-affect: an affective extension of wordnet. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, May 2004, pages 1083-1086.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of HLT-NAACL", "volume": "", "issue": "", "pages": "252--259", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. 
In Proceedings of HLT-NAACL, pages 252-259.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Extracting paraphrases of technical terms from noisy parallel software corpora", "authors": [ { "first": "Xiaoyin", "middle": [], "last": "Wang", "suffix": "" }, { "first": "David", "middle": [], "last": "Lo", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Jiang", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Hong", "middle": [], "last": "Mei", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL-IJCNLP 2009", "volume": "", "issue": "", "pages": "197--200", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiaoyin Wang, David Lo, Jing Jiang, Lu Zhang, and Hong Mei. 2009. Extracting paraphrases of technical terms from noisy parallel software corpora. In Proceedings of ACL-IJCNLP 2009, Singapore, pages 197-200.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Unsupervised word sense disambiguation rivaling supervised methods", "authors": [ { "first": "David", "middle": [], "last": "Yarowsky", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "189--196", "other_ids": {}, "num": null, "urls": [], "raw_text": "David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. 
In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189-196.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Application-driven statistical paraphrase generation", "authors": [ { "first": "Shiqi", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Lan", "suffix": "" }, { "first": "Ting", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2009, "venue": "Proceedings of ACL-IJCNLP 2009", "volume": "", "issue": "", "pages": "834--842", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shiqi Zhao, Xiang Lan, Ting Liu, and Sheng Li. 2009. Application-driven statistical paraphrase generation. In Proceedings of ACL-IJCNLP 2009, Singapore, pages 834-842.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": "High-level view of the paraphrase extraction method.", "type_str": "figure" }, "FIGREF1": { "uris": null, "num": null, "text": "Candidate context: He was further annoyed by the jay bird 'PRP VBD RB VBN IN DT NN NN',65,8,'VBD RB',?,was, ?,?,?,He/PRP,was/VBD,further/RB,annoyed,by/IN,the/DT, jay/NN,bird/NN,?,?,jay,?,'IN DT NN',2,2,0,1", "type_str": "figure" }, "FIGREF2": { "uris": null, "num": null, "text": "Our bootstrapping algorithm for extracting paraphrases.", "type_str": "figure" }, "FIGREF3": { "uris": null, "num": null, "text": "The correctness results according to judge A and judge B, for each class of emotion.", "type_str": "figure" }, "FIGREF4": { "uris": null, "num": null, "text": "The Kappa coefficients and the agreement between the two human judges.", "type_str": "figure" }, "TABREF0": { "html": null, "content": "