|
{ |
|
"paper_id": "W07-0104", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:38:02.111836Z" |
|
}, |
|
"title": "Active Learning for the Identification of Nonliteral Language *", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Birke", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Simon Fraser University Burnaby", |
|
"location": { |
|
"postCode": "V5A 1S6", |
|
"region": "BC", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Simon Fraser University Burnaby", |
|
"location": { |
|
"postCode": "V5A 1S6", |
|
"region": "BC", |
|
"country": "Canada" |
|
} |
|
}, |
|
"email": "" |
|
}
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this paper we present an active learning approach used to create an annotated corpus of literal and nonliteral usages of verbs. The model uses nearly unsupervised word-sense disambiguation and clustering techniques. We report on experiments in which a human expert is asked to correct system predictions in different stages of learning: (i) after the last iteration when the clustering step has converged, or (ii) during each iteration of the clustering algorithm. The model obtains an f-score of 53.8% on a dataset in which literal/nonliteral usages of 25 verbs were annotated by human experts. In comparison, the same model augmented with active learning obtains 64.91%. We also measure the number of examples required when model confidence is used to select examples for human correction as compared to random selection. The results of this active learning system have been compiled into a freely available annotated corpus of literal/nonliteral usage of verbs in context.", |
|
"pdf_parse": { |
|
"paper_id": "W07-0104", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this paper we present an active learning approach used to create an annotated corpus of literal and nonliteral usages of verbs. The model uses nearly unsupervised word-sense disambiguation and clustering techniques. We report on experiments in which a human expert is asked to correct system predictions in different stages of learning: (i) after the last iteration when the clustering step has converged, or (ii) during each iteration of the clustering algorithm. The model obtains an f-score of 53.8% on a dataset in which literal/nonliteral usages of 25 verbs were annotated by human experts. In comparison, the same model augmented with active learning obtains 64.91%. We also measure the number of examples required when model confidence is used to select examples for human correction as compared to random selection. The results of this active learning system have been compiled into a freely available annotated corpus of literal/nonliteral usage of verbs in context.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In this paper, we propose a largely automated method for creating an annotated corpus of literal vs. nonliteral usages of verbs. For example, given the verb \"pour\", we would expect our method to identify the sentence \"Custom demands that cognac be poured from a freshly opened bottle\" as literal, and the sentence \"Salsa and rap music pour out of the windows\" as nonliteral, which, indeed, it does.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "We reduce the problem of nonliteral language recognition to one of word-sense disambiguation (WSD) by redefining literal and nonliteral as two different senses of the same word, and we adapt an existing similarity-based word-sense disambiguation method to the task of separating usages of verbs into literal and nonliteral clusters. Note that treating this task as similar to WSD only means that we use features from the local context around the verb to identify it as either literal or non-literal. It does not mean that we can use a classifier trained on WSD annotated corpora to solve this issue, or use any existing WSD classification technique that relies on supervised learning. We do not have any annotated data to train such a classifier, and indeed our work is focused on building such a dataset. Indeed our work aims to first discover reliable seed data and then bootstrap a literal/nonliteral identification model. Also, we cannot use any semi-supervised learning algorithm for WSD which relies on reliably annotated seed data since we do not possess any reliably labeled data (except for our test data set). However we do exploit a noisy source of seed data in a nearly unsupervised approach augmented with active learning. Noisy data containing example sentences of literal and nonliteral usage of verbs is used in our model to cluster a particular instance of a verb into one class or the other. This paper focuses on the use of active learning using this model. We suggest that this approach produces a large saving of effort compared to creating such an annotated corpus manually.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "An active learning approach to machine learning is one in which the learner has the ability to influence the selection of at least a portion of its training data. In our approach, a clustering algorithm for literal/nonliteral recognition tries to annotate the examples that it can, while in each iteration it sends a small set of examples to a human expert to annotate, which in turn provides additional benefit to the bootstrapping process. Our active learn-ing method is similar to the Uncertainty Sampling algorithm of (Lewis & Gale, 1994) but in our case interacts with iterative clustering. As we shall see, some of the crucial criticisms leveled against uncertainty sampling and in favor of Committee-based sampling (Engelson & Dagan, 1996) do not apply in our case, although the latter may still be more accurate in our task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 722, |
|
"end": 746, |
|
"text": "(Engelson & Dagan, 1996)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "For the purposes of this paper we will take the simplified view that literal is anything that falls within accepted selectional restrictions (\"he was forced to eat his spinach\" vs. \"he was forced to eat his words\") or our knowledge of the world (\"the sponge absorbed the water\" vs. \"the company absorbed the loss\"). Nonliteral is then anything that is \"not literal\", including most tropes, such as metaphors, idioms, as well as phrasal verbs and other anomalous expressions that cannot really be seen as literal. We aim to automatically discover the contrast between the standard set of selectional restrictions for the literal usage of verbs and the non-standard set which we assume will identify the nonliteral usage.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Our identification model for literal vs. nonliteral usage of verbs is described in detail in a previous publication (Birke & Sarkar, 2006 ). Here we provide a brief description of the model so that the use of this model in our proposed active learning approach can be explained.", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 137, |
|
"text": "(Birke & Sarkar, 2006", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Since we are attempting to reduce the problem of literal/nonliteral recognition to one of word-sense disambiguation, we use an existing similarity-based word-sense disambiguation algorithm developed by (Karov & Edelman, 1998) , henceforth KE. The KE algorithm is based on the principle of attraction: similarities are calculated between sentences containing the word we wish to disambiguate (the target word) and collections of seed sentences (feedback sets). It requires a target set -the set of sentences containing the verbs to be classified into literal or nonliteral -and the seed sets: the literal feedback set and the nonliteral feedback set. A target set sentence is considered to be attracted to the feedback set containing the sentence to which it shows the highest similarity. Two sentences are similar if they contain similar words and two words are similar if they are contained in similar sentences. The resulting transitive similarity allows us to defeat the knowledge acquisition bottleneck -i.e. the low likelihood of finding all possible usages of a word in a single corpus. Note that the KE algorithm concentrates on similarities in the way sentences use the target literal or nonliteral word, not on similarities in the meanings of the sentences themselves.", |
|
"cite_spans": [ |
|
{ |
|
"start": 202, |
|
"end": 225, |
|
"text": "(Karov & Edelman, 1998)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Algorithms 1 and 2 summarize our approach. Note that p(w, s) is the unigram probability of word w in sentence s, normalized by the total number of words in s. We omit some details about the algorithm here which do not affect our discussion about active learning. These details are provided in a previous publication (Birke & Sarkar, 2006) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 316, |
|
"end": 338, |
|
"text": "(Birke & Sarkar, 2006)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "As explained before, our model requires a target set and two seed sets: the literal feedback set and the nonliteral feedback set. We do not explain the details of how these feedback sets were constructed in this paper, however, it is important to note that the feedback sets themselves are noisy and not carefully vetted by human experts. The literal feedback set was built from WSJ newswire text, and for the nonliteral feedback set, we use expressions from various datasets such as the Wayne Magnuson English Idioms Sayings & Slang and George Lakoff's Conceptual Metaphor List, as well as example sentences from these sources. These datasets provide lists of verbs that may be used in a nonliteral usage, but we cannot explicitly provide only those sentences that contain nonliteral use of that verb in the nonliteral feedback set. In particular, knowing that an expression can be used nonliterally does not mean that you can tell when it is being used nonliterally. In fact even the literal feedback set has noise from nonliteral uses of verbs in the news articles. To deal with this issue (Birke & Sarkar, 2006) provides automatic methods to clean up the feedback sets during the clustering algorithm. Note that the feedback sets are not cleaned up by human experts, however the test data is carefully annotated by human experts (details about inter-annotator agreement on the test set are provided below). The test set is not large enough to be split up into a training and test set that can support learning using a supervised learning method.", |
|
"cite_spans": [ |
|
{ |
|
"start": 1093, |
|
"end": 1115, |
|
"text": "(Birke & Sarkar, 2006)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The sentences in the target set and feedback sets were augmented with some shallow syntactic information such as part of speech tags provided Algorithm 1 KE-train: (Karov & Edelman, 1998) algorithm adapted to literal/nonliteral identification Require: S: the set of sentences containing the target word (each sentence is classified as literal/nonliteral) Require: L: the set of literal seed sentences Require: N : the set of nonliteral seed sentences Require: W: the set of words/features, w \u2208 s means w is in sentence s, s w means s contains w Require: : threshold that determines the stopping condition", |
|
"cite_spans": [ |
|
{ |
|
"start": 164, |
|
"end": 187, |
|
"text": "(Karov & Edelman, 1998)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{

"text": "1: w-sim_0(w_x, w_y) := 1 if w_x = w_y, 0 otherwise. 2: s-sim^I_0(s_x, s_y) := 1, for all s_x, s_y \u2208 S \u00d7 S where s_x = s_y, 0 otherwise. 3: i := 0. 4: while (true) do 5: s-sim^L_{i+1}(s_x, s_y) := \u2211_{w_x \u2208 s_x} p(w_x, s_x) max_{w_y \u2208 s_y} w-sim_i(w_x, w_y), for all s_x, s_y \u2208 S \u00d7 L. 6: s-sim^N_{i+1}(s_x, s_y) := \u2211_{w_x \u2208 s_x} p(w_x, s_x) max_{w_y \u2208 s_y} w-sim_i(w_x, w_y), for all s_x, s_y \u2208 S \u00d7 N. 7: for w_x, w_y \u2208 W \u00d7 W do 8: w-sim_{i+1}(w_x, w_y) := \u2211_{s_x \u220b w_x} p(w_x, s_x) max_{s_y \u220b w_y} s-sim^I_i(s_x, s_y) if i = 0, else \u2211_{s_x \u220b w_x} p(w_x, s_x) max_{s_y \u220b w_y} max{s-sim^L_i(s_x, s_y), s-sim^N_i(s_x, s_y)}. 9: end for. 10: if \u2200 w_x, max_{w_y} {w-sim_{i+1}(w_x, w_y) \u2212 w-sim_i(w_x, w_y)} \u2264 \u03b5 then 11: break # algorithm converges in 1/\u03b5 steps. 12: end if. 13: i := i + 1. 14: end while.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Literal vs. Nonliteral Identification",

"sec_num": "2"

},
|
{ |
|
"text": "i := i + 1 14: end while by a statistical tagger (Ratnaparkhi, 1996) and Su-perTags (Bangalore & Joshi, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 68, |
|
"text": "(Ratnaparkhi, 1996)", |
|
"ref_id": "BIBREF15" |
|
}, |
|
{ |
|
"start": 84, |
|
"end": 109, |
|
"text": "(Bangalore & Joshi, 1999)", |
|
"ref_id": "BIBREF0" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
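
{

"text": "To make the similarity computations in Algorithms 1 and 2 concrete, the following short Python sketch (an illustrative sketch, not the original implementation; the function names, the toy sentences, and the identity word-similarity table are assumptions) implements the unigram weight p(w, s), the sentence-to-feedback-set attraction of lines 5-6 of KE-train, and the KE-test decision rule:\n\ndef p(w, s):\n    # unigram probability of word w in tokenized sentence s, normalized by sentence length\n    return s.count(w) / len(s)\n\ndef attraction(sx, feedback, w_sim):\n    # attraction of target sentence sx to a feedback set: its highest similarity to any\n    # sentence in that set (lines 5-6 of Algorithm 1); w_sim maps word pairs to similarities\n    def s_sim(sy):\n        return sum(p(wx, sx) * max(w_sim.get((wx, wy), 0.0) for wy in sy) for wx in set(sx))\n    return max(s_sim(sy) for sy in feedback)\n\ndef ke_test(sx, literal_fb, nonliteral_fb, w_sim):\n    # Algorithm 2 (KE-test): tag a sentence according to its stronger attraction\n    if attraction(sx, literal_fb, w_sim) > attraction(sx, nonliteral_fb, w_sim):\n        return 'literal'\n    return 'nonliteral'\n\n# toy usage with the initial identity word similarity w-sim_0\nL = [['cognac', 'poured', 'bottle']]\nN = [['music', 'pour', 'windows']]\ntarget = ['rap', 'music', 'pour', 'out']\nw_sim = {(w, w): 1.0 for s in L + N + [target] for w in s}\nprint(ke_test(target, L, N, w_sim))  # -> nonliteral",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Literal vs. Nonliteral Identification",

"sec_num": "2"

},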
|
{ |
|
"text": "This model was evaluated on 25 target verbs:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "absorb, assault, die, drag, drown, escape, examine, fill, fix, flow, grab, grasp, kick, knock, lend, miss, pass, rest, ride, roll, smooth, step, stick, strike, touch", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The verbs were carefully chosen to have varying token frequencies (we do not simply learn on frequently occurring verbs). As a result, the target sets contain from 1 to 115 manually annotated sentences for each verb to enable us to measure accuracy. The annotations were not provided to the learning algorithm: they were only used to evaluate the test data performance. The first round of annotations was done by the first annotator. The second annotator was given no instructions besides a few examples of literal and nonliteral usage (not covering all target verbs). The authors of this paper were the annotators. Our inter-annotator agreement on the annotations used as test data in the experiments in this paper is quite high. \u03ba (Cohen) and \u03ba (S&C) on a random sample of 200 annotated examples annotated by two different annotators was found to be 0.77. As per ((Di Eugenio & Glass, 2004) , cf. refs therein), the standard assessment for \u03ba values is that tentative conclusions on agreement exists when .67 \u2264 \u03ba < .8, and a definite conclusion on agreement exists when \u03ba \u2265 .8.", |
|
"cite_spans": [ |
|
{ |
|
"start": 880, |
|
"end": 892, |
|
"text": "Glass, 2004)", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
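
{

"text": "The inter-annotator agreement reported above can be checked with a standard Cohen's \u03ba computation. The sketch below is illustrative only (the annotation lists are invented, and it covers Cohen's \u03ba rather than the S&C variant):\n\nfrom collections import Counter\n\ndef cohen_kappa(a, b):\n    # Cohen's kappa for two annotators labeling the same items\n    n = len(a)\n    p_o = sum(x == y for x, y in zip(a, b)) / n                 # observed agreement\n    ca, cb = Counter(a), Counter(b)\n    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n**2  # chance agreement\n    return (p_o - p_e) / (1 - p_e)\n\n# toy check with made-up literal (L) / nonliteral (N) annotations\nann1 = ['L', 'L', 'N', 'L', 'N', 'L']\nann2 = ['L', 'N', 'N', 'L', 'N', 'L']\nprint(round(cohen_kappa(ann1, ann2), 2))  # -> 0.67",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Literal vs. Nonliteral Identification",

"sec_num": "2"

},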
|
{ |
|
"text": "In the case of a larger scale annotation effort, having the person leading the effort provide one or two examples of literal and nonliteral usages for each target verb to each annotator would almost certainly improve inter-annotator agreement.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "The algorithms were evaluated based on how accurately they clustered the hand-annotated sentences. Sentences that were attracted to neither cluster or were equally attracted to both were put in the opposite set from their label, making a failure to cluster a sentence an incorrect clustering. tag s x as nonliteral 6: end if precision \u2022 recall) / (precision + recall). Nonliteral precision and recall are defined similarly. Average precision is the average of literal and nonliteral precision; similarly for average recall. For overall performance, we take the f-score of average precision and average recall.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
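
{

"text": "The cluster-level scoring just described can be written down directly. The following sketch restates those definitions with invented gold and predicted labels (it is illustrative, not the evaluation code used for the reported results):\n\ndef cluster_scores(gold, pred, label):\n    # precision, recall and f-score for one cluster ('L' or 'N'), per the definitions above\n    cluster = [g for g, p in zip(gold, pred) if p == label]   # sentences placed in this cluster\n    total_correct = gold.count(label)                         # sentences truly of this label\n    correct_in_cluster = cluster.count(label)\n    if total_correct == 0:\n        recall = 1.0\n        precision = 1.0 if len(cluster) == 0 else 0.0\n    else:\n        recall = correct_in_cluster / total_correct\n        precision = correct_in_cluster / len(cluster) if cluster else 0.0\n    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0\n    return precision, recall, f\n\ngold = ['L', 'L', 'N', 'L', 'N']   # human labels\npred = ['L', 'N', 'N', 'L', 'N']   # cluster assignments\np_lit, r_lit, _ = cluster_scores(gold, pred, 'L')\np_non, r_non, _ = cluster_scores(gold, pred, 'N')\navg_p, avg_r = (p_lit + p_non) / 2, (r_lit + r_non) / 2\nprint(round(2 * avg_p * avg_r / (avg_p + avg_r), 3))  # overall f-score -> 0.833",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Literal vs. Nonliteral Identification",

"sec_num": "2"

},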
|
{ |
|
"text": "We calculated two baselines for each word. The first was a simple majority-rules baseline (assign each word to the sense which is dominant which is always literal in our dataset). Due to the imbalance of literal and nonliteral examples, this baseline ranges from 60.9% to 66.7% for different verbs with an average of 63.6%. Keep in mind though that using this baseline, the f-score for the nonliteral set will always be 0% -which is the problem we are trying to solve in this work. We calculated a second baseline using a simple attraction algorithm. Each sentence in the target set is attracted to the feedback set with which it has the most words in common. For the baseline and for our own model, sentences attracted to neither, or equally to both sets are put in the opposite cluster to which they belong. This second baseline obtains a f-score of 29.36% while the weakly supervised model without active learning obtains an f-score of 53.8%. Results for each verb are shown in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 981, |
|
"end": 989, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Literal vs. Nonliteral Identification", |
|
"sec_num": "2" |
|
}, |
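
{

"text": "The second baseline can be stated in a few lines. The sketch below is one reading of it (illustrative; it measures overlap with the union of words in each feedback set, and the sentence representation and names are assumptions):\n\ndef overlap_baseline(sentence, literal_fb, nonliteral_fb):\n    # attract the target sentence to the feedback set with which it shares the most words\n    words = set(sentence)\n    lit = len(words & {w for s in literal_fb for w in s})\n    non = len(words & {w for s in nonliteral_fb for w in s})\n    if lit == non:\n        return None  # attracted to neither or equally to both: scored as the wrong cluster\n    return 'literal' if lit > non else 'nonliteral'\n\nprint(overlap_baseline(['music', 'pour', 'out'], [['pour', 'cognac']], [['music', 'pour', 'windows']]))  # -> nonliteral",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Literal vs. Nonliteral Identification",

"sec_num": "2"

},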
|
{ |
|
"text": "The model described thus far is weakly supervised. The main proposal in this paper is to push the results further by adding in an active learning component, which puts the model described in Section 2 in the position of helping a human expert with the literal/nonliteral clustering task. The two main points to consider are: what to send to the human annotator, and when to send it.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Active Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "We always send sentences from the undecided cluster -i.e. those sentences where attraction to either feedback set, or the absolute difference of the two attractions, falls below a given threshold. The number of sentences falling under this threshold varies considerably from word to word, so we additionally impose a predetermined cap on the number of sentences that can ultimately be sent to the human. Based on an experiment on a held-out set separate from our target set of sentences, sending a maximum of 30% of the original set was determined to be optimal in terms of eventual accuracy obtained. We impose an order on the candidate sentences using similarity values. This allows the original sentences with the least similarity to either feedback set to be sent to the human first. Further, we alternate positive similarity (or absolute difference) values and values of zero. Note that sending examples that score zero to the human may not help attract new sentences to either of the feedback sets (since scoring zero means that the sentence was not attracted to any of the sentences). However, human help may be the only chance these sentences have to be clustered at all. After the human provides an identification for a particular example we move the sentence not only into the correct cluster, but also into the corresponding feedback set so that other sentences might be attracted to this certifiably correctly classified sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Active Learning", |
|
"sec_num": "3" |
|
}, |
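
{

"text": "One way to realize this selection policy is sketched below. The exact scoring, threshold and tie-breaking are not fully specified above, so the numbers and names here are assumptions; the sketch picks undecided sentences, orders them least-confident first while alternating zero and nonzero scores, and caps the total at 30% of the target set:\n\ndef select_for_human(attractions, cap_fraction=0.30, threshold=0.1):\n    # attractions: {sentence_id: (attraction_to_literal, attraction_to_nonliteral)}\n    # undecided = weakly attracted to both sets, or attracted almost equally to both\n    undecided = {sid: abs(a - b) for sid, (a, b) in attractions.items()\n                 if max(a, b) < threshold or abs(a - b) < threshold}\n    zeros = sorted(sid for sid, m in undecided.items() if m == 0)\n    nonzeros = [sid for sid, m in sorted(undecided.items(), key=lambda kv: kv[1]) if m > 0]\n    # least similar first, alternating nonzero margins with zero scores\n    ordered = [sid for pair in zip(nonzeros, zeros) for sid in pair]\n    leftover = nonzeros[len(zeros):] + zeros[len(nonzeros):]\n    cap = int(cap_fraction * len(attractions))\n    return (ordered + leftover)[:cap]\n\nprint(select_for_human({1: (0.0, 0.0), 2: (0.05, 0.04), 3: (0.9, 0.1), 4: (0.02, 0.0)}))  # -> [2]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Active Learning",

"sec_num": "3"

},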
|
{ |
|
"text": "The second question is when to send the sentences to the human. We can send all the examples after the first iteration, after some intermediate iteration, distributed across iterations, or at the end. Sending everything after the first iteration is best for counteracting false attractions before they become entrenched and for allowing future iterations to learn from the human decisions. Risks include sending sentences to the human before our model has had a chance to make potentially correct decision about them, counteracting any saving of effort. (Karov & Edelman, 1998) state that the results are not likely to change much after the third iteration and we have confirmed this independently: similarity values continue to change until convergence, but cluster allegiance tends not to. Sending everything to the human after the third iteration could therefore entail some of the damage control of sending everything after the first iteration while giving the model a chance to do its best. Another possibility is to send the sentences in small doses in order to gain some bootstrapping benefit at each iteration i.e. the certainty measures will improve with each bit of human input, so at each iteration more appropriate sentences will be sent to the human. Ideally, this would produce a compounding of benefits. On the other hand, it could produce a compounding of risks. A final possibility is to wait until the last iteration in the hope that our model has correctly clustered everything else and those correctly labeled examples do not need to be examined by the human. This immediately destroys any bootstrapping possibilities for the current run, although it still provides benefits for iterative augmentation runs (see Section 4).", |
|
"cite_spans": [ |
|
{ |
|
"start": 554, |
|
"end": 577, |
|
"text": "(Karov & Edelman, 1998)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Active Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "A summary of our results in shown in Figure 1 . The last column in the graph shows the average across all the target verbs. We now discuss the various active learning experiments we performed using our model and a human expert annotator.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 37, |
|
"end": 45, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Active Learning", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Experiments were performed to determine the best time to send up to 30% of the sentences to the human annotator. Sending everything after the first iteration produced an average accuracy of 66.8%; sending everything after the third iteration, 65.2%; sending a small amount at each iteration, 60.8%; sending everything after the last iteration, 64.9%. Going just by the average accuracy, the first iteration option seems optimal. However, several of the individual word results fell catastrophically below the baseline, mainly due to original sentences having been moved into a feedback set too early, causing false attraction. This risk was compounded in the distributed case, as predicted. The third iteration option gave slightly better results (0.3%) than the last iteration option, but since the difference was minor, we opted for the stability of sending everything after the last iteration. These results show an improvement of 11.1% over the model from Section 2. Individual results for each verb are given in Figure 1 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 1017, |
|
"end": 1025, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 1", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "In a second experiment, rather than letting our model select the sentences to send to the human, we selected them randomly. We found no significant difference in the results. For the random model to out-perform the non-random one it would have to select only sentences that our model would have clustered incorrectly; to do worse it would have to select only sentences that our model could have handled on its own. The likelihood of the random choices coming exclusively from these two sets is low.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 2", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "Our third experiment considers the effort-savings of using our literal/nonliteral identification model. The main question must be whether the 11.1% accuracy gain of active learning is worth the effort the human must contribute. In our experiments, the human annotator is given at most 30% of the sentences to classify manually. It is expected that the human will classify these correctly and any additional accuracy gain is contributed by the model. Without semi-supervised learning, we might expect that if the human were to manually classify 30% of the sentences chosen at random, he would have 30% of the sentences classified correctly. However, in order to be able to compare the human-only scenario to the active learning scenario, we must find what the average f-score of the manual process is. The f-score depends on the distribution of literal and nonliteral sentences in the original set. For example, in a set of 100 sentences, if there are exactly 50 of each, and of the 30 chosen for manual annotation, half come from the literal set and half come from the nonliteral set, the f-score will be exactly 30%. We could compare our performance to this, but that would be unfair to the manual process since the sets on which we did our evaluation were by no means balanced. We base a hypothetical scenario on the heavy imbalance often seen in our evaluation sets, and suggest a situation where 96 of our 100 sentences are literal and only 4 are nonliteral. If it were to happen that all 4 of the nonliteral sentences were sent to the human, we would get a very high f-score, due to a perfect recall score for the nonliteral cluster and a perfect precision score for the literal cluster. If none of the four nonliteral sentences were sent to the human, the scores for the nonliteral cluster would be disastrous. This situation is purely hypothetical, but should account for the fact that 30 out of 100 sentences annotated by a human will not necessarily result in an average f-score of 30%: in fact, averaging the results of the three sitatuations described above results Figure 1 : Active Learning evaluation results. Baseline refers to the second baseline from Section 2. Semisupervised: Trust Seed Data refers to the standard KE model that trusts the seed data. Optimal Semisupervised refers to the augmented KE model described in (Birke & Sarkar, 2006) . Active Learning refers to the model proposed in this paper.", |
|
"cite_spans": [ |
|
{ |
|
"start": 2339, |
|
"end": 2361, |
|
"text": "(Birke & Sarkar, 2006)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 2077, |
|
"end": 2085, |
|
"text": "Figure 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Experiment 3", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "in an avarage f-score of nearly 36.9%. This is 23% higher than the 30% of the balanced case, which is 1.23 times higher. For this reason, we give the human scores a boost by assuming that whatever the human annotates in the manual scenario will result in an f-score that is 1.23 times higher. For our experiment, we take the number of sentences that our active learning method sent to the human for each word -note that this is not always 30% of the total number of sentences -and multiply that by 1.23 -to give the human the benefit of the doubt, so to speak. Still we find that using active learning gives us an avarage accuracy across all words of 64.9%, while we get only 21.7% with the manual process. This means that for the same human effort, using the weakly supervised classifier produced a threefold improvement in accuracy. Looking at this conversely, this means that in order to obtain an accuracy of 64.9%, by a purely manual process, the human would have to classify nearly 53.6% of the sentences, as opposed to the 17.7% he needs to do using active learning. This is an effort-savings of about 35%. To conclude, we claim that our model combined with active learning is a helpful tool for a literal/nonliteral clustering project. It can save the human significant effort while still producing reasonable results.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiment 3", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "In this section we discuss the development of an annotated corpus of literal/nonliteral usages of verbs in context. First, we examine iterative augmentation. Then we discuss the structure and contents of the annotated corpus and the potential for expansion. After an initial run for a particular target word, we have the cluster results plus a record of the feedback sets augmented with the newly clustered sentences. ***pour*** *nonliteral cluster* wsj04:7878 N As manufacturers get bigger , they are likely to pour more money into the battle for shelf space , raising the ante for new players ./. wsj25:3283 N Salsa and rap music pour out of the windows ./. wsj06:300 U Investors hungering for safety and high yields are pouring record sums into single-premium , interest-earning annuities ./. *literal cluster* wsj59:3286 L Custom demands that cognac be poured from a freshly opened bottle ./. Each feedback set sentence is saved with a weight, with newly clustered sentences receiving a weight of 1.0. Subsequent runs may be done to augment the initial clusters. For these runs, we use the the output identification over the examples from our initial run as feedback sets. New sentences for clustering are treated like a regular target set. Running the algorithm in this way produces new clusters and a re-weighted model augmented with newly clustered sentences. There can be as many runs as desired; hence iterative augmentation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Annotated corpus built using active learning", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "We used the iterative augmentation process to build a small annotated corpus consisting of the target words from Table 1 , as well as another 25 words drawn from the examples of previously published work (see Section 5). It is important to note that in building the annotated corpus, we used the Active Learning component as described in this paper, which improved our average f-score from 53.8% to 64.9% on the original 25 target words, and we expect also improved performance on the remainder of the words in the annotated corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 113, |
|
"end": 120, |
|
"text": "Table 1", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotated corpus built using active learning", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "An excerpt from the annotated corpus is shown in Figure 2 . Each entry includes an ID number and a Nonliteral, Literal, or Unannotated tag. Annotations are from testing or from active learning during annotated corpus construction. The corpus is available at http://www.cs.sfu.ca/\u223canoop/students/jbirke/. Further unsupervised expansion of the existing clusters as well as the production of additional clusters is a possibility.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 49, |
|
"end": 57, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Annotated corpus built using active learning", |
|
"sec_num": "4" |
|
}, |
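
{

"text": "Entries in the corpus excerpt have the shape '<source>:<id> <tag> <sentence> ./.', as in Figure 2. A few lines suffice to read lines of that shape (illustrative; the exact file layout of the distributed corpus may differ from the excerpt):\n\ndef parse_corpus_line(line):\n    # '<source>:<id> <tag> <sentence>', where tag is N (nonliteral), L (literal) or U (unannotated)\n    sent_id, tag, sentence = line.strip().split(' ', 2)\n    labels = {'N': 'nonliteral', 'L': 'literal', 'U': 'unannotated'}\n    return {'id': sent_id, 'label': labels[tag], 'sentence': sentence}\n\nexample = 'wsj59:3286 L Custom demands that cognac be poured from a freshly opened bottle ./.'\nprint(parse_corpus_line(example)['label'])  # -> literal",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Annotated corpus built using active learning",

"sec_num": "4"

},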
|
{ |
|
"text": "To our knowledge there has not been any previous work done on taking a model for literal/nonliteral language and augmenting it with an active learning approach which allows human expert knowledge to become part of the learning process.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our approach to active learning is similar to the Uncertainty Sampling approach of (Lewis & Gale, 1994) and (Fujii et. al., 1998) in that we pick those examples that we could not classify due to low confidence in the labeling at a particular point. We employ a resource-limited version in which only a small fixed sample is ever annotated by a human. Some of the criticisms leveled against uncertainty sampling and in favor of Committee-based sampling (Engelson & Dagan, 1996) (and see refs therein) do not apply in our case.", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 103, |
|
"text": "Gale, 1994)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 108, |
|
"end": 129, |
|
"text": "(Fujii et. al., 1998)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 452, |
|
"end": 476, |
|
"text": "(Engelson & Dagan, 1996)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our similarity measure is based on two views of sentence-and word-level similarity and hence we get an estimate of appropriate identification rather than just correct classification. As a result, by embedding an Uncertainty Sampling active learning model within a two-view clustering algorithm, we gain the same advantages as other uncertainty sampling methods obtain when used in bootstrapping methods (e.g. (Fujii et. al., 1998) ). Other machine learning approaches that derive from optimal experiment design are not appropriate in our case because we do not yet have a strong predictive (or generative) model of the literal/nonliteral distinction.", |
|
"cite_spans": [ |
|
{ |
|
"start": 409, |
|
"end": 430, |
|
"text": "(Fujii et. al., 1998)", |
|
"ref_id": "BIBREF6" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Our machine learning model only does identification of verb usage as literal or nonliteral but it can be seen as a first step towards the use of machine learning for more sophisticated metaphor and metonymy processing tasks on larger text corpora. Rule-based systems -some using a type of interlingua (Russell, 1976) ; others using complicated networks and hierarchies often referred to as metaphor maps (e.g. (Fass, 1997; Martin, 1990; Martin, 1992) -must be largely hand-coded and generally work well on an enumerable set of metaphors or in limited domains. Dictionary-based systems use existing machine-readable dictionaries and path lengths between words as one of their primary sources for metaphor processing information (e.g. (Dolan, 1995) ). Corpus-based systems primarily extract or learn the necessary metaphor-processing information from large corpora, thus avoiding the need for manual annotation or metaphor-map construction. Examples of such systems are (Murata et. al., 2000; Nissim & Markert, 2003; Mason, 2004) . Nissim & Markert (2003) approach metonymy resolution with machine learning methods, \"which [exploit] the similarity between examples of conventional metonymy\" ( (Nissim & Markert, 2003) , p. 56). They see metonymy resolution as a classification problem between the literal use of a word and a number of pre-defined metonymy types. They use similarities between possibly metonymic words (PMWs) and known metonymies as well as context similarities to classify the PMWs. Mason (2004) presents CorMet, \"a corpus-based system for discovering metaphorical mappings between concepts\" ( (Mason, 2004) , p. 23). His system finds the selectional restrictions of given verbs in particular domains by statistical means. It then finds metaphorical mappings between domains based on these selectional preferences. By finding semantic differences between the selectional preferences, it can \"articulate the higher-order structure of conceptual metaphors\" ( (Mason, 2004) , p. 24), finding mappings like LIQUID\u2192MONEY.", |
|
"cite_spans": [ |
|
{ |
|
"start": 301, |
|
"end": 316, |
|
"text": "(Russell, 1976)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 422, |
|
"text": "(Fass, 1997;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 423, |
|
"end": 436, |
|
"text": "Martin, 1990;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 437, |
|
"end": 450, |
|
"text": "Martin, 1992)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 733, |
|
"end": 746, |
|
"text": "(Dolan, 1995)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 968, |
|
"end": 990, |
|
"text": "(Murata et. al., 2000;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 991, |
|
"end": 1014, |
|
"text": "Nissim & Markert, 2003;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1015, |
|
"end": 1027, |
|
"text": "Mason, 2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1030, |
|
"end": 1053, |
|
"text": "Nissim & Markert (2003)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1191, |
|
"end": 1215, |
|
"text": "(Nissim & Markert, 2003)", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 1498, |
|
"end": 1510, |
|
"text": "Mason (2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1609, |
|
"end": 1622, |
|
"text": "(Mason, 2004)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1972, |
|
"end": 1985, |
|
"text": "(Mason, 2004)", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "Metaphor processing has even been approached with connectionist systems storing world-knowledge as probabilistic dependencies (Narayanan, 1999) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 126, |
|
"end": 143, |
|
"text": "(Narayanan, 1999)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Previous Work", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "In this paper we presented a system for separating literal and nonliteral usages of verbs through statistical word-sense disambiguation and clustering techniques. We used active learning to combine the predictions of this system with a human expert annotator in order to boost the overall accuracy of the system by 11.1%. We used the model together with active learning and iterative augmentation, to build an annotated corpus which is publicly available, and is a resource of literal/nonliteral usage clusters that we hope will be useful not only for future research in the field of nonliteral language processing, but also as training data for other statistical NLP tasks.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "6" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Supertagging: an approach to almost parsing", |
|
"authors": [ |
|
{ |
|
"first": "Srinivas", |
|
"middle": [], |
|
"last": "Bangalore", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aravind", |
|
"middle": [ |
|
"K" |
|
], |
|
"last": "Joshi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Comput. Linguist", |
|
"volume": "25", |
|
"issue": "", |
|
"pages": "237--265", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srinivas Bangalore and Aravind K. Joshi. 1999. Supertagging: an approach to almost parsing. Comput. Linguist. 25, 2 (Jun. 1999), 237-265.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, EACL-2006", |
|
"authors": [ |
|
{ |
|
"first": "Julia", |
|
"middle": [], |
|
"last": "Birke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Anoop", |
|
"middle": [], |
|
"last": "Sarkar", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Julia Birke and Anoop Sarkar. 2006. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics, EACL-2006. Trento, Italy. April 3-7.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "The kappa statistic: a second look", |
|
"authors": [ |
|
{

"first": "Barbara",

"middle": [],

"last": "Di Eugenio",

"suffix": ""

},

{

"first": "Michael",

"middle": [],

"last": "Glass",

"suffix": ""

}
|
], |
|
"year": 2004, |
|
"venue": "Comput. Linguist", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "95--101", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbara Di Eugenio and Michael Glass. 2004. The kappa statistic: a second look. Comput. Linguist. 30, 1 (Mar. 2004), 95-101.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Metaphor as an emergent property of machine-readable dictionaries", |
|
"authors": [ |
|
{ |
|
"first": "William", |
|
"middle": [ |
|
"B" |
|
], |
|
"last": "Dolan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1995, |
|
"venue": "Proceedings of Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "27--29", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "William B. Dolan. 1995. Metaphor as an emergent property of machine-readable dictionaries. In Proceedings of Repre- sentation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity (March 1995, Stanford Univer- sity, CA). AAAI 1995 Spring Symposium Series, 27-29.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Proc. of 34th Meeting of the ACL", |
|
"authors": [ |
|
{

"first": "Sean",

"middle": [

"P"

],

"last": "Engelson",

"suffix": ""

},

{

"first": "Ido",

"middle": [],

"last": "Dagan",

"suffix": ""

}
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "319--326", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sean P. Engelson and Ido Dagan. 1996. In Proc. of 34th Meet- ing of the ACL. 319-326.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Processing metonymy and metaphor", |
|
"authors": [ |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Fass", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1997, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Dan Fass. 1997. Processing metonymy and metaphor. Green- wich, CT: Ablex Publishing Corporation.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Selective sampling for example-based word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Atsushi", |
|
"middle": [], |
|
"last": "Fujii", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takenobu", |
|
"middle": [], |
|
"last": "Tokunaga", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Kentaro", |
|
"middle": [], |
|
"last": "Inui", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hozumi", |
|
"middle": [], |
|
"last": "Tanaka", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Comput. Linguist", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "573--597", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Atsushi Fujii, Takenobu Tokunaga, Kentaro Inui and Hozumi Tanaka. 1998. Selective sampling for example-based word sense disambiguation. Comput. Linguist. 24, 4 (Dec. 1998), 573-597.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Similarity-based word sense disambiguation", |
|
"authors": [ |
|
{ |
|
"first": "Yael", |
|
"middle": [], |
|
"last": "Karov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Shimon", |
|
"middle": [], |
|
"last": "Edelman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1998, |
|
"venue": "Comput. Linguist", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "41--59", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yael Karov and Shimon Edelman. 1998. Similarity-based word sense disambiguation. Comput. Linguist. 24, 1 (Mar. 1998), 41-59.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "A sequential algorithm for training text classifiers", |
|
"authors": [ |
|
{

"first": "David",

"middle": [

"D"

],

"last": "Lewis",

"suffix": ""

},

{

"first": "William",

"middle": [

"A"

],

"last": "Gale",

"suffix": ""

}
|
], |
|
"year": 1994, |
|
"venue": "Proc. of SIGIR-94", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David D. Lewis and William A. Gale. 1994. A sequential algo- rithm for training text classifiers. In Proc. of SIGIR-94.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "A computational model of metaphor interpretation", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1990, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James H. Martin. 1990. A computational model of metaphor interpretation. Toronto, ON: Academic Press, Inc.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Computer understanding of conventional metaphoric language", |
|
"authors": [ |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Martin", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1992, |
|
"venue": "Cognitive Science", |
|
"volume": "16", |
|
"issue": "", |
|
"pages": "233--270", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "James H. Martin. 1992. Computer understanding of conven- tional metaphoric language. Cognitive Science 16, 2 (1992), 233-270.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "CorMet: a computational, corpusbased conventional metaphor extraction system", |
|
"authors": [ |
|
{ |
|
"first": "Zachary", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Mason", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Comput. Linguist", |
|
"volume": "30", |
|
"issue": "", |
|
"pages": "23--44", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zachary J. Mason. 2004. CorMet: a computational, corpus- based conventional metaphor extraction system. Comput. Linguist. 30, 1 (Mar. 2004), 23-44.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Metonymy interpretation using x no y examples", |
|
"authors": [ |
|
{ |
|
"first": "Masaki", |
|
"middle": [], |
|
"last": "Murata", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Qing", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Atsumu", |
|
"middle": [], |
|
"last": "Yamamoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hitoshi", |
|
"middle": [], |
|
"last": "Isahara", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of SNLP2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Masaki Murata, Qing Ma, Atsumu Yamamoto, and Hitoshi Isa- hara. 2000. Metonymy interpretation using x no y exam- ples. In Proceedings of SNLP2000 (Chiang Mai, Thailand, 10 May 2000).", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Moving right along: a computational model of metaphoric reasoning about events", |
|
"authors": [ |
|
{ |
|
"first": "Srini", |
|
"middle": [], |
|
"last": "Narayanan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Proceedings of the 16th National Conference on Artificial Intelligence and the 11th IAAI Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "121--127", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Srini Narayanan. 1999. Moving right along: a computational model of metaphoric reasoning about events. In Proceed- ings of the 16th National Conference on Artificial Intelli- gence and the 11th IAAI Conference (Orlando, US, 1999). 121-127.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Syntactic features and word similarity for supervised metonymy resolution", |
|
"authors": [ |
|
{ |
|
"first": "Malvina", |
|
"middle": [], |
|
"last": "Nissim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katja", |
|
"middle": [], |
|
"last": "Markert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "56--63", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Malvina Nissim and Katja Markert. 2003. Syntactic features and word similarity for supervised metonymy resolution. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-03) (Sapporo, Japan, 2003). 56-63.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "A maximum entropy part-ofspeech tagger", |
|
"authors": [ |
|
{ |
|
"first": "Adwait", |
|
"middle": [], |
|
"last": "Ratnaparkhi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "Proceedings of the Empirical Methods in Natural Language Processing Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Adwait Ratnaparkhi. 1996. A maximum entropy part-of- speech tagger. In Proceedings of the Empirical Methods in Natural Language Processing Conference (University of Pennsylvania, May 17-18 1996).", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Computer understanding of metaphorically used verbs", |
|
"authors": [ |
|
{ |
|
"first": "Sylvia", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Russell", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1976, |
|
"venue": "American Journal of Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sylvia W. Russell. 1976. Computer understanding of metaphorically used verbs. American Journal of Computa- tional Linguistics, Microfiche 44.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"text": "Excerpt from our annotated corpus of literal/nonliteral usages of verbs in context.", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"content": "<table><tr><td>3:</td><td>tag s x as literal</td></tr><tr><td colspan=\"2\">4: else</td></tr><tr><td>5:</td><td/></tr></table>", |
|
"html": null, |
|
"type_str": "table", |
|
"text": "Evaluation results were recorded as recall, precision, and f-score values. Literal recall is defined as (correct literals in literal cluster / total correct literals). Literal precision is defined as (correct literals in literal cluster / size of literal cluster). If there are no literals, literal recall is 100%; literal precision is 100% if there are no nonliterals in the literal cluster and 0% otherwise. The f-score is defined as (2 \u2022 Algorithm 2 KE-test: classifying literal/nonliteral 1: For any sentence s x \u2208 S 2: if max sy s-sim L (s x , s y ) > max sy s-sim N (s x , s y ) then", |
|
"num": null |
|
} |
|
} |
|
} |
|
} |