{ "paper_id": "W09-0208", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:39:49.533531Z" }, "title": "Paraphrase assessment in structured vector space: Exploring parameters and datasets", "authors": [ { "first": "Katrin", "middle": [], "last": "Erk", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Texas at Austin", "location": {} }, "email": "katrin.erk@mail.utexas.edu" }, { "first": "Sebastian", "middle": [], "last": "Pad\u00f3", "suffix": "", "affiliation": { "laboratory": "", "institution": "Stanford University", "location": {} }, "email": "pado@stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The appropriateness of paraphrases for words depends often on context: \"grab\" can replace \"catch\" in \"catch a ball\", but not in \"catch a cold\". Structured Vector Space (SVS) (Erk and Pad\u00f3, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates \"best-practice\" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.", "pdf_parse": { "paper_id": "W09-0208", "_pdf_hash": "", "abstract": [ { "text": "The appropriateness of paraphrases for words depends often on context: \"grab\" can replace \"catch\" in \"catch a ball\", but not in \"catch a cold\". Structured Vector Space (SVS) (Erk and Pad\u00f3, 2008) is a model that computes word meaning in context in order to assess the appropriateness of such paraphrases. This paper investigates \"best-practice\" parameter settings for SVS, and it presents a method to obtain large datasets for paraphrase assessment from corpora with WSD annotation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The meaning of individual occurrences or tokens of a word can change vastly according to its context. A central challenge for computational lexical semantics is describe these token meanings and how they can be computed for new occurrences.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "One prominent approach to this question is the dictionary-based model of token meaning: The different meanings of a word are a set of distinct, disjoint senses enumerated in a lexicon or ontology, such as WordNet. For each new occurrence, determining token meaning means choosing one of the senses, a classification task known as Word Sense Disambiguation (WSD). Unfortunately, this task has turned out to be very hard both for human annotators and for machines (Kilgarriff and Rosenzweig, 2000) , not at least due to granularity problems with available resources (Palmer et al., 2007; McCarthy, 2006) . 
Some researchers have gone so far as to suggest fundamental problems with the concept of categorical word senses (Kilgarriff, 1997; Hanks, 2000).", "cite_spans": [ { "start": 462, "end": 495, "text": "(Kilgarriff and Rosenzweig, 2000)", "ref_id": "BIBREF7" }, { "start": 564, "end": 585, "text": "(Palmer et al., 2007;", "ref_id": "BIBREF22" }, { "start": 586, "end": 601, "text": "McCarthy, 2006)", "ref_id": "BIBREF15" }, { "start": 717, "end": 735, "text": "(Kilgarriff, 1997;", "ref_id": null }, { "start": 736, "end": 748, "text": "Hanks, 2000)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "An interesting alternative is offered by vector space models of word meaning (Lund and Burgess, 1996; McDonald and Brew, 2004), which characterize the meaning of a word entirely without reference to word senses. Word meaning is described in terms of a vector in a high-dimensional vector space that is constructed with distributional methods. Semantic similarity is then simply distance to the vectors of other words. Vector space models have been most successful in modeling the meaning of word types (i.e., in constructing type vectors). The characterization of token meaning by corresponding token vectors would represent a very interesting alternative to dictionary-based methods by providing a direct, graded, unsupervised measure of (dis-)similarity between words in context that completely avoids reference to dictionary senses. However, there are still considerable theoretical and practical problems, even though there is a substantial body of work (Landauer and Dumais, 1997; Sch\u00fctze, 1998; Kintsch, 2001; Mitchell and Lapata, 2008).", "cite_spans": [ { "start": 77, "end": 101, "text": "(Lund and Burgess, 1996;", "ref_id": "BIBREF13" }, { "start": 102, "end": 127, "text": "McDonald and Brew, 2004)", "ref_id": null }, { "start": 952, "end": 979, "text": "(Landauer and Dumais, 1997;", "ref_id": "BIBREF10" }, { "start": 980, "end": 994, "text": "Sch\u00fctze, 1998;", "ref_id": "BIBREF24" }, { "start": 995, "end": 1009, "text": "Kintsch, 2001;", "ref_id": null }, { "start": 1010, "end": 1036, "text": "Mitchell and Lapata, 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In a recent paper (Erk and Pad\u00f3, 2008), we introduced the structured vector space (SVS) model, which addresses this challenge. It yields one token vector per input word. Token vectors are not computed by combining the lexical meaning of the surrounding words, which risks resulting in a \"topicality\" vector, but by modifying the type meaning of a word with the semantic expectations of syntactically related words, which can be thought of as selectional preferences. For example, in catch a ball, the token vector for ball is computed by combining the type vector of ball with a vector for the selectional preferences of catch for its object. The token vector for catch, conversely, is constructed from the type vector of catch and the inverse object preference vector of ball.
The resulting token vectors describe the meaning of a word in a particular sentence not through a sense label, but through the distance of the token vector to other vectors.", "cite_spans": [ { "start": 18, "end": 38, "text": "(Erk and Pad\u00f3, 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "A natural question that arises is how vector-based models of token meaning can be evaluated. It is of course possible to apply them to a traditional WSD task. However, this strategy remains vulnerable to all criticism concerning the annotation of categorical word senses, and it also does not take advantage of the vector models' central asset, namely gradedness. Thus, paraphrase-based assessment for models of token meaning was proposed as a representation-neutral disambiguation task that can replace WSD (McCarthy and Navigli, 2007; Mitchell and Lapata, 2008). Given a word token in context and a set of potential paraphrases, the task consists of identifying the subset of valid paraphrases. In the following pair, the first paraphrase is appropriate, but the second is not:", "cite_spans": [ { "start": 505, "end": 533, "text": "(McCarthy and Navigli, 2007;", "ref_id": "BIBREF14" }, { "start": 534, "end": 560, "text": "Mitchell and Lapata, 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(1) Google acquired YouTube \u21d2 Google bought YouTube", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "(2) How children acquire skills \u21d2 How children buy skills", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This task is graded in the sense that there is no disjoint set of labels from which exactly one is picked for each token; rather, the paraphrases form a set of labels of which a subset is appropriate for each word token, and the appropriate sets for two tokens may overlap to varying degrees. In an ideal vector-based model, valid paraphrases such as (1) should possess similar vectors, and invalid ones such as (2) dissimilar ones.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In Erk and Pad\u00f3 (2008), we evaluated SVS on two variants of the paraphrase assessment test: first, the prediction of human judgments on a seven-point scale for paraphrases of verb-subject pairs (Mitchell and Lapata, 2008); and second, the original Lexical Substitution task by McCarthy and Navigli (2007). To avoid overfitting, we optimized our parameters on the first dataset and evaluated only the best model on the second dataset. However, given evidence for substantial inter-task differences, it is unclear to what extent these parameters are optimal beyond the Mitchell and Lapata dataset. This paper addresses this question with two experiments: Impact of parameters. We re-examine three central parameters of SVS. The first one is the choice of vector combination function. Following Mitchell and Lapata (2008), we previously used componentwise multiplication, whose interpretation in vector space is not straightforward. The second one is reweighting.
We obtained the best performance when the context expectations were reweighted by taking each component to a (high) n-th power, which is counterintuitive. Finally, we found subjects to be more informative in judging the appropriateness of paraphrases than objects. This appears to contradict work in theoretical syntax (Levin and Rappaport Hovav, 2005).", "cite_spans": [ { "start": 3, "end": 22, "text": "Erk and Pad\u00f3 (2008)", "ref_id": "BIBREF2" }, { "start": 196, "end": 223, "text": "(Mitchell and Lapata, 2008)", "ref_id": "BIBREF19" }, { "start": 280, "end": 307, "text": "McCarthy and Navigli (2007)", "ref_id": "BIBREF14" }, { "start": 796, "end": 822, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF19" }, { "start": 1285, "end": 1318, "text": "(Levin and Rappaport Hovav, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To reassess the role of these parameters, we construct a controlled dataset of transitive instances from the Lexical Substitution corpus and systematically investigate these issues, with the aim of providing \"best practice\" settings for SVS. This turns out to be more difficult than expected, leading us to suspect that a globally optimal parameter setting across tasks may simply not exist. We also test a simple extension of SVS that uses a richer context (both subject and object) to construct the token vector, with encouraging initial results.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Dataset creation. The Lexical Substitution dataset used in Erk and Pad\u00f3 (2008) was very small, which limits the conclusions that can be drawn from it. This points towards a more general problem of paraphrase-based assessment for models of token meaning: Until now, all datasets for this task were specifically created by hand. It would provide a strong boost for paraphrase assessment if the large annotated corpora that are available for WSD could be reused.", "cite_spans": [ { "start": 59, "end": 78, "text": "Erk and Pad\u00f3 (2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present an experiment on converting the WordNet-annotated SemCor corpus into a set of \"pseudo-paraphrases\" for paraphrase-based assessment. We use the synonyms and direct hypernyms of an annotated synset as these \"pseudo-paraphrases\". While the synonyms and hypernyms are not guaranteed to work as direct replacements of the target word in the given context, they are semantically similar to the target word. The result is a dataset ten times larger than the LexSub dataset. As we describe in this paper, we find that this method is nevertheless problematic: The resulting dataset is considerably more difficult to model than the existing hand-built paraphrase corpora, and its properties differ considerably from the manually constructed Lexical Substitution dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The main intuition behind the SVS model is to treat the interpretation of a word in context as guided by expectations about typical events. This move to include typical arguments and predicates into a model of word meaning is motivated both on cognitive and linguistic grounds.
In cognitive science, the central role of expectations about typical events in almost all aspects of human language processing is well established (McRae et al., 1998; Narayanan and Jurafsky, 2002). In linguistics, expectations have long been used in semantic theories in the form of selectional restrictions and selectional preferences (Wilks, 1975), and have more recently been induced from corpora (Resnik, 1996). Attention has mostly been limited to selectional preferences of verbs, which have been used for a variety of tasks (Hindle and Rooth, 1993; Gildea and Jurafsky, 2002). A recent result that the SVS model builds on is that selectional preferences can be represented as prototype vectors constructed from seen arguments (Erk, 2007; Pad\u00f3 et al., 2007).", "cite_spans": [ { "start": 425, "end": 445, "text": "(McRae et al., 1998;", "ref_id": "BIBREF17" }, { "start": 446, "end": 475, "text": "Narayanan and Jurafsky, 2002)", "ref_id": "BIBREF20" }, { "start": 616, "end": 629, "text": "(Wilks, 1975)", "ref_id": "BIBREF26" }, { "start": 671, "end": 685, "text": "(Resnik, 1996)", "ref_id": "BIBREF23" }, { "start": 807, "end": 831, "text": "(Hindle and Rooth, 1993;", "ref_id": "BIBREF6" }, { "start": 832, "end": 858, "text": "Gildea and Jurafsky, 2002)", "ref_id": "BIBREF4" }, { "start": 1010, "end": 1021, "text": "(Erk, 2007;", "ref_id": "BIBREF3" }, { "start": 1022, "end": 1040, "text": "Pad\u00f3 et al., 2007)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "Representing lemma meaning. To accommodate information about semantic expectations, the SVS model extends the traditional representation of word meaning as a single vector with a set of vectors, each of which represents the word's selectional preferences for one relation that the word can assume in its linguistic context. While we ultimately think of these relations as \"properly semantic\" in the sense of semantic roles, the instantiation of SVS we consider in this paper makes use of dependency relations as a level of representation that generalizes over a substantial amount of surface variation but that can be obtained automatically with high accuracy using current NLP tools.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "The idea is illustrated in Figure 1. In the representation of the verb catch, the central square stands for the lexical vector of catch itself. The three arrows link it to catch's preferences for the dependency relations it can participate in, such as for its subjects, its objects, and for verbs for which it appears as a complement (comp^{-1}). The figure shows the head words that enter into the computation of the selectional preference vector. Likewise, ball is represented by one vector for ball itself, one for ball's preferences for its modifiers (mod), and two for the verbs of which it can occur as a subject (subj^{-1}) and an object (obj^{-1}), respectively.", "cite_spans": [], "ref_spans": [ { "start": 27, "end": 35, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "This representation includes selectional preferences (like subj, obj, mod) exactly parallel to inverse selectional preferences (subj^{-1}, obj^{-1}, comp^{-1}). The SVS model is then formalized as follows. Let D be a vector space, and let R be some set of relation labels.
We then define the meaning of a word w as a triple", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "m(w) = (v_w, R, R^{-1})", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "where v_w \u2208 D is the type vector of the word w itself, R : R \u2192 D maps each relation label onto a vector that describes w's selectional preferences, and R^{-1} : R \u2192 D maps relation labels to vectors describing the inverse selectional preferences of w. Both R and R^{-1} are partial functions. For example, the direct object preference is undefined for intransitive verbs. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "Computing meaning in context. SVS computes the meaning of a word a in the context of another word b via their selectional preferences as follows: Let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "m(a) = (v_a, R_a, R^{-1}_a) and m(b) = (v_b, R_b, R^{-1}_b)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "be the representations of the two words, and let r \u2208 R be the relation linking a to b. Then: m(a \u2192_r b) = (v_a \u2299 R^{-1}_b(r), R_a \u2212 {r}, R^{-1}_a) and m(b \u2192_{r^{-1}} a) = (v_b \u2299 R_a(r), R_b, R^{-1}_b \u2212 {r}) (3), where v_1 \u2299 v_2 is a direct vector combination function as in traditional models, e.g. addition or componentwise multiplication. If either R_a(r) or R^{-1}_b(r) is not defined, the combination fails. Afterward, the filled argument position r is deleted from R_a and R^{-1}_b. Figure 2 illustrates the procedure on the representations from Figure 1. The dotted lines indicate that the lexical vector for catch is combined with the inverse object preference of ball. Likewise, the lexical vector for ball combines with the object preference vector of catch.", "cite_spans": [], "ref_spans": [ { "start": 262, "end": 270, "text": "Figure 2", "ref_id": "FIGREF3" }, { "start": 325, "end": 333, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" },
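To make the formalization concrete, the following is a minimal sketch of the lemma representation m(w) = (v_w, R, R^{-1}) and the single combination step of Eq. (3). It assumes dense numpy count vectors; the class and function names are ours for illustration and not taken from any released SVS implementation.

```python
# Minimal sketch of m(w) = (v_w, R, R^-1) and the combination in Eq. (3).
# Assumes dense numpy vectors; all names are illustrative only.
import numpy as np

class Lemma:
    """m(w): a type vector plus (inverse) selectional preference maps."""
    def __init__(self, type_vec, prefs, inv_prefs):
        self.type_vec = type_vec          # v_w
        self.prefs = dict(prefs)          # R: relation label -> vector
        self.inv_prefs = dict(inv_prefs)  # R^-1: relation label -> vector

def combine(v1, v2):
    # Direct vector combination: componentwise multiplication here;
    # addition and minimum are the alternatives tested in Section 3.
    return v1 * v2

def in_context(a, r, b):
    """m(a -r-> b): combine a's type vector with b's inverse preference
    for r, then delete the filled argument position r from R_a."""
    if r not in a.prefs or r not in b.inv_prefs:
        raise ValueError("combination fails: preference for %r undefined" % r)
    token_vec = combine(a.type_vec, b.inv_prefs[r])
    remaining_prefs = {s: v for s, v in a.prefs.items() if s != r}
    return Lemma(token_vec, remaining_prefs, a.inv_prefs)
```

For catch a ball, in_context(catch, "obj", ball) would yield the token vector for catch; the token vector for ball is computed symmetrically from catch's object preference R_catch(obj).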
{ "text": "Recursive application. In Erk and Pad\u00f3 (2008), we considered only one combination step; however, the combination extends naturally to multiple context words. 1 Above, we defined m(a \u2192_r b) and m(b \u2192_{r^{-1}} a). If a also stands in relation s \u2260 r to a word c with m(c) = (v_c, R_c, R^{-1}_c), we define the meaning of a in the context of b and c canonically as", "cite_spans": [ { "start": 26, "end": 45, "text": "Erk and Pad\u00f3 (2008)", "ref_id": "BIBREF2" }, { "start": 102, "end": 103, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "m(m(a \u2192_r b) \u2192_s c) = ((v_a \u2299 R^{-1}_b(r)) \u2299 R^{-1}_c(s), R_a \u2212 {r, s}, R^{-1}_a) (4)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "If \u2299 is associative and commutative, then m(m(a \u2192_r b) \u2192_s c) = m(m(a \u2192_s c) \u2192_r b). This will be the case for all the combination functions we use in this paper.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "(Footnote 1: We use separate functions R and R^{-1} rather than a joint syntactic context preference function because (a) this separation models the conceptual difference between predicates and arguments, and (b) it allows for a simpler, more elegant formulation of the computation of meaning in context in Eq. 3.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" }, { "text": "Note that this is a simplistic model of the influence of multiple context words: it computes only the lexical meaning recursively, but does not model the influence of context on the selectional preferences. For example, the subject selectional preferences of catch are identical to those of catch the ball, even though one would expect that the outfielder corresponds much better to the expectations of catch the ball than of just catch.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The structured vector space model", "sec_num": "2" },
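Continuing the sketch from above, the recursive application in Eq. (4) just folds in one context word at a time; catch, ball, and outfielder stand for hypothetical Lemma objects:

```python
# Sketch of Eq. (4): the "both" condition used in the experiments folds
# subject and object into the token vector one relation at a time.
# catch, ball, outfielder are hypothetical Lemma objects as defined above.
token = in_context(in_context(catch, "obj", ball), "subj", outfielder)
# Since combine() is associative and commutative, folding in the subject
# first and the object second would give the same token vector.
```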
{ "text": "The task that we are considering is paraphrase assessment in context. Given a predicate-argument pair and a paraphrase candidate, the models have to decide how appropriate the paraphrase is for the predicate-argument combination. This is the main task against which token vector models have been evaluated in the past (Mitchell and Lapata, 2008; Erk and Pad\u00f3, 2008). In Experiment 1, we use manually created paraphrases. In Experiment 2, we replace human-generated paraphrases with \"pseudo-paraphrases\": contextually similar words that may not be completely appropriate as paraphrases in the given context, but can be collected automatically. Our parameter choices for SVS are as similar as possible to the second experiment of our earlier paper.", "cite_spans": [ { "start": 318, "end": 345, "text": "(Mitchell and Lapata, 2008;", "ref_id": "BIBREF19" }, { "start": 346, "end": 365, "text": "Erk and Pad\u00f3, 2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Vector space. We use a dependency-based vector space that counts a target word and a context word as co-occurring in a sentence if they are connected by an \"informative\" path in the dependency graph for the sentence. 2 We build the space from a version of the British National Corpus with dependency parses obtained from Minipar (Lin, 1993). It uses raw co-occurrence counts and 2000 dimensions.", "cite_spans": [ { "start": 217, "end": 218, "text": "2", "ref_id": null }, { "start": 344, "end": 355, "text": "(Lin, 1993)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Selectional preferences and reweighting. We use a prototype-based selectional preference model (Erk, 2007). It models the selectional preferences of a predicate for an argument position as the weighted centroid of the vectors for all head words seen for this position in a large corpus. Let f(a, r, b) denote the frequency of a occurring in relation r to b in the parsed BNC. Then, we compute the selectional preferences as:", "cite_spans": [ { "start": 95, "end": 106, "text": "(Erk, 2007)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "R_b(r) = (1/N) \u2211_{a: f(a,r,b) > 0} f(a, r, b) \u2022 v_a (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "where N is the number of fillers a with f(a, r, b) > 0.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "In Erk and Pad\u00f3 (2008), we found that applying a reweighting step to the selectional preference vector, by taking each component of the centroid vector R_b(r) to the n-th power, led to substantial improvements. The motivation for this technique is to alleviate noise arising from the use of unfiltered head words for the construction. The reweighted selectional preference vector R\u2032_b(r) is defined as:", "cite_spans": [ { "start": 3, "end": 22, "text": "Erk and Pad\u00f3 (2008)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "R\u2032_b(r) = \u27e8v_1^n, ..., v_m^n\u27e9 for R_b(r) = \u27e8v_1, ..., v_m\u27e9 (6)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "where we write \u27e8v_1, ..., v_m\u27e9 for the sequence of values that make up a vector R_b(r). Inverse selectional preferences R^{-1}_b(r) of nouns are defined analogously, by computing the centroid of the verbs seen as governors of the noun in relation r.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "In this paper, we test reweighting parameters n between 0.5 and 30. Generally, small values of n decrease the influence of the selectional preference vector. The result can be thought of as a \"word type vector modified by context expectations\", while large values of n increase the role of context, until we arrive at a \"contextual expectation vector modified by the word type vector\". 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" },
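As a sketch, Eqs. (5) and (6) translate directly into code; type_vectors is a hypothetical lookup from words to their type vectors, and fillers lists the (head word, frequency) pairs seen in the parsed BNC:

```python
# Sketch of the preference centroid (Eq. 5) and reweighting (Eq. 6).
# `type_vectors` and `fillers` are hypothetical inputs as described above.
import numpy as np

def preference_vector(fillers, type_vectors):
    """R_b(r): frequency-weighted centroid of the seen head word vectors."""
    seen = [(a, f) for a, f in fillers if f > 0]
    centroid = sum(f * type_vectors[a] for a, f in seen)  # sum_a f(a,r,b) * v_a
    return centroid / len(seen)                           # divide by N

def reweight(pref_vec, n):
    """Eq. (6): take each component of the centroid to the n-th power."""
    return np.power(pref_vec, n)
```

Small n flattens the preference vector toward uniform weights, while large n lets its strongest dimensions dominate the subsequent combination.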
{ "text": "Vector combination. We test three vector combination functions \u2299, which have different interpretations in vector space. The simplest one is componentwise addition, abbreviated as add. 4 With addition, context dimensions receive a high count whenever either of the two vectors has a high co-occurrence count for the context.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Next, we test componentwise multiplication (mult). This operation is more difficult to interpret in terms of vector space, since it does not correspond to the standard inner or outer vector products. The most straightforward interpretation is to reinterpret the second vector as a diagonal matrix, i.e., as a linear transformation of the first vector. Large entries in the second vector increase the weight of the corresponding contexts; small entries decrease it. Mitchell and Lapata (2008) found this method to yield the best results.", "cite_spans": [ { "start": 466, "end": 492, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "The third vector combination function we consider is componentwise minimum (min). This combination function results in a vector with high counts only for contexts which co-occur frequently with both input vectors and can thus be understood as an intersection between the two context sets. Since the entries of the two vectors need to be on the same order of magnitude for this method to yield meaningful results, we normalize vectors before the combination for min.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" },
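A sketch of the three combination functions; co-occurrence counts are assumed to be nonnegative, and only min normalizes, as just described:

```python
# The three instantiations of the direct combination function tested here.
# Assumes nonnegative co-occurrence count vectors as numpy arrays.
import numpy as np

def add(v1, v2):
    return v1 + v2          # high wherever either vector is high

def mult(v1, v2):
    return v1 * v2          # v2 acts like a diagonal linear map on v1

def vmin(v1, v2):
    # Intersection-like: high only where both vectors are high.
    # Normalize first so both are on the same order of magnitude.
    return np.minimum(v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2))
```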
{ "text": "Assessing models of token meaning. Given a transitive verb v with subject a and direct object b, we test three variants of computing a token vector for v. The first two involve only one combination step. In the subj condition, v's type vector is combined with the inverse subject preference vector of a. In the obj condition, v's type vector is combined with the inverse object preference vector of b. The third variant is the recursive application of the SVS combination procedure described in Section 2 (condition both). Specifically, we combine v's type vector both with a's inverse subject preference and with b's inverse object preference to obtain a \"richer\" token vector.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "In all three cases, the resulting token vector is compared to the type vector of the paraphrase (in Experiment 1) or of the semantically related word (in Experiment 2). We use cosine similarity, a standard choice of vector space similarity measure.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "In our 2008 paper, we tested the LexSub data only with the parameters that showed the best results on the Mitchell and Lapata data: vector combination using componentwise multiplication (mult), and the computation of (inverse) selectional preference vectors with high powers of n = 20 or n = 30. However, there were indications that the two datasets showed fundamental differences. In particular, the Mitchell and Lapata data could only be modeled using a PMI-transformed vector space, while the LexSub data could only be modeled using raw co-occurrence count vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: The impact of parameters", "sec_num": "4.1" }, { "text": "Another one of our findings that warrants further inquiry stems from our comparison of different context choices (verb plus subject, verb plus object, noun plus embedding verb). We found that subjects are better disambiguators than objects. This seems counterintuitive both on theoretical and empirical grounds. Theoretically, the notion of verb phrase has been motivated, among other things, with the claim that direct objects contribute more to a verb's disambiguation than subjects (Levin and Rappaport Hovav, 2005). Empirically, subjects are known to be realized more often as pronouns than objects, which makes their vector representations less semantically specific. However, we used two different datasets (the subject results on a set of intransitive verbs, and the object results on a set of transitive verbs), so the results are not comparable.", "cite_spans": [ { "start": 290, "end": 323, "text": "(Levin and Rappaport Hovav, 2005)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 1: The impact of parameters", "sec_num": "4.1" }, { "text": "Figure 3: Lexical substitution example items for \"work\". Item #2002: \"By asking people who work there, I have since determined that he didn't.\" Substitutes: be employed 4; labour 1. Item #2005: \"Remember how hard your ancestors worked.\" Substitutes: toil 4; labour 3; task 1.", "cite_spans": [ { "start": 73, "end": 81, "text": "(# 2002)", "ref_id": null }, { "start": 41, "end": 49, "text": "(# 2005)", "ref_id": null } ], "ref_spans": [ { "start": 75, "end": 83, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "In this experiment, we construct a new, more controlled dataset from the Lexical Substitution corpus to systematically assess the importance of the three main parameters: the relation used for disambiguation, the combination function, and the reweighting parameter.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "Construction of the LEXSUB-PARA dataset. The original Lexical Substitution corpus, constructed for the SemEval-1 lexical substitution task (McCarthy and Navigli, 2007), consists of 10 instances each of 200 target words in sentential contexts, drawn from a large internet corpus (Sharoff, 2006). Contextually appropriate paraphrases for each instance of each target word were elicited from up to 6 participants. Figure 3 shows two instances for the verb to work. The frequency distribution over paraphrases can be understood as a characterization of the target word's meaning in each context.", "cite_spans": [ { "start": 139, "end": 167, "text": "(McCarthy and Navigli, 2007)", "ref_id": "BIBREF14" }, { "start": 279, "end": 294, "text": "(Sharoff, 2006)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 413, "end": 421, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "For the current paper, we constructed a new subset of LexSub, which we call LEXSUB-PARA, by parsing LexSub with Minipar (Lin, 1993) and extracting all 177 sentences with transitive verbs that had overtly realized subjects and objects, regardless of voice. We did not manually verify the correctness of the parses, but discarded 17 sentences where we were not able to compute inverse selectional preferences for the subject or object head word (these were mostly rare proper names). This left 160 transitive instances of 42 verbs.", "cite_spans": [ { "start": 112, "end": 123, "text": "(Lin, 1993)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "Evaluation. For evaluation, we use a variant of the SemEval \"out of ten\" (OOT) evaluation metrics defined by McCarthy and Navigli (2007). They developed two metrics, OOT Precision and Recall, which compare a predicted set of appropriate paraphrases against a gold standard set. The metrics are called \"out of ten\" because they measure the accuracy of the first ten paraphrases predicted by the system. Since they allow systems to abstain from predictions for any number of tokens, the two variants average this accuracy (a) over the tokens with a prediction (OOT Precision) and (b) over all tokens (OOT Recall). Since our system produces predictions for all tokens, OOT Precision and Recall become identical.", "cite_spans": [ { "start": 109, "end": 136, "text": "McCarthy and Navigli (2007)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null },
{ "text": "Formally, let G_i be the gold paraphrases for occurrence i, and let f(s, i) be the frequency with which s has been named as paraphrase for i. Let M_i be the ten paraphrase candidates top-ranked by the SVS model for i. We write out-of-ten accuracy (OOT) as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "OOT = (1/|I|) \u2211_i [ \u2211_{s \u2208 M_i \u2229 G_i} f(s, i) / \u2211_{s \u2208 G_i} f(s, i) ] (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "OOT = (1/|I|) \u2211_i [ \u2211_{s\u2208M_i\u2229G_i} f(s, i) / \u2211_{s\u2208G_i} f(s, i) ]", "eq_num": "(7)" } ], "section": "Substitutes", "sec_num": null },
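Eq. (7) amounts to the following computation (a sketch; gold maps instance ids to paraphrase frequency dictionaries, ranked maps them to the model's similarity-sorted candidates, and both names are ours):

```python
# Sketch of out-of-ten accuracy (Eq. 7). `gold` maps each instance i to a
# dict {paraphrase s: frequency f(s, i)}; `ranked` maps i to the model's
# candidates sorted by decreasing cosine similarity. Names are ours.
def oot_accuracy(gold, ranked):
    total = 0.0
    for i, freqs in gold.items():
        top10 = set(ranked[i][:10])                       # M_i
        covered = sum(f for s, f in freqs.items() if s in top10)
        total += covered / float(sum(freqs.values()))
    return total / len(gold)                              # average over I
```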
{ "text": "We compute two baselines. The first one is a random baseline that guesses whether paraphrases are appropriate. The second baseline uses the original type vector of the target verb without any combination, i.e., its \"out-of-context meaning\", as the representation for the token.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "Results. Table 1 shows the results on the LEXSUB-PARA dataset. Recall that the task is to decide the appropriateness of paraphrases for verb instances, disambiguated by the inverse selectional preferences of their subjects (subj), their objects (obj), and both. The random baseline attains an OOT accuracy of 53.7, and the type vector of the target verb performs at 57.1. SVS is able to outperform both baselines for all values of the reweighting parameter n < 2, and we find the best results for the lowest value, n = 0.5. As for the influence of the vector combination function, the best result is yielded by min (OOT=62.3), followed by add (OOT=61.7), while mult shows generally worse results (OOT=60.3). For both add and mult, using only the subject as context is optimal. The overall best result, using min, is seen for both; however, the improvement over subj is very small.", "cite_spans": [], "ref_spans": [ { "start": 9, "end": 16, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "In the model mult-both-20, where target vectors were multiplied with two very large expectation vectors, almost all instances failed due to overflow errors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "Discussion. Our results indicate that our parameter optimization strategy in Erk and Pad\u00f3 (2008) was in fact flawed. The parameters that were best for the Mitchell and Lapata (2008) data (mult, n = 20) are suboptimal for the LEXSUB-PARA data. 5 The good results for low values of n indicate that good discrimination between valid and invalid paraphrases can be obtained by relatively small modifications of the target vector in the direction indicated by the context. Surprisingly, we still find that the results in the subj condition are almost always better than those in the obj condition, even though the dataset consists only of transitive verbs, where we would have expected the inverse result. We have two partial explanations. First, we find that pronouns, which occur frequently in subject position (I, he), are still informative enough to distinguish \"animate\" from \"inanimate\" paraphrases of verbs such as touch. Second, we see a higher number of Minipar errors for object positions than for subject positions, and consequently noisier data both for object fillers and for object selectional preferences.", "cite_spans": [ { "start": 77, "end": 96, "text": "Erk and Pad\u00f3 (2008)", "ref_id": "BIBREF2" }, { "start": 155, "end": 181, "text": "Mitchell and Lapata (2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "The overall best result was yielded by a condition that used both (subject plus object) for disambiguation, using the recursive modification from Eq. (4). While we see this as a promising result, the difference to the second-best result is very small; in almost all other conditions, the performance of both is close to the average of obj and subj and thus a suboptimal choice.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Substitutes", "sec_num": null }, { "text": "With a size of 2,000 sentences, even the complete LexSub dataset is tiny in comparison to many other resources in NLP. Limiting attention to successfully parsed transitive instances results in an even smaller dataset on which it is difficult to distinguish noise from genuine differences between models. This is a large problem for the use of paraphrase appropriateness as an evaluation task for models of word meaning in context. In consequence, the automatic creation of larger datasets is an important task. While unsupervised methods for paraphrase induction are becoming available (e.g., Callison-Burch (2008)), they are still so noisy that the created datasets cannot serve as gold standards. However, there is an alternative strategy: there is a considerable amount of data in different languages annotated with categorical word senses, created (e.g.) for Word Sense Disambiguation exercises such as Senseval. We suggest converting these data for use in a task similar to paraphrase assessment, interpreting the available information about the word sense as pseudo-paraphrases. Of course, the caveat is that these pseudo-paraphrases may behave differently than genuine paraphrases. To investigate this issue, we repeat Experiment 1 on this dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Creating larger datasets with pseudo-paraphrases", "sec_num": "4.2" }, { "text": "Construction of the SEMCOR-PARA dataset. The SemCor corpus is a subset of the Brown corpus that contains 23,346 lemmas annotated with senses according to WordNet 1.6. Fortunately, WordNet provides a rich characterization of word senses. This allows us to use the WordNet synonyms of a given word sense as pseudo-paraphrases. Since it can be the case that the target word is the only word in a synset, we also use direct hypernyms, which have been exploited in earlier annotation efforts (Mihalcea and Chklovski, 2003), an indicator that they are usually close enough in meaning to function as pseudo-paraphrases. Again, we parsed the corpus with Minipar and identified all sense-tagged instances of the verbs from LEXSUB-PARA, to keep the two corpora as comparable as possible.", "cite_spans": [ { "start": 408, "end": 438, "text": "(Mihalcea and Chklovski, 2003)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Creating larger datasets with pseudo-paraphrases", "sec_num": "4.2" }, { "text": "For each instance w_i of a word w, we collected all synonyms and direct hypernyms of the annotated synset as the set of appropriate paraphrases. The synonyms and direct hypernyms of all other senses of w, whether they occur in SemCor or not, were considered inappropriate paraphrases for the instance w_i. This method does not provide us with frequencies for the pseudo-paraphrases; we thus assumed a uniform frequency of 1. This does not do away with the gradedness of the meaning representation, though, since each token is still associated with a set of appropriate paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Creating larger datasets with pseudo-paraphrases", "sec_num": "4.2" }, { "text": "Out of 2242 transitive verb instances, we further removed 153 since we could not compute selectional preferences for at least one of the fillers. Another 484 instances were removed because WordNet did not list any verbal paraphrases for the annotated synset or its direct hypernym. This resulted in 1605 instances for 40 verbs, a dataset an order of magnitude larger than LEXSUB-PARA. (See Section 4.3 for an example verb with paraphrases.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Creating larger datasets with pseudo-paraphrases", "sec_num": "4.2" },
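For illustration, the pseudo-paraphrase sets can be read off WordNet roughly as follows. This sketch uses NLTK's WordNet interface; note that SemCor is annotated with WordNet 1.6 senses, so actual use would require mapping the annotated sense keys to the installed WordNet version:

```python
# Sketch of the pseudo-paraphrase construction from an annotated synset.
from nltk.corpus import wordnet as wn

def pseudo_paraphrases(synset, target):
    """Synonyms plus direct-hypernym lemmas of the annotated synset."""
    lemmas = {l.name() for l in synset.lemmas()}
    for hyper in synset.hypernyms():
        lemmas.update(l.name() for l in hyper.lemmas())
    lemmas.discard(target)        # the target word itself does not count
    return lemmas

def inappropriate_paraphrases(word, annotated_synset):
    """Pseudo-paraphrases of all other senses of the word."""
    other = set()
    for s in wn.synsets(word, pos=wn.VERB):
        if s != annotated_synset:
            other.update(pseudo_paraphrases(s, word))
    return other - pseudo_paraphrases(annotated_synset, word)
```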
{ "text": "Results and Discussion. We again use the OOT accuracy measure. The results for paraphrase assessment on SEMCOR-PARA are shown in Table 2. The numbers are substantially lower than for LEXSUB-PARA. This is first and foremost a consequence of the higher \"polysemy\" of the pseudo-paraphrases. In LEXSUB-PARA, the average number of possible paraphrases per target word is 20; in SEMCOR-PARA, it is 54. This is to be expected and is also reflected in the much lower random baseline (19.6% OOT). However, we also observe that the reduction in error rate over the baseline is considerably lower for SEMCOR-PARA than for LEXSUB-PARA (10% vs. 20% reduction).", "cite_spans": [], "ref_spans": [ { "start": 129, "end": 136, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Experiment 2: Creating larger datasets with pseudo-paraphrases", "sec_num": "4.2" }, { "text": "Among the parameters of the model, we find the largest impact for the reweighting parameter. The best results occur in the middle range (n = 2 and n = 5), with both lower and higher weights yielding considerably lower scores. Apparently, it is more difficult to strike the right balance between the target and the expectations on this dataset. This is also mirrored in the smaller improvement of the target type vector baseline over the random baseline. As for vector combination functions, we find the best results for the more \"intersection\"-like mult and min combinations, with somewhat lower results for add; however, the differences are rather small. Finally, combination with obj works better than combination with subj. At least among the best results, both is able to improve over the use of either individual relation. The best result uses mult-both, with an OOT accuracy of 25.6.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment 2: Creating larger datasets with pseudo-paraphrases", "sec_num": "4.2" }, { "text": "In our two experiments, we have found systematic relationships between the SVS model parameters and their performance within the LEXSUB-PARA and SEMCOR-PARA datasets.
Unfortunately, few of the parameter settings we found to work well generalize across the two datasets; neither do they correspond to the optimal parameter values we established for the Mitchell and Lapata dataset in our 2008 paper. The parameters that vary particularly strikingly are the reweighting parameter and the relative performance of the different relations. To better understand these differences, we perform a further validation analysis that attempts to link model performance to a variable that (a) behaves consistently across the two datasets used in this paper and (b) sheds light onto the patterns we have observed for the parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further analysis", "sec_num": "4.3" }, { "text": "The quantity we will use for this purpose is the average discriminativity of the model. We define discriminativity as the degree to which the token vector computed by the model is on average more similar to the valid than to the invalid paraphrases. For a paraphrase ordering task such as the one we are considering, we want this quantity to be as large as possible; very small values indicate that the model is basically \"guessing\" an order.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further analysis", "sec_num": "4.3" },
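Discriminativity itself is straightforward to compute; the sketch below takes the per-instance margin between mean similarities, which is our reading of the averaging:

```python
# Sketch of discriminativity: the margin between the token vector's mean
# cosine similarity to valid and to invalid paraphrase vectors. Values
# near zero mean the model is effectively guessing an order.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def discriminativity(token_vec, valid_vecs, invalid_vecs):
    valid = np.mean([cosine(token_vec, v) for v in valid_vecs])
    invalid = np.mean([cosine(token_vec, v) for v in invalid_vecs])
    return valid - invalid
```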
{ "text": "Figure 4 plots discriminativity against model performance. As can be expected, there is indeed a very strong correlation between discriminativity and OOT accuracy across all models. A Pearson's correlation test confirms that the correlation is highly significant for both datasets (LEXSUB-PARA: r=0.65, p < 0.0001; SEMCOR-PARA: r=0.76, p < 0.0001).", "cite_spans": [], "ref_spans": [ { "start": 447, "end": 455, "text": "Figure 4", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Further analysis", "sec_num": "4.3" }, { "text": "Next, we considered the relationship between the mean discriminativity for the different combinations and reweighting values n. Figure 5 shows the resulting plots, which reveal two main differences between the datasets. The first one is the influence of the reweighting parameter. For LEXSUB-PARA, the highest discriminativity is found for small values of n, with decreasing values for higher parameter values. In contrast, SEMCOR-PARA shows the highest discriminativity for middle values of n (on the order of 5-10), with lowest values on either side. The second difference is the relative discriminativity of obj and subj. On LEXSUB-PARA, the subj predictions are more discriminative than obj predictions for all values of n. On SEMCOR-PARA, this picture is reversed, with more discriminative obj predictions for the best (and thus relevant) values of n.", "cite_spans": [], "ref_spans": [ { "start": 124, "end": 132, "text": "Figure 5", "ref_id": "FIGREF5" } ], "eq_spans": [], "section": "Further analysis", "sec_num": "4.3" }, { "text": "We interpret these patterns, which fit the observed OOT accuracy numbers well, as additional evidence that the variations we see between the datasets are not noise or artifacts of the setup, but arise from the different makeup of the two datasets. This ties in with our intuitions about the differences between human-generated paraphrases and WordNet \"pseudo-paraphrases\". Compare the following paraphrase lists:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further analysis", "sec_num": "4.3" }, { "text": "dismiss (LexSub): banish,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further analysis", "sec_num": "4.3" }, { "text": "The SEMCOR-PARA list contains a larger number of unspecific pseudo-paraphrases such as change, push, send, which stem from direct WordNet hypernyms of the more specific dismiss senses. Presumably, these terms are assigned rather general vectors which the SVS finds difficult to rule out as paraphrases. This lowers the discriminativity of the models, in particular for subj, and results in the smaller relative improvement over the baseline that we observe for SEMCOR-PARA. This suggests that the usability of word sense-derived datasets in evaluations could be improved by taking depth in the WordNet hierarchy into account when including direct hypernyms among the pseudo-paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Further analysis", "sec_num": "4.3" }, { "text": "In this paper, we have explored the parameter space for the computation of vector-based representations of token meaning with the SVS model. Our evaluation scenario was paraphrase assessment. To systematically assess the impact of parameter choice, we created two new controlled datasets. The first one, the LEXSUB-PARA dataset, is a small subset of the Lexical Substitution corpus (McCarthy and Navigli, 2007) that was specifically created for this task. The second dataset, SEMCOR-PARA, which is considerably larger, consists of instances from the SemCor corpus whose WordNet annotation was automatically converted into \"pseudo-paraphrase\" annotation. 6", "cite_spans": [ { "start": 382, "end": 410, "text": "(McCarthy and Navigli, 2007)", "ref_id": "BIBREF14" }, { "start": 654, "end": 655, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We found a small number of regularities that hold for both datasets: namely, that the reweighting parameter is the most important choice for an SVS model, followed by the relation used as context, while the influence of the vector combination function is comparatively small. Unfortunately, the actual settings of these parameters appeared not to generalize well from one dataset to the other. We have collected evidence that these divergences are not due to noise, but to genuine differences in the datasets. We describe an auxiliary quantity, discriminativity, that measures the ability of the model's predictions to distinguish between valid and invalid paraphrases.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "The consequence we draw from this study is that it is surprisingly difficult to establish generalizable \"best practice\" parameter settings for SVS. Good parameter values appear to be sensitive to the properties of datasets. For example, we have attributed the observation that subjects are more informative on LEXSUB-PARA, while objects work better on SEMCOR-PARA, to differences in the set of paraphrase competitors. In this regard, the conversion of the WSD corpus can be considered a partial success. We have constructed the largest existing paraphrase assessment corpus. However, the use of WordNet information to create paraphrases results in a very difficult corpus. We will investigate methods that exclude overly general hypernyms of the target words as paraphrases to alleviate the problems we currently see.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Discriminativity further suggests that paraphrase assessment can be improved by selectional preference representations that are trained to maximize the distance between valid and invalid paraphrases.
Such a representation could be provided by discriminative formulations (Bergsma et al., 2008), or by exemplar-based models that are able to deal better with the ambiguity present in the preferences of very general words.", "cite_spans": [ { "start": 272, "end": 294, "text": "(Bergsma et al., 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "Another important topic for further research is the computation of token vectors that incorporate more than one context word. The current results we obtain for \"both\" are promising but limited; it appears that the successful integration of multiple context words requires strategies that go beyond simplistic addition or intersection of observed contexts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "5" }, { "text": "We used the minimal context specification and plain weight of the DependencyVectors software package.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "For the component-wise minimum combination (see below), where we normalize the vectors before the combination, the reweighting has a different effect: It shifts most of the mass onto the largest-value dimensions and sets smaller dimensions to values close to zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Since we subsequently focus on cosine similarity, which is length-invariant, vector addition can also be interpreted as centroid computation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We assume that our results hold for the Pad\u00f3 & Erk (2008) lexical substitution dataset as well, due to its similar nature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Both datasets can be obtained from the authors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Discriminative learning of selectional preference from unlabeled text", "authors": [ { "first": "S", "middle": [], "last": "Bergsma", "suffix": "" }, { "first": "D", "middle": [], "last": "Lin", "suffix": "" }, { "first": "R", "middle": [], "last": "Goebel", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "59--68", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Bergsma, D. Lin, and R. Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In Proceedings of EMNLP, pages 59-68.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Syntactic constraints on paraphrases extracted from parallel corpora", "authors": [ { "first": "C", "middle": [], "last": "Callison-Burch", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "196--205", "other_ids": {}, "num": null, "urls": [], "raw_text": "C. Callison-Burch. 2008. Syntactic constraints on paraphrases extracted from parallel corpora. In Proceedings of EMNLP, pages 196-205.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "A structured vector space model for word meaning in context", "authors": [ { "first": "K", "middle": [], "last": "Erk", "suffix": "" }, { "first": "S", "middle": [], "last": "Pad\u00f3", "suffix": "" } ], "year": 2008, "venue": "Proceedings of EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Erk and S. Pad\u00f3. 2008. A structured vector space model for word meaning in context.
In Proceedings of EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A simple, similarity-based model for selectional preferences", "authors": [ { "first": "K", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2007, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "216--223", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of ACL, pages 216-223.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Automatic labeling of semantic roles", "authors": [ { "first": "D", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Computational Linguistics", "volume": "28", "issue": "3", "pages": "245--288", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Gildea and D. Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Do word meanings exist?", "authors": [ { "first": "P", "middle": [], "last": "Hanks", "suffix": "" } ], "year": 2000, "venue": "Computers and the Humanities", "volume": "34", "issue": "1-2", "pages": "205--215", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Hanks. 2000. Do word meanings exist? Computers and the Humanities, 34(1-2):205-215.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Structural ambiguity and lexical relations", "authors": [ { "first": "D", "middle": [], "last": "Hindle", "suffix": "" }, { "first": "M", "middle": [], "last": "Rooth", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "1", "pages": "103--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Hindle and M. Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Framework and results for English Senseval", "authors": [ { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" }, { "first": "J", "middle": [], "last": "Rosenzweig", "suffix": "" } ], "year": 2000, "venue": "Computers and the Humanities", "volume": "34", "issue": "", "pages": "1--2", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Kilgarriff and J. Rosenzweig. 2000. Framework and results for English Senseval. Computers and the Humanities, 34(1-2).", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "I don't believe in word senses", "authors": [ { "first": "A", "middle": [], "last": "Kilgarriff", "suffix": "" } ], "year": 1997, "venue": "Computers and the Humanities", "volume": "31", "issue": "2", "pages": "91--113", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Kilgarriff. 1997. I don't believe in word senses. Computers and the Humanities, 31(2):91-113.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge", "authors": [ { "first": "T", "middle": [], "last": "Landauer", "suffix": "" }, { "first": "S", "middle": [], "last": "Dumais", "suffix": "" } ], "year": 1997, "venue": "Psychological Review", "volume": "104", "issue": "2", "pages": "211--240", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Landauer and S. Dumais. 1997. A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge.
Psychological Review, 104(2):211-240.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Principle-based parsing without overgeneration", "authors": [ { "first": "D", "middle": [], "last": "Lin", "suffix": "" } ], "year": 1993, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "112--120", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Lin. 1993. Principle-based parsing without overgeneration. In Proceedings of ACL, pages 112-120.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Producing high-dimensional semantic spaces from lexical co-occurrence", "authors": [ { "first": "K", "middle": [], "last": "Lund", "suffix": "" }, { "first": "C", "middle": [], "last": "Burgess", "suffix": "" } ], "year": 1996, "venue": "Behavior Research Methods, Instruments, and Computers", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Lund and C. Burgess. 1996. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior Research Methods, Instruments, and Computers, 28.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "SemEval-2007 Task 10: English Lexical Substitution Task", "authors": [ { "first": "D", "middle": [], "last": "McCarthy", "suffix": "" }, { "first": "R", "middle": [], "last": "Navigli", "suffix": "" } ], "year": 2007, "venue": "Proceedings of SemEval", "volume": "", "issue": "", "pages": "48--53", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. McCarthy and R. Navigli. 2007. SemEval-2007 Task 10: English Lexical Substitution Task. In Proceedings of SemEval, pages 48-53.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Relating WordNet senses for word sense disambiguation", "authors": [ { "first": "D", "middle": [], "last": "McCarthy", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the ACL Workshop on Making Sense of Sense", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. McCarthy. 2006. Relating WordNet senses for word sense disambiguation. In Proceedings of the ACL Workshop on Making Sense of Sense, pages 17-24.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "A distributional model of semantic context effects in lexical processing", "authors": [ { "first": "S", "middle": [], "last": "McDonald", "suffix": "" }, { "first": "C", "middle": [], "last": "Brew", "suffix": "" } ], "year": 2004, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "17--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. McDonald and C. Brew. 2004. A distributional model of semantic context effects in lexical processing. In Proceedings of ACL, pages 17-24.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension", "authors": [ { "first": "K", "middle": [], "last": "McRae", "suffix": "" }, { "first": "M", "middle": [], "last": "Spivey-Knowlton", "suffix": "" }, { "first": "M", "middle": [], "last": "Tanenhaus", "suffix": "" } ], "year": 1998, "venue": "Journal of Memory and Language", "volume": "38", "issue": "", "pages": "283--312", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. McRae, M. Spivey-Knowlton, and M. Tanenhaus. 1998. Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension. 
Journal of Memory and Language, 38:283-312.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Open Mind Word Expert: Creating large annotated data collections with web users' help", "authors": [ { "first": "R", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "T", "middle": [], "last": "Chklovski", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the EACL 2003 Workshop on Linguistically Annotated Corpora (LINC 2003)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Mihalcea and T. Chklovski. 2003. Open Mind Word Expert: Creating large annotated data collections with web users' help. In Proceedings of the EACL 2003 Workshop on Linguistically Annotated Corpora (LINC 2003), Budapest, Hungary.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Vector-based models of semantic composition", "authors": [ { "first": "J", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "M", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2008, "venue": "Proceedings of ACL", "volume": "", "issue": "", "pages": "236--244", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Mitchell and M. Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236-244.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "A Bayesian model predicts human parse preference and reading time in sentence processing", "authors": [ { "first": "S", "middle": [], "last": "Narayanan", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2002, "venue": "Proceedings of NIPS", "volume": "", "issue": "", "pages": "59--65", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Narayanan and D. Jurafsky. 2002. A Bayesian model predicts human parse preference and reading time in sentence processing. In Proceedings of NIPS, pages 59-65.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Flexible, corpus-based modelling of human plausibility judgements", "authors": [ { "first": "S", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "U", "middle": [], "last": "Pad\u00f3", "suffix": "" }, { "first": "K", "middle": [], "last": "Erk", "suffix": "" } ], "year": 2007, "venue": "Proceedings of EMNLP/CoNLL", "volume": "", "issue": "", "pages": "400--409", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Pad\u00f3, U. Pad\u00f3, and K. Erk. 2007. Flexible, corpus-based modelling of human plausibility judgements. In Proceedings of EMNLP/CoNLL, pages 400-409.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Making fine-grained and coarse-grained sense distinctions, both manually and automatically", "authors": [ { "first": "M", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "H", "middle": [], "last": "Dang", "suffix": "" }, { "first": "C", "middle": [], "last": "Fellbaum", "suffix": "" } ], "year": 2007, "venue": "Journal of Natural Language Engineering", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Palmer, H. Dang, and C. Fellbaum. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Journal of Natural Language Engineering. 
To appear.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Selectional constraints: An information-theoretic model and its computational realization", "authors": [ { "first": "P", "middle": [], "last": "Resnik", "suffix": "" } ], "year": 1996, "venue": "Cognition", "volume": "61", "issue": "", "pages": "127--159", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Resnik. 1996. Selectional constraints: An information-theoretic model and its computational realization. Cognition, 61:127-159.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Automatic word sense discrimination", "authors": [ { "first": "H", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 1998, "venue": "Computational Linguistics", "volume": "24", "issue": "1", "pages": "97--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "H. Sch\u00fctze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-124.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Open-source corpora: Using the net to fish for linguistic data", "authors": [ { "first": "S", "middle": [], "last": "Sharoff", "suffix": "" } ], "year": 2006, "venue": "International Journal of Corpus Linguistics", "volume": "11", "issue": "4", "pages": "435--462", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Sharoff. 2006. Open-source corpora: Using the net to fish for linguistic data. International Journal of Corpus Linguistics, 11(4):435-462.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Preference semantics", "authors": [ { "first": "Y", "middle": [], "last": "Wilks", "suffix": "" } ], "year": 1975, "venue": "Formal Semantics of Natural Language. CUP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Y. Wilks. 1975. Preference semantics. In Formal Semantics of Natural Language. CUP.", "links": null } }, "ref_entries": { "FIGREF0": { "type_str": "figure", "uris": null, "text": "Structured Vector Space representations for noun ball and verb catch: Each box represents one vector (lexical information or expectations) represent the meaning of a lemma w as a triple", "num": null }, "FIGREF1": { "type_str": "figure", "uris": null, "text": "be the representations of the two words, and let r \u2208 R be the relation linking a to b. Then, the meaning of a and b in this context is defined as a pair of structured vector triples: m(a r \u2192 b) is the meaning of a with b as its r-argument, and m(b r \u22121 \u2192 a) the meaning of b as the r-argument of a:", "num": null }, "FIGREF3": { "type_str": "figure", "uris": null, "text": "Combining predicate and argument via relation-specific semantic expectations syntactic context of a word in a dependency tree often consists of more than one word. It seems intuitively plausible that disambiguation should profit from more context information. Thus, we extend SVS with recursive application. Let a stand in relation r to b. As defined above, the result of combining m(a) and m(b) by relation r are two structured vector triples m(a r \u2192 b)", "num": null }, "FIGREF4": { "type_str": "figure", "uris": null, "text": "Scatterplot of \"out of ten\" accuracy against model discriminativity between valid and invalid paraphrases. Left: LEXSUB-PARA, right: SEMCOR-PARA.", "num": null }, "FIGREF5": { "type_str": "figure", "uris": null, "text": "Average amount to which predictions are more similar to valid than to invalid paraphrases, for different reweighting values. 
Left: LEXSUB-PARA, right: SEMCOR-PARA.", "num": null }, "TABREF0": { "html": null, "content": "
model  rel    0.5   1     2     5     10    20
add    obj    61.5  59.7  58.9  56.1  56.0  55.7
add    subj   61.7  61.7  59.5  58.4  57.3  57.0
add    both   61.3  60.0  60.2  57.7  57.1  56.7
mult   obj    59.8  59.7  57.8  55.7  55.7  55.4
mult   subj   60.3  59.7  59.3  57.3  57.7  56.7
mult   both   59.9  58.8  57.1  55.8  55.3
min    obj    60.2  60.0  59.5  57.3  55.7  55.8
min    subj   62.2  60.5  59.1  58.5  57.8  57.0
min    both   62.3  60.2  59.8  57.3  55.8  55.1
", "type_str": "table", "text": "", "num": null }, "TABREF1": { "html": null, "content": "
", "type_str": "table", "text": "OOT accuracy on the LEXSUB-PARA dataset across models and reweighting values (best results for each model boldfaced). Random baseline: 53.7. Target type vector baseline: 57.1.", "num": null }, "TABREF3": { "html": null, "content": "
OOT accuracy on the SEMCOR-PARA dataset across models and reweighting values (best results for each line boldfaced). Random baseline: 19.6. Target type vector baseline: 20.8.
need to add direct hypernyms. Direct hypernyms have been used in annotation tasks to characterize WordNet senses
", "type_str": "table", "text": "", "num": null } } } }