{
"paper_id": "W09-0215",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:37:27.834416Z"
},
"title": "Context-theoretic Semantics for Natural Language: an Overview",
"authors": [
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Sussex Falmer",
"location": {
"settlement": "Brighton",
"country": "UK"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the context-theoretic framework, which provides a set of rules for the nature of composition of meaning based on the philosophy of meaning as context. Principally, in the framework the composition of the meaning of words can be represented as multiplication of their representative vectors, where multiplication is distributive with respect to the vector space. We discuss the applicability of the framework to a range of techniques in natural language processing, including subsequence matching, the lexical entailment model of Dagan et al. (2005), vector-based representations of taxonomies, statistical parsing and the representation of uncertainty in logical semantics.",
"pdf_parse": {
"paper_id": "W09-0215",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the context-theoretic framework, which provides a set of rules for the nature of composition of meaning based on the philosophy of meaning as context. Principally, in the framework the composition of the meaning of words can be represented as multiplication of their representative vectors, where multiplication is distributive with respect to the vector space. We discuss the applicability of the framework to a range of techniques in natural language processing, including subsequence matching, the lexical entailment model of Dagan et al. (2005), vector-based representations of taxonomies, statistical parsing and the representation of uncertainty in logical semantics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Techniques such as latent semantic analysis (Deerwester et al., 1990 ) and its variants have been very successful in representing the meanings of words as vectors, yet there is currently no theory of natural language semantics that explains how we should compose these representations: what should the representation of a phrase be, given the representation of the words in the phrase? In this paper we present such a theory, which is based on the philosophy of meaning as context, as epitomised by the famous sayings of Wittgenstein (1953) , \"Meaning just is use\" and Firth (1957) , \"You shall know a word by the company it keeps\". For the sake of brevity we shall present only a summary of our research, which is described in full in (Clarke, 2007) , and we give a simplified version of the framework, which nevertheless suffices for the examples which follow.",
"cite_spans": [
{
"start": 44,
"end": 68,
"text": "(Deerwester et al., 1990",
"ref_id": "BIBREF6"
},
{
"start": 521,
"end": 540,
"text": "Wittgenstein (1953)",
"ref_id": "BIBREF20"
},
{
"start": 575,
"end": 581,
"text": "(1957)",
"ref_id": null
},
{
"start": 736,
"end": 750,
"text": "(Clarke, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We believe that the development of theories that can take vector representations of meaning beyond the word level, to the phrasal and sentence levels and beyond are essential for vector based semantics to truly compete with logical semantics, both in their academic standing and in application to real problems in natural language processing. Moreover the time is ripe for such a theory: never has there been such an abundance of immediately available textual data (in the form of the worldwide web) or cheap computing power to enable vector-based representations of meaning to be obtained. The need to organise and understand the new abundance of data makes these techniques all the more attractive since meanings are determined automatically and are thus more robust in comparison to hand-built representations of meaning. A guiding theory of vector based semantics would undoubtedly be invaluable in the application of these representations to problems in natural language processing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The context-theoretic framework does not provide a formula for how to compose meaning; rather it provides mathematical guidelines for theories of meaning. It describes the nature of the vector space in which meanings live, gives some restrictions on how meanings compose, and provides us with a measure of the degree of entailment between strings for any implementation of the framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The remainder of the paper is structured as follows: in Section 2 we present the framework; in Section 3 we present applications of the framework:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We describe subsequence matching (Section 3.1) and the lexical entailment model of (Dagan et al., 2005) (Section 3.2), both of which have been applied to the task of recognising textual entailment.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show how a vector based representation of a taxonomy incorporating probabilistic information about word meanings can be con-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "d 1 d 2 d 3 d 4 d 5 d 6 d 1 d 2 d 3 d 4 d 5 d 6 d 1 d 2 d 3 d 4 d 5 d 6 orange fruit orange \u2227 fruit Figure 1: Vector representations of two terms in a space L 1 (S) where S = {d 1 , d 2 , d 3 , d 4 , d 5 , d 6 }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We show how syntax can be represented within the framework in Section 3.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 We summarise our approach to representing uncertainty in logical semantics in Section 3.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The context-theoretic framework is based on the idea that the vector representation of the meaning of a word is derived from the contexts in which it occurs. However it extends this idea to strings of any length: we assume there is some set S containing all the possible contexts associated with any string. A context theory is an implementation of the context-theoretic framework; a key requirement for a context theory is a mapping from strings to vectors formed from the set of contexts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "In vector based techniques, the set of contexts may be the set of possible dependency relations between words, or the set of documents in which strings may occur; in context-theoretic semantics however, the set of \"contexts\" can be any set. We continue to refer to it as a set of contexts since the intuition and philosophy which forms the basis for the framework derives from this idea; in practice the set may even consist of logical sentences describing the meanings of strings in model-theoretic terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "An important aspect of vector-based techniques is measuring the frequency of occurrence of strings in each context. We model this in a general way as follows: let A be a set consisting of the words of the language under consideration. The first requirement of a context theory is a mapping x \u2192x from a string x \u2208 A * to a vectorx \u2208 L 1 (S) + , where L 1 (S) means the set of all functions from S to the real numbers R which are finite under the L 1 norm,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "u 1 = s\u2208S |u(s)|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "and L 1 (S) + restricts this to functions to the nonnegative real numbers, R + ; these functions are called the positive elements of the vector space L 1 (S). The requirement that the L 1 norm is finite, and that the map is only to positive elements reflects the fact that the vectors are intended to represent an estimate of relative frequency distributions of the strings over the contexts, since a frequency distribution will always satisfy these requirements. Note also that the l 1 norm of the context vector of a string is simply the sum of all its components and is thus proportional to its probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "The set of functions L 1 (S) is a vector space under the point-wise operations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "(\u03b1u)(s) = \u03b1u(s) (u + v)(s) = u(s) + v(s) for u, v \u2208 L 1 (S) and \u03b1 \u2208 R, but it is also a lattice under the operations (u \u2227 v)(s) = min(u(s), v(s)) (u \u2228 v)(s) = max(u(s), v(s)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "In fact it is a vector lattice or Riesz space (Aliprantis and Burkinshaw, 1985) since it satisfies the following relationships",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "if u \u2264 v then \u03b1u \u2264 \u03b1v if u \u2264 v then u + w \u2264 v + w,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "where \u03b1 \u2208 R + and \u2264 is the partial ordering associated with the lattice operations, defined by u \u2264",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
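{
"text": "[Illustrative sketch, not part of the original paper.] The operations above are straightforward to realise concretely. The following minimal Python sketch, under the assumption that an element of $L^1(S)$ with finite support is modelled as a dict mapping contexts to reals (absent contexts are zero), implements the pointwise vector operations, the lattice operations, the $L^1$ norm and the induced partial order; all function names are our own.\n\ndef scale(alpha, u):\n    # pointwise scalar multiplication: (alpha u)(s) = alpha * u(s)\n    return {s: alpha * x for s, x in u.items()}\n\ndef add(u, v):\n    # pointwise addition: (u + v)(s) = u(s) + v(s)\n    return {s: u.get(s, 0.0) + v.get(s, 0.0) for s in set(u) | set(v)}\n\ndef meet(u, v):\n    # lattice meet: (u ^ v)(s) = min(u(s), v(s))\n    return {s: min(u.get(s, 0.0), v.get(s, 0.0)) for s in set(u) | set(v)}\n\ndef join(u, v):\n    # lattice join: (u v v)(s) = max(u(s), v(s))\n    return {s: max(u.get(s, 0.0), v.get(s, 0.0)) for s in set(u) | set(v)}\n\ndef norm1(u):\n    # L1 norm: sum of the absolute values of all components\n    return sum(abs(x) for x in u.values())\n\ndef leq(u, v):\n    # partial order: u <= v iff u ^ v = u, i.e. u(s) <= v(s) for all s\n    return all(u.get(s, 0.0) <= v.get(s, 0.0) for s in set(u) | set(v))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},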
{
"text": "Together with the l 1 norm, the vector lattice defines an Abstract Lebesgue space (Abramovich and Aliprantis, 2002) a vector space incorporating all the properties of a measure space, and thus can also be thought of as defining a probability space, where \u2228 and \u2227 correspond to the union and intersection of events in the \u03c3 algebra, and the norm corresponds to the (un-normalised) probability.",
"cite_spans": [
{
"start": 82,
"end": 115,
"text": "(Abramovich and Aliprantis, 2002)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Context-theoretic Framework",
"sec_num": "2"
},
{
"text": "The vector lattice nature of the space under consideration is important in the context-theoretic framework since it is used to define a degree of entailment between strings. Our notion of entailment is based on the concept of distributional generality (Weeds et al., 2004) , a generalisation of the distributional hypothesis of Harris (1985) , in which it is assumed that terms with a more general meaning will occur in a wider array of contexts, an idea later developed by Geffet and Dagan (2005) . Weeds et al. (2004) also found that frequency played a large role in determining the direction of entailment, with the more general term often occurring more frequently. The partial ordering of the vector lattice encapsulates these properties sincex \u2264\u0177 if and only if y occurs more frequently in all the contexts in which x occurs.",
"cite_spans": [
{
"start": 252,
"end": 272,
"text": "(Weeds et al., 2004)",
"ref_id": "BIBREF18"
},
{
"start": 328,
"end": 341,
"text": "Harris (1985)",
"ref_id": "BIBREF12"
},
{
"start": 474,
"end": 497,
"text": "Geffet and Dagan (2005)",
"ref_id": "BIBREF10"
},
{
"start": 500,
"end": 519,
"text": "Weeds et al. (2004)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Generality",
"sec_num": "2.1"
},
{
"text": "This partial ordering is a strict relationship, however, that is unlikely to exist between any two given vectors. Because of this, we define a degree of entailment",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Generality",
"sec_num": "2.1"
},
{
"text": "Ent(u, v) = u \u2227 v 1 u 1 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Generality",
"sec_num": "2.1"
},
{
"text": "This value has the properties of a conditional probability; in the case of u =x and v =\u0177 it is a measure of the degree to which the contexts string x occurs in are shared by the contexts string y occurs in.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Generality",
"sec_num": "2.1"
},
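{
"text": "[Illustrative sketch, not part of the original paper.] The degree of entailment is easy to compute for context vectors with finite support. A minimal Python sketch, assuming vectors are dicts from contexts to non-negative reals; all names are our own:\n\ndef ent(u, v):\n    # Ent(u, v) = ||u ^ v||_1 / ||u||_1, a conditional-probability-like value\n    keys = set(u) | set(v)\n    meet_norm = sum(min(u.get(s, 0.0), v.get(s, 0.0)) for s in keys)\n    return meet_norm / sum(u.values())\n\n# hypothetical context vectors over contexts d1, d2, d3\nx_hat = {'d1': 2.0, 'd2': 1.0}\ny_hat = {'d1': 3.0, 'd2': 1.0, 'd3': 4.0}\nprint(ent(x_hat, y_hat))  # 1.0: x_hat <= y_hat, so x fully entails y\nprint(ent(y_hat, x_hat))  # 0.375: the reverse holds only to degree 3/8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Distributional Generality",
"sec_num": "2.1"
},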
{
"text": "The map from strings to vectors already tells us everything we need to know about the composition of words: given two words x and y, we have their individual context vectorsx and\u0177, and the meaning of the string xy is represented by the vector xy. The question we address is what relationship should be imposed between the representation of the meanings of individual wordsx and\u0177 and the meaning of their composition xy. As it stands, we have little guidance on what maps from strings to context vectors are appropriate. The first restriction we propose is that vector representations of meanings should be composable in their own right, without consideration of what words they originated from. In fact we place a strong requirement on the nature of multiplication on elements: we require that the multiplication \u2022 on the vector space defines a lattice-ordered algebra. This means that multiplication is associative, distributive with respect to addition, and satisfies u \u2022 v \u2265 0 if u \u2265 0 and v \u2265 0, i.e. the product of positive elements is also positive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiplication",
"sec_num": "2.2"
},
{
"text": "We argue that composition of context vectors needs to be compatible with concatenation of words, i.e.x",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiplication",
"sec_num": "2.2"
},
{
"text": "i.e. the map from strings to context vectors defines a semigroup homomorphism. Then the requirement that multiplication is associative can be seen to be a natural one since the homomorphism enforces this requirement for context vectors. Similarly since all context vectors are positive their product in the algebra must also be positive, thus it is natural to extend this to all elements of the algebra. The requirement for distributivity is justified by our own model of meaning as context in text corpora, described in full elsewhere.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multiplication",
"sec_num": "2.2"
},
{
"text": "The above requirements give us all we need to define a context theory. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Theory",
"sec_num": "2.3"
},
{
"text": "In this section we describe applications of the context-theoretic framework to applications in computational linguistics and natural language processing. We shall commonly use a construction in which there is a binary operation \u2022 on S that makes it a semigroup. In this case L 1 (S) is a lattice-ordered algebra with convolution as multiplication:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Theories for Natural Language",
"sec_num": "3"
},
{
"text": "(u \u2022 v)(r) = s\u2022t=r u(s)v(t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Theories for Natural Language",
"sec_num": "3"
},
{
"text": "for r, s, t \u2208 S and u, v \u2208 L 1 (S). We denote the unit basis element associated with an element x \u2208 S by e x , that is e x (y) = 1 if and only if y = x, otherwise e x (y) = 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Theories for Natural Language",
"sec_num": "3"
},
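{
"text": "[Illustrative sketch, not part of the original paper.] Convolution over a semigroup can be implemented directly for vectors with finite support. A minimal Python sketch, assuming vectors are dicts and the semigroup operation is passed in as a function; all names are our own:\n\nfrom itertools import product\n\ndef convolve(u, v, op):\n    # (u . v)(r) = sum over all s, t with s . t = r of u(s) * v(t)\n    w = {}\n    for (s, x), (t, y) in product(u.items(), v.items()):\n        r = op(s, t)\n        w[r] = w.get(r, 0.0) + x * y\n    return w\n\n# with string concatenation as the semigroup operation on A*, unit basis\n# elements multiply like the strings they stand for: e_x . e_y = e_xy\ndef e(x):\n    return {x: 1.0}\n\nprint(convolve(e('ab'), e('c'), lambda s, t: s + t))  # {'abc': 1.0}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Context Theories for Natural Language",
"sec_num": "3"
},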
{
"text": "A string x \u2208 A * is called a \"subsequence\" of y \u2208 A * if each element of x occurs in y in the same order, but with the possibility of other elements occurring in between, so for example abba is a subsequence of acabcba in {a, b, c} * . We denote the set of subsequences of x (including the empty string) by Sub(x). Subsequence matching compares the subsequences of two strings: the more subsequences they have in common the more similar they are assumed to be. This idea has been used successfully in text classification (Lodhi et al., 2002) and recognising textual entailment (Clarke, 2006) .",
"cite_spans": [
{
"start": 521,
"end": 541,
"text": "(Lodhi et al., 2002)",
"ref_id": "BIBREF13"
},
{
"start": 577,
"end": 591,
"text": "(Clarke, 2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Subsequence Matching",
"sec_num": "3.1"
},
{
"text": "We can describe such models using a context theory A,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subsequence Matching",
"sec_num": "3.1"
},
{
"text": "i.e. the context vector of a string is a weighted sum of its subsequences. Under this context theoryx \u2264 y, i.e. x completely entails y if x is a subsequence of y.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subsequence Matching",
"sec_num": "3.1"
},
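{
"text": "[Illustrative sketch, not part of the original paper.] The subsequence context vector can be computed by enumerating index subsets. A minimal Python sketch following the formula above (weight $1/2^{|x|}$ per subsequence occurrence, duplicate subsequences accumulating weight); the exact weighting and any normalisation in the full model may differ, and all names are our own:\n\nfrom itertools import combinations\n\ndef sub_vector(x):\n    # context vector of x: weight 1/2^|x| for each subsequence occurrence,\n    # including the empty string\n    w = 1.0 / 2 ** len(x)\n    v = {}\n    for k in range(len(x) + 1):\n        for idx in combinations(range(len(x)), k):\n            y = ''.join(x[i] for i in idx)\n            v[y] = v.get(y, 0.0) + w\n    return v\n\ndef ent(u, v):\n    # degree of entailment from Section 2.1\n    keys = set(u) | set(v)\n    return sum(min(u.get(s, 0.0), v.get(s, 0.0)) for s in keys) / sum(u.values())\n\n# 'ab' is a subsequence of 'acb'; its subsequences are all shared\nprint(ent(sub_vector('ab'), sub_vector('acb')))  # 0.5 under this weighting",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subsequence Matching",
"sec_num": "3.1"
},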
{
"text": "Many variations on this context theory are possible, for example using more complex mappings to L 1 (A * ). The context theory can also be adapted to incorporate a measure of lexical overlap between strings, an approach that, although simple, performs comparably to more complex techniques in tasks such as recognising textual entailment ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Subsequence Matching",
"sec_num": "3.1"
},
{
"text": "Glickman and Dagan 2005define their own model of entailment and apply it to the task of recognising textual entailment. They estimate entailment between words based on occurrences in documents: they estimate a lexical entailment probability LEP(x, y) between two terms x and y to be",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
{
"text": "LEP(x, y) \u2243 n x,y n y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
{
"text": "where n y and n x,y denote the number of documents that the word y occurs in and the words x and y both occur in respectively. We can describe this using a context theory A, D,\u02c6, \u2022 , where D is the set of documents, and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
{
"text": "x(d) = 1 if x occurs in document d 0 otherwise. .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
{
"text": "In this case the estimate of LEP(x, y) coincides with our own degree of entailment Ent(x, y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
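{
"text": "[Illustrative sketch, not part of the original paper.] The coincidence between the document-based estimate and the degree of entailment can be checked numerically. A minimal Python sketch on a toy corpus; the argument order passed to ent is chosen so that the denominator is $n_y$, matching the formula above, and all names and the corpus are our own assumptions:\n\ndef doc_vector(word, docs):\n    # binary context vector over document indices\n    return {i: 1.0 for i, d in enumerate(docs) if word in d}\n\ndef ent(u, v):\n    # Ent(u, v) = ||u ^ v||_1 / ||u||_1\n    keys = set(u) | set(v)\n    return sum(min(u.get(s, 0.0), v.get(s, 0.0)) for s in keys) / sum(u.values())\n\ndocs = [set(t.split()) for t in ['the cat sat', 'the cat ate', 'a dog ate']]\nx_hat = doc_vector('sat', docs)\ny_hat = doc_vector('cat', docs)\nn_xy = sum(1 for d in docs if 'sat' in d and 'cat' in d)\nn_y = sum(1 for d in docs if 'cat' in d)\nprint(n_xy / n_y, ent(y_hat, x_hat))  # both 0.5: the estimates coincide",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},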
{
"text": "There are many ways in which the multiplication \u2022 can be defined on L 1 (D). and N = 10 7 using a cutoff for the degree of entailment of 0.5 at which entailment was regarded as holding. CWS is the confidence weighted score -see for the definition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
{
"text": "Glickman and Dagan (2005) do not use this measure, possibly because the problem of data sparseness makes it useless for long strings. However the measure they use can be viewed as an approximation to this context theory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
{
"text": "We have also used this idea to determine entailment, using latent Dirichlet allocation to get around the problem of data sparseness. A model was built using a subset of around 380,000 documents from the Gigaword corpus, and the model was evaluated on the dataset from the first Recognising Textual Entailment Challenge; the results are shown in Table 1 . In order to use the model, a document length had to be chosen; it was found that very long documents yielded better performance at this task.",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 352,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Lexical Entailment Model",
"sec_num": "3.2"
},
{
"text": "In this section we describe how the relationships described by a taxonomy, the collection of isa relationships described by ontologies such as WordNet (Fellbaum, 1989) , can be embedded in the vector lattice structure that is crucial to the context-theoretic framework. This opens up the way to the possibility of new techniques that combine the vector-based representations of word meanings with the ontological ones, for example:",
"cite_spans": [
{
"start": 151,
"end": 167,
"text": "(Fellbaum, 1989)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Taxonomies",
"sec_num": "3.3"
},
{
"text": "\u2022 Semantic smoothing could be applied to vector based representations of an ontology, for example using distributional similarity measures to move words that are distributionally similar closer to each other in the vector space. This type of technique may allow the benefits of vector based techniques and ontologies to be combined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Taxonomies",
"sec_num": "3.3"
},
{
"text": "\u2022 Automatic classification: representing the taxonomy in a vector space may make it easier to look for relationships between the meanings in the taxonomy and meanings derived from vector based techniques such as latent semantic analysis, potentially aiding in classifying word meanings in a taxonomy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Taxonomies",
"sec_num": "3.3"
},
{
"text": "\u2022 The new vector representation could lead to new measures of semantic distance, for example, the L p norms can all be used to measure distance between the vector representations of meanings in a taxonomy. Moreover, the vector-based representation allows ambiguity to be represented by adding the weighted representations of individual senses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Taxonomies",
"sec_num": "3.3"
},
{
"text": "We assume that the is-a relation is a partial ordering; this is true for many ontologies. We wish to incorporate the partial ordering of the taxonomy into the partial ordering of the vector lattice. We will make use of the following result relating to partial orders: Definition 2 (Ideals). A lower set in a partially ordered set S is a set T such that for all x, y \u2208 S, if x \u2208 T and y \u2264 x then y \u2208 T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Taxonomies",
"sec_num": "3.3"
},
{
"text": "The principal ideal generated by an element x in a partially ordered set S is defined to be the lower set \uf8e6 (x) = {y \u2208 S : y \u2264 x}. We are also concerned with the probability of concepts. This is an idea that has come about through the introduction of \"distance measures\" on taxonomies (Resnik, 1995) . Since terms can be ascribed probabilities based on their frequencies of occurrence in corpora, the concepts they refer to can similarly be assigned probabilities. The probability of a concept is the probability of encountering an instance of that concept in the corpus, that is, the probability that a term selected at random from the corpus has a meaning that is subsumed by that particular concept. This ensures that more general concepts are given higher probabilities, for example if there is a most general concept (a top-most node in the taxonomy, which may correspond for example to \"entity\") its probability will be one, since every term can be considered an instance of that concept.",
"cite_spans": [
{
"start": 285,
"end": 299,
"text": "(Resnik, 1995)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Taxonomies",
"sec_num": "3.3"
},
{
"text": "We give a general definition based on this idea which does not require probabilities to be assigned based on corpus counts: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proposition 3 (Ideal Completion",
"sec_num": null
},
{
"text": "x\u2208S p(s) = 1. In this casep refers to the probability of a concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
{
"text": "Thus in a probabilistic taxonomy, the function p corresponds to the probability that a term is observed whose meaning corresponds (in that context) to that concept. The functionp denotes the probability that a term is observed whose meaning in that context is subsumed by the concept.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
{
"text": "Note that if S has a top element I then in the probabilistic case, clearlyp(I) = 1. In studies of distance measures on ontologies, the concepts in S often correspond to senses of terms, in this case the function p represents the (normalised) probability that a given term will occur with the sense indicated by the concept. The top-most concept often exists, and may be something with the meaning \"entity\"-intended to include the meaning of all concepts below it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
{
"text": "The most simple completion we consider is into the vector lattice L 1 (S), with basis elements {e x : x \u2208 S}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
{
"text": "Proposition 5 (Ideal Vector Completion). Let S be a probabilistic taxonomy with probability distribution function p that is non-zero everywhere on S. The function \u03c8 from S to L 1 (S) defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
{
"text": "\u03c8(x) = y\u2208\u2193(x) p(y)e y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
{
"text": "is a completion of the partial ordering of S under the vector lattice order of L 1 (S), satisfying \u03c8(x) 1 =p(x). Proof. The function \u03c8 is clearly order-preserving: if x \u2264 y in S then since \uf8e6 (x) \u2286 \uf8e6 (y) , necessarily \u03c8(x) \u2264 \u03c8(y). Conversely, the only way that \u03c8(x) \u2264 \u03c8(y) can be true is if \uf8e6 (x) \u2286 \uf8e6 (y) since p is non-zero everywhere. If this is the case, then x \u2264 y by the nature of the ideal completion. Thus \u03c8 is an order-embedding, and since L 1 (S) is a complete lattice, it is also a completion. Finally, note that \u03c8(",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
{
"text": "This completion allows us to represent concepts as elements within a vector lattice so that not only the partial ordering of the taxonomy is preserved, but the probability of concepts is also preserved as the size of the vector under the L 1 norm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},
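{
"text": "[Illustrative sketch, not part of the original paper.] The ideal vector completion is simple to construct for a small taxonomy. A minimal Python sketch on a hypothetical probabilistic taxonomy given as child-to-parent is-a links; all names and numbers are our own:\n\n# hypothetical taxonomy (child -> parent) and concept probabilities p\nparent = {'cat': 'mammal', 'dog': 'mammal', 'mammal': 'entity'}\np = {'cat': 0.3, 'dog': 0.2, 'mammal': 0.1, 'entity': 0.4}  # sums to 1\n\ndef ancestors(x):\n    # x together with everything above it in the taxonomy\n    out = {x}\n    while x in parent:\n        x = parent[x]\n        out.add(x)\n    return out\n\ndef psi(x):\n    # psi(x) = sum of p(y) e_y over all y <= x (the principal ideal of x)\n    return {y: p[y] for y in p if x in ancestors(y)}\n\ndef p_tilde(x):\n    # ||psi(x)||_1 recovers the cumulative probability of the concept\n    return sum(psi(x).values())\n\nprint(psi('mammal'))      # {'cat': 0.3, 'dog': 0.2, 'mammal': 0.1}\nprint(p_tilde('mammal'))  # 0.6\n# order-embedding: cat <= mammal in the taxonomy, and psi(cat) <= psi(mammal)\nprint(all(psi('cat').get(y, 0.0) <= psi('mammal').get(y, 0.0) for y in p))  # True",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The taxonomy is called probabilistic if",
"sec_num": null
},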
{
"text": "In this section we give a description link grammar (Sleator and Temperley, 1991) in terms of a context theory. Link grammar is a lexicalised syntactic formalism which describes properties of words in terms of links formed between them, and which is context-free in terms of its generative power; for the sake of brevity we omit the details, although a sample link grammar parse is show in Figure 3 .",
"cite_spans": [
{
"start": 51,
"end": 80,
"text": "(Sleator and Temperley, 1991)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 389,
"end": 397,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "Our formulation of link grammar as a context theory makes use of a construction called a free inverse semigroup. Informally, the free inverse semigroup on a set S is formed from elements of S and their inverses, S \u22121 = {s \u22121 : s \u2208 S}, satisfying no other condition than those of an inverse semigroup. Formally, the free inverse semigroup is defined in terms of a congruence relation on (S \u222a S \u22121 ) * specifying the inverse property and commutativity of idempotents -see (Munn, 1974) for details. We denote the free inverse semigroup on S by FIS(S).",
"cite_spans": [
{
"start": 470,
"end": 476,
"text": "(Munn,",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "Free inverse semigroups were shown by Munn (1974) to be equivalent to birooted word trees. A birooted word-tree on a set A is a directed acyclic graph whose edges are labelled by elements of A which does not contain any subgraphs of the form An element in the free semigroup FIS(S) is denoted as a sequence",
"cite_spans": [
{
"start": 38,
"end": 49,
"text": "Munn (1974)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "x d 1 1 x d 2 2 . . . x dn n where x i \u2208 S and d i \u2208 {1, \u22121}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "We construct the birooted word tree by starting with a single node as the start node, and for each i from 1 to n:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "\u2022 Determine if there is an edge labelled x i leaving the current node if d i = 1, or arriving at the current node if d i = \u22121.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "\u2022 If so, follow this edge and make the resulting node the current node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "\u2022 If not, create a new node and join it with an edge labelled x i in the appropriate direction, and make this node the current node.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "The finish node is the current node after the n iterations. The product of two elements x and y in the free inverse semigroup can be computed by finding the birooted word-tree of x and that of y, joining the graphs by equating the start node of y with the finish node of x (and making it a normal node), and merging any other nodes and edges necessary to remove any subgraphs of the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "\u2022 a \u2212\u2192 \u2022 a \u2190\u2212 \u2022 or \u2022 a \u2190\u2212 \u2022 a \u2212\u2192 \u2022.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "The inverse of an element has the same graph with start and finish nodes exchanged.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
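{
"text": "[Illustrative sketch, not part of the original paper.] The construction described above can be traced mechanically. A minimal Python sketch building the birooted word-tree of a sequence $x_1^{d_1} \\cdots x_n^{d_n}$; it covers construction only, not the merging needed for general products, and all names are our own:\n\ndef word_tree(seq):\n    # seq is a list of (label, direction) pairs with direction +1 or -1;\n    # nodes are integers, edges maps (source, label) -> target and\n    # redges maps (target, label) -> source\n    edges, redges = {}, {}\n    current, fresh = 0, 1\n    for label, d in seq:\n        if d == 1 and (current, label) in edges:\n            current = edges[(current, label)]   # follow an existing edge\n        elif d == -1 and (current, label) in redges:\n            current = redges[(current, label)]  # follow an edge backwards\n        else:\n            if d == 1:                          # create a new edge\n                edges[(current, label)] = fresh\n                redges[(fresh, label)] = current\n            else:\n                edges[(fresh, label)] = current\n                redges[(current, label)] = fresh\n            current = fresh\n            fresh += 1\n    return edges, 0, current  # edge set, start node, finish node\n\n# x x^-1 x collapses onto a single edge, as required\nprint(word_tree([('x', 1), ('x', -1), ('x', 1)]))  # ({(0, 'x'): 1}, 0, 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},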
{
"text": "We can represent parses of sentences in link grammar by translating words to syntactic categories in the free inverse semigroup. The parse shown earlier for \"they mashed their way through the thick mud\" can be represented in the inverse semigroup on S = {s, m, o, d, j, a} as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "ss \u22121 modd \u22121 o \u22121 m \u22121 jdaa \u22121 d \u22121 j \u22121",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
{
"text": "which has the following birooted word-tree (the words which the links derive from are shown in brackets): Let A be the set of words in the natural language under consideration, S be the set of link types. Then we can form a context theory A, FIS(S),\u02c6, \u2022 where \u2022 is multiplication defined by convolution on FIS(S), and a word a \u2208 A is mapped to a probabilistic sum\u00e2 of its link possible grammar representations (called disjuncts). Thus we have a context theory which maps a string x to elements of L 1 (FIS(S)); if there is a parse for this string then there will be some component of x which corresponds to an idempotent element of FIS(S). Moreover we can interpret the magnitude of the component as the probability of that particular parse, thus the context theory describes a probabilistic variation of link grammar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},
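{
"text": "[Illustrative sketch, not part of the original paper.] An element of FIS(S) is idempotent exactly when the start and finish nodes of its birooted word-tree coincide, so a parse can be checked by tracing its sequence. A minimal self-contained Python sketch using the example parse above; all names are our own:\n\ndef trace(seq):\n    # walk the birooted word-tree of seq, creating edges as needed;\n    # return the start and finish nodes\n    edges, redges = {}, {}\n    current, fresh = 0, 1\n    for label, d in seq:\n        if d == 1 and (current, label) in edges:\n            current = edges[(current, label)]\n        elif d == -1 and (current, label) in redges:\n            current = redges[(current, label)]\n        else:\n            if d == 1:\n                edges[(current, label)] = fresh\n                redges[(fresh, label)] = current\n            else:\n                edges[(fresh, label)] = current\n                redges[(current, label)] = fresh\n            current = fresh\n            fresh += 1\n    return 0, current\n\n# s s^-1 m o d d^-1 o^-1 m^-1 j d a a^-1 d^-1 j^-1\nparse = [('s', 1), ('s', -1), ('m', 1), ('o', 1), ('d', 1), ('d', -1),\n         ('o', -1), ('m', -1), ('j', 1), ('d', 1), ('a', 1), ('a', -1),\n         ('d', -1), ('j', -1)]\nstart, finish = trace(parse)\nprint(start == finish)  # True: the component corresponds to an idempotent",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Syntax",
"sec_num": "3.4"
},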
{
"text": "For the sake of brevity, we summarise our approach to representing uncertainty in logical semantics, which is described in full elsewhere. Our aim is to be able to reason with probabilistic information about uncertainty in logical semantics. For example, in order to represent a natural language sentence as a logical statement, it is necessary to parse it, which may well be with a statistical parser. We may have hundreds of possible parses and logical representations of a sentence, and associated probabilities. Alternatively, we may wish to describe our uncertainty about word-sense disambiguation in the representation. Incorporating such probabilistic information into the representation of meaning may lead to more robust systems which are able to cope when one component fails.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty in Logical Semantics",
"sec_num": "3.5"
},
{
"text": "The basic principle we propose is to first represent unambiguous logical statements as a context theory. Our uncertainty about the meaning of a sentence can then be represented as a probability distribution over logical statements, whether the uncertainty arises from parsing, word-sense disambiguation or any other source. Incorporating this information is then straightforward: the representation of the sentence is the weighted sum of the representation of each possible meaning, where the weights are given by the probability distribution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty in Logical Semantics",
"sec_num": "3.5"
},
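{
"text": "[Illustrative sketch, not part of the original paper.] The weighted sum over possible meanings, and the degree of entailment of the resulting vector, can be computed directly. A minimal Python sketch with hypothetical parse probabilities and toy vectors over logical-statement contexts; all names and numbers are our own:\n\ndef weighted_sum(dist):\n    # representation of a sentence: sum of P(m_i) * v_i over its\n    # possible meanings m_i with context vectors v_i\n    out = {}\n    for prob, vec in dist:\n        for s, x in vec.items():\n            out[s] = out.get(s, 0.0) + prob * x\n    return out\n\ndef ent(u, v):\n    # degree of entailment from Section 2.1\n    keys = set(u) | set(v)\n    return sum(min(u.get(s, 0.0), v.get(s, 0.0)) for s in keys) / sum(u.values())\n\n# two hypothetical parses of a sentence, with probabilities 0.7 and 0.3\nsentence = weighted_sum([(0.7, {'p': 1.0}), (0.3, {'q': 1.0})])\nhypothesis = {'p': 1.0}\nprint(ent(sentence, hypothesis))  # 0.7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty in Logical Semantics",
"sec_num": "3.5"
},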
{
"text": "Computing the degree of entailment using this approach is computationally challenging, however we have shown that it is possible to estimate the degree of entailment by computing a lower bound on this value by calculating pairwise degrees of entailment for each possible logical statement. Mitchell and Lapata (2008) proposed a framework for composing meaning that is extremely general in nature: there is no requirement for linearity in the composition function, although in practice the authors do adopt this assumption. Indeed their \"multiplicative models\" require composition of two vectors to be a linear function of their tensor product; this is equivalent to our requirement of distributivity with respect to vector space addition.",
"cite_spans": [
{
"start": 290,
"end": 316,
"text": "Mitchell and Lapata (2008)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Uncertainty in Logical Semantics",
"sec_num": "3.5"
},
{
"text": "Various ways of composing vector based representations of meaning were investigated by Widdows (2008) , including the tensor product and direct sum. Both of these are compatible with the context theoretic framework since they are distributive with respect to the vector space addition. Clark et al. (2008) proposed a method of composing meaning that generalises Montague semantics; further work is required to determine how their method of composition relates to the contexttheoretic framework. Erk and Pado (2008) describe a method of composition that allows the incorporation of selectional preferences; again further work is required to determine the relation between this work and the context-theoretic framework.",
"cite_spans": [
{
"start": 87,
"end": 101,
"text": "Widdows (2008)",
"ref_id": "BIBREF19"
},
{
"start": 286,
"end": 305,
"text": "Clark et al. (2008)",
"ref_id": "BIBREF2"
},
{
"start": 495,
"end": 514,
"text": "Erk and Pado (2008)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "4"
},
{
"text": "We have given an introduction to the contexttheoretic framework, which provides mathematical guidelines on how vector-based representations of meaning should be composed, how entailment should be determined between these representations, and how probabilistic information should be incorporated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "We have shown how the framework can be applied to a wide range of problems in computational linguistics, including subsequence matching, vector based representations of taxonomies and statistical parsing. The ideas we have presented here are only a fraction of those described in full in (Clarke, 2007) , and we believe that even that is only the tip of the iceberg with regards to what it is possible to achieve with the framework.",
"cite_spans": [
{
"start": 288,
"end": 302,
"text": "(Clarke, 2007)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
}
],
"back_matter": [
{
"text": "I am very grateful to my supervisor David Weir for all his help in the development of these ideas, and to Rudi Lutz and the anonymous reviewers for many useful comments and suggestions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Invitation to Operator Theory",
"authors": [
{
"first": "Y",
"middle": [
"A"
],
"last": "Abramovich",
"suffix": ""
},
{
"first": "Charalambos",
"middle": [
"D"
],
"last": "Aliprantis",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. A. Abramovich and Charalambos D. Aliprantis. 2002. An Invitation to Operator Theory. American Mathematical Society.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A compositional distributional model of meaning",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Bob",
"middle": [],
"last": "Coecke",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Second Symposium on Quantum Interaction",
"volume": "",
"issue": "",
"pages": "133--140",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Clark, Bob Coecke, and Mehrnoosh Sadrzadeh. 2008. A compositional distribu- tional model of meaning. In Proceedings of the Second Symposium on Quantum Interaction, Oxford, UK, pages 133-140.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Meaning as context and subsequence analysis for textual entailment",
"authors": [
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Second PASCAL Recognising Textual Entailment Challenge",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daoud Clarke. 2006. Meaning as context and subse- quence analysis for textual entailment. In Proceed- ings of the Second PASCAL Recognising Textual En- tailment Challenge.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Context-theoretic Semantics for Natural Language: an Algebraic Framework",
"authors": [
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daoud Clarke. 2007. Context-theoretic Semantics for Natural Language: an Algebraic Framework. Ph.D. thesis, Department of Informatics, University of Sussex.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The pascal recognising textual entailment challenge",
"authors": [
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the PASCAL Challenges Workshop on Recognising Textual Entailment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In Proceedings of the PASCAL Chal- lenges Workshop on Recognising Textual Entail- ment.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Furnas",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Landauer",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester, Susan Dumais, George Furnas, Thomas Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A structured vector space model for word meaning in context",
"authors": [
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Pado",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katrin Erk and Sebastian Pado. 2008. A structured vector space model for word meaning in context. In Proceedings of EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "WordNet: An Electronic Lexical Database",
"authors": [],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christaine Fellbaum, editor. 1989. WordNet: An Elec- tronic Lexical Database. The MIT Press, Cam- bridge, Massachusetts.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Modes of meaning",
"authors": [
{
"first": "John",
"middle": [
"R"
],
"last": "Firth",
"suffix": ""
}
],
"year": 1957,
"venue": "Papers in Linguistics 1934-1951",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John R. Firth. 1957. Modes of meaning. In Papers in Linguistics 1934-1951. Oxford University Press, London.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "The distributional inclusion hypotheses and lexical entailment",
"authors": [
{
"first": "Maayan",
"middle": [],
"last": "Geffet",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maayan Geffet and Ido Dagan. 2005. The dis- tributional inclusion hypotheses and lexical entail- ment. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguistics (ACL'05), University of Michigan.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A probabilistic setting and lexical cooccurrence model for textual entailment",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Glickman",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "ACL-05 Workshop on Empirical Modeling of Semantic Equivalence and Entailment",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Glickman and Ido Dagan. 2005. A probabilis- tic setting and lexical cooccurrence model for tex- tual entailment. In ACL-05 Workshop on Empirical Modeling of Semantic Equivalence and Entailment.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributional structure",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1985,
"venue": "The Philosophy of Linguistics",
"volume": "",
"issue": "",
"pages": "26--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1985. Distributional structure. In Jer- rold J. Katz, editor, The Philosophy of Linguistics, pages 26-47. Oxford University Press.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Text classification using string kernels",
"authors": [
{
"first": "Huma",
"middle": [],
"last": "Lodhi",
"suffix": ""
},
{
"first": "Craig",
"middle": [],
"last": "Saunders",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Shawe-Taylor",
"suffix": ""
},
{
"first": "Nello",
"middle": [],
"last": "Cristianini",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Watkins",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "419--444",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Ma- chine Learning Research, 2:419-444.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Vector-based models of semantic composition",
"authors": [
{
"first": "Jeff",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "236--244",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Free inverse semigroup. Proceedings of the",
"authors": [
{
"first": "W",
"middle": [
"D"
],
"last": "Munn",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "29",
"issue": "",
"pages": "385--404",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. D. Munn. 1974. Free inverse semigroup. Proceed- ings of the London Mathematical Society, 29:385- 404.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Using information content to evaluate semantic similarity in a taxonomy",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "448--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In IJ- CAI, pages 448-453.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parsing english with a link grammar",
"authors": [
{
"first": "Daniel",
"middle": [
"D"
],
"last": "Sleator",
"suffix": ""
},
{
"first": "Davy",
"middle": [],
"last": "Temperley",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel D. Sleator and Davy Temperley. 1991. Pars- ing english with a link grammar. Technical Report CMU-CS-91-196, Department of Computer Sci- ence, Carnegie Mellon University.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Characterising measures of lexical distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Diana",
"middle": [],
"last": "Mccarthy",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th International Conference of Computational Linguistics, COLING-2004",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising measures of lexical distributional similarity. In Proceedings of the 20th International Conference of Computational Linguistics, COLING- 2004, Geneva, Switzerland.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Semantic vector products: Some initial investigations",
"authors": [
{
"first": "Dominic",
"middle": [],
"last": "Widdows",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Second Symposium on Quantum Interaction, Oxford",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dominic Widdows. 2008. Semantic vector products: Some initial investigations. In Proceedings of the Second Symposium on Quantum Interaction, Ox- ford, UK.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Philosophical Investigations",
"authors": [
{
"first": "Ludwig",
"middle": [],
"last": "Wittgenstein",
"suffix": ""
}
],
"year": 1953,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ludwig Wittgenstein. 1953. Philosophical Investiga- tions. Macmillan, New York. G. Anscombe, trans- lator.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "The simplest one defines e d \u2022 e f = e d if d = f and e d e f = 0 otherwise. The effect of multiplication of the context vectors of two strings is then set intersection: (x\u2022\u0177)(d) = 1 if x and y occur in document d 0 otherwise.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Real Valued Taxonomy). A real valued taxonomy is a finite set S of concepts with a partial ordering \u2264 and a positive real function p over S. The measure of a concept is then defined in terms of p asp (x) = y\u2208\u2193(x) p(y).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF2": {
"text": "Figure 2: A small example taxonomy extracted from WordNet (Fellbaum, 1989).",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF3": {
"text": "A link grammar parse. Link types: s: subject, o: object, m: modifying phrases, a: adjective, j: preposition, d: determiner.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF4": {
"text": ", together with two distinguished nodes, called the start node, 2 and finish node, \u2022.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF5": {
"text": "s(they, mashed) m(mashed, through) o(mashed, way) d(their, way) j(through, mud) d(the, mud) a(thick, mud)",
"uris": null,
"num": null,
"type_str": "figure"
}
}
}
}