{
"paper_id": "W07-0207",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T04:38:53.222265Z"
},
"title": "Latent Semantic Grammar Induction: Context, Projectivity, and Prior Distributions",
"authors": [
{
"first": "Andrew",
"middle": [
"M"
],
"last": "Olney",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Memphis Memphis",
"location": {
"postCode": "38152",
"region": "TN"
}
},
"email": "[email protected]"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents latent semantic grammars for the unsupervised induction of English grammar. Latent semantic grammars were induced by applying singular value decomposition to n-gram by context-feature matrices. Parsing was used to evaluate performance. Experiments with context, projectivity, and prior distributions show the relative performance effects of these kinds of prior knowledge. Results show that prior distributions, projectivity, and part of speech information are not necessary to beat the right branching baseline.",
"pdf_parse": {
"paper_id": "W07-0207",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents latent semantic grammars for the unsupervised induction of English grammar. Latent semantic grammars were induced by applying singular value decomposition to n-gram by context-feature matrices. Parsing was used to evaluate performance. Experiments with context, projectivity, and prior distributions show the relative performance effects of these kinds of prior knowledge. Results show that prior distributions, projectivity, and part of speech information are not necessary to beat the right branching baseline.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Unsupervised grammar induction (UGI) generates a grammar from raw text. It is an interesting problem both theoretically and practically. Theoretically, it connects to the linguistics debate on innate knowledge (Chomsky, 1957) . Practically, it has the potential to supersede techniques requiring structured text, like treebanks. Finding structure in text with little or no prior knowledge is therefore a fundamental issue in the study of language.",
"cite_spans": [
{
"start": 210,
"end": 225,
"text": "(Chomsky, 1957)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, UGI is still a largely unsolved problem. Recent work (Klein and Manning, 2002; Klein and Manning, 2004) has renewed interest by using a UGI model to parse sentences from the Wall Street Journal section of the Penn Treebank (WSJ). These parsing results are exciting because they demonstrate real-world applicability to English UGI. While other contemporary research in this area is promising, the case for real-world English UGI has not been as convincingly made (van Zaanen, 2000; Solan et al., 2005) .",
"cite_spans": [
{
"start": 62,
"end": 87,
"text": "(Klein and Manning, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 88,
"end": 112,
"text": "Klein and Manning, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 471,
"end": 489,
"text": "(van Zaanen, 2000;",
"ref_id": "BIBREF27"
},
{
"start": 490,
"end": 509,
"text": "Solan et al., 2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper weaves together two threads of inquiry. The first thread is latent semantics, which have not been previously used in UGI. The second thread is dependency-based UGI, used by Klein and Manning (2004) , which nicely dovetails with our semantic approach. The combination of these threads allows some exploration of what characteristics are sufficient for UGI and what characteristics are necessary.",
"cite_spans": [
{
"start": 184,
"end": 208,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work has focused on syntax to the exclusion of semantics (Brill and Marcus, 1992; van Zaanen, 2000; Klein and Manning, 2002; Paskin, 2001 ; Klein and Manning, 2004; Solan et al., 2005) . However, results from the speech recognition community show that the inclusion of latent semantic information can enhance the performance of their models (Coccaro and Jurafsky, 1998; Bellegarda, 2000; Deng and Khudanpur, 2003) . Using latent semantic information to improve UGI is therefore both novel and relevant.",
"cite_spans": [
{
"start": 66,
"end": 90,
"text": "(Brill and Marcus, 1992;",
"ref_id": "BIBREF3"
},
{
"start": 91,
"end": 108,
"text": "van Zaanen, 2000;",
"ref_id": "BIBREF27"
},
{
"start": 109,
"end": 133,
"text": "Klein and Manning, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 134,
"end": 146,
"text": "Paskin, 2001",
"ref_id": "BIBREF23"
},
{
"start": 149,
"end": 173,
"text": "Klein and Manning, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 174,
"end": 193,
"text": "Solan et al., 2005)",
"ref_id": "BIBREF26"
},
{
"start": 350,
"end": 378,
"text": "(Coccaro and Jurafsky, 1998;",
"ref_id": "BIBREF6"
},
{
"start": 379,
"end": 396,
"text": "Bellegarda, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 397,
"end": 422,
"text": "Deng and Khudanpur, 2003)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantics",
"sec_num": "2"
},
{
"text": "The latent semantic information used by the speech recognition community above is produced by latent semantic analysis (LSA), also known as latent semantic indexing (Deerwester et al., 1990; . LSA creates a semantic representation of both words and collections of words in a vector space, using a two part process. First, a term by document matrix is created in which the frequency of word w i in document d j is the value of cell c ij . Filters may be applied during this process which eliminate undesired terms, e.g. common words. Weighting may also be applied to decrease the contributions of frequent words (Dumais, 1991) . Secondly, singular value decomposition (SVD) is applied to the term by document matrix. The resulting matrix decomposition has the property that the removal of higher-order dimensions creates an optimal reduced representation of the original matrix in the least squares sense (Berry et al., 1995) . Therefore, SVD performs a kind of dimensionality reduction such that words appearing in different documents can acquire similar row vector representations (Landauer and Dumais, 1997) . Words can be compared by taking the cosine of their corresponding row vectors. Collections of words can likewise be compared by first adding the corresponding row vectors in each collection, then taking the cosine between the two collection vectors.",
"cite_spans": [
{
"start": 165,
"end": 190,
"text": "(Deerwester et al., 1990;",
"ref_id": "BIBREF10"
},
{
"start": 611,
"end": 625,
"text": "(Dumais, 1991)",
"ref_id": "BIBREF12"
},
{
"start": 904,
"end": 924,
"text": "(Berry et al., 1995)",
"ref_id": "BIBREF1"
},
{
"start": 1096,
"end": 1109,
"text": "Dumais, 1997)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantics",
"sec_num": "2"
},
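A minimal sketch of the LSA pipeline just described, assuming a toy corpus and plain NumPy rather than the authors' tooling: build a term by document count matrix, truncate its SVD, and compare words by the cosine of their reduced row vectors.

```python
import numpy as np

# Toy corpus; each "document" is one sentence (illustrative only).
docs = ["john likes cheese", "john likes string cheese", "mary likes john"]
vocab = sorted({w for d in docs for w in d.split()})

# Term-by-document matrix: cell (i, j) = frequency of word i in document j.
X = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[vocab.index(w), j] += 1

# SVD, keeping only the top-k dimensions (the reduced representation).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
W = U[:, :k] * s[:k]   # reduced row vector for each word

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(W[vocab.index("string")], W[vocab.index("cheese")]))
```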
{
"text": "A stumbling block to incorporating LSA into UGI is that grammars are inherently ordered but LSA is not. LSA is unordered because the sum of vectors is the same regardless of the order in which they were added. The incorporation of word order into LSA has never been successfully carried out before, although there have been attempts to apply word order post-hoc to LSA (Wiemer-Hastings and Zipitria, 2001) . A straightforward notion of incorporating word order into LSA is to use n-grams instead of individual words. In this way a unigram, bigram, and trigram would each have an atomic vector representation and be directly comparable.",
"cite_spans": [
{
"start": 369,
"end": 405,
"text": "(Wiemer-Hastings and Zipitria, 2001)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantics",
"sec_num": "2"
},
{
"text": "It may seem counterintuitive that such an n-gram scheme has never been used in conjunction with LSA. Simple as this scheme may be, it quickly falls prey to memory limitations of modern day computers for computing the SVD. The standard for computing the SVD in the NLP sphere is Berry (1992)'s SVDPACK, whose single vector Lanczos recursion method with re-orthogonalization was incorporated into the BellCore LSI tools. Subsequently, either SVDPACK or the LSI tools were used by the majority of researchers in this area (Sch\u00fctze, 1995; Landauer and Dumais, 1997; Coccaro and Jurafsky, 1998; Bellegarda, 2000; Deng and Khudanpur, 2003) . Using John likes string cheese. Larsen (1998) , a standard orthogonal SVD of a unigram/bigram by sentence matrix of the LSA Touchstone Applied Science Associates Corpus (Landauer et al., 1998) requires over 60 gigabytes of random access memory. This estimate is prohibitive for all but current supercomputers.",
"cite_spans": [
{
"start": 519,
"end": 534,
"text": "(Sch\u00fctze, 1995;",
"ref_id": "BIBREF25"
},
{
"start": 535,
"end": 561,
"text": "Landauer and Dumais, 1997;",
"ref_id": "BIBREF19"
},
{
"start": 562,
"end": 589,
"text": "Coccaro and Jurafsky, 1998;",
"ref_id": "BIBREF6"
},
{
"start": 590,
"end": 607,
"text": "Bellegarda, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 608,
"end": 633,
"text": "Deng and Khudanpur, 2003)",
"ref_id": "BIBREF11"
},
{
"start": 668,
"end": 681,
"text": "Larsen (1998)",
"ref_id": "BIBREF21"
},
{
"start": 798,
"end": 828,
"text": "Corpus (Landauer et al., 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantics",
"sec_num": "2"
},
{
"text": "However, it is possible to use a non-orthogonal SVD approach with significant memory savings (Cullum and Willoughby, 2002) . A non-orthogonal approach creates the same matrix decomposition as traditional approaches, but the resulting memory savings allow dramatically larger matrix decompositions. Thus a non-orthongonal SVD approach is key to the inclusion of ordered latent semantics into our UGI model.",
"cite_spans": [
{
"start": 93,
"end": 122,
"text": "(Cullum and Willoughby, 2002)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Latent semantics",
"sec_num": "2"
},
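For scale, a sparse truncated SVD is one way to keep memory manageable. The sketch below uses SciPy's svds (an ARPACK/Lanczos routine), which is not the non-reorthogonalized Lanczos procedure of Cullum and Willoughby that the paper relies on, but it illustrates decomposing a large sparse matrix while materializing only a small number of singular triplets; the matrix here is random placeholder data.

```python
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Placeholder for a large, very sparse n-gram by context-feature matrix.
X = sparse_random(20000, 5000, density=1e-4, format="csr", random_state=0)

# Only k singular triplets are computed and stored.
U, s, Vt = svds(X, k=100)
print(U.shape, s.shape, Vt.shape)   # (20000, 100) (100,) (100, 5000)
```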
{
"text": "Dependency structures are an ideal grammar representation for evaluating UGI. Because dependency structures have no higher order nodes, e.g. NP, their evaluation is simple: one may compare with a reference parse and count the proportion of correct dependencies. For example, Figure 1 has three dependencies {( John, likes ), ( cheese, likes ), ( string, cheese ) }, so the trial parse {( John, likes ), ( string, likes ), ( cheese, string )} has 1/3 directed dependencies correct and 2/3 undirected dependencies correct. This metric avoids the biases created by bracketing, where over-generation or undergeneration of brackets may cloud actual performance (Carroll et al., 2003) . Dependencies are equivalent with lexicalized trees (see Figures 1 and 2 ) so long as the dependencies are projective. Dependencies are projective when all heads and their dependents are a contiguous sequence.",
"cite_spans": [
{
"start": 656,
"end": 678,
"text": "(Carroll et al., 2003)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [
{
"start": 275,
"end": 283,
"text": "Figure 1",
"ref_id": null
},
{
"start": 737,
"end": 752,
"text": "Figures 1 and 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency grammars",
"sec_num": "3"
},
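The metric above is easy to state in code. A small sketch using the Figure 1 example, with dependencies written as (dependent, head) pairs; directed accuracy requires an exact match, while undirected accuracy also accepts the reversed pair.

```python
gold  = {("john", "likes"), ("cheese", "likes"), ("string", "cheese")}
trial = {("john", "likes"), ("string", "likes"), ("cheese", "string")}

directed = len(gold & trial) / len(gold)
undirected = sum((d, h) in gold or (h, d) in gold for d, h in trial) / len(gold)
print(directed, undirected)   # 0.333... directed, 0.666... undirected
```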
{
"text": "Dependencies have been used for UGI before with mixed success (Paskin, 2001; Klein and Manning, 2004) . Paskin (2001) created a projective model using words, and he evaluated on WSJ. Although he reported beating the random baseline for that task, both Klein and Manning (2004) and we have repli-",
"cite_spans": [
{
"start": 62,
"end": 76,
"text": "(Paskin, 2001;",
"ref_id": "BIBREF23"
},
{
"start": 77,
"end": 101,
"text": "Klein and Manning, 2004)",
"ref_id": "BIBREF17"
},
{
"start": 252,
"end": 276,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dependency grammars",
"sec_num": "3"
},
{
"text": "John likes NP cheese string cheese Figure 2 : A Lexicalized Tree cated the random baseline above Paskin's results. Klein and Manning (2004) , on the other hand, have handily beaten a random baseline using a projective model over part of speech tags and evaluating on a subset of WSJ, WSJ10.",
"cite_spans": [
{
"start": 115,
"end": 139,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dependency grammars",
"sec_num": "3"
},
{
"text": "There are several unanswered questions in dependency-based English UGI. Some of these may be motivated from the Klein and Manning (2004) model, while others may be motivated from research efforts outside the UGI community. Altogether, these questions address what kinds of prior knowledge are, or are not necessary for successful UGI. Klein and Manning (2004) used part of speech tags as basic elements instead of words. Although this move can be motivated on data sparsity grounds, it is somewhat at odds with the lexicalized nature of dependency grammars. Since Paskin (2001)'s previous attempt using words as basic elements was unsuccessful, it is not clear whether parts of speech are necessary prior knowledge in this context.",
"cite_spans": [
{
"start": 112,
"end": 136,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF17"
},
{
"start": 335,
"end": 359,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unanswered questions",
"sec_num": "4"
},
{
"text": "Projectivity is an additional constraint that may not be necessary for successful UGI. English is a projective language, but other languages, such as Bulgarian, are not (Pericliev and Ilarionov, 1986) . Nonprojective UGI has not previously been studied, and it is not clear how important projectivity assumptions are to English UGI. Figure 3 gives an example of a nonprojective construction: not all heads and their dependents are a contiguous sequence.",
"cite_spans": [
{
"start": 169,
"end": 200,
"text": "(Pericliev and Ilarionov, 1986)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 333,
"end": 341,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Projectivity",
"sec_num": "4.2"
},
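A small checker for the projectivity condition just defined, assuming a dependency tree given as a head index per word (with -1 marking the root): an arc is projective when every word lying between a head and its dependent is itself a descendant of that head.

```python
def is_projective(heads):
    """heads[i] is the index of word i's head, or -1 for the root word."""
    def descends_from(i, ancestor):
        while i != -1:
            if i == ancestor:
                return True
            i = heads[i]
        return False

    for dep, head in enumerate(heads):
        if head == -1:
            continue
        lo, hi = sorted((dep, head))
        if not all(descends_from(k, head) for k in range(lo + 1, hi)):
            return False
    return True

# "John likes string cheese" with likes as root, cheese -> likes, string -> cheese:
print(is_projective([1, -1, 3, 1]))   # True
# Attaching john to string across the root word is non-projective:
print(is_projective([2, -1, 1, 1]))   # False
```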
{
"text": "John string likes cheese. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Projectivity",
"sec_num": "4.2"
},
{
"text": "The core of several UGI approaches is distributional analysis (Brill and Marcus, 1992; van Zaanen, 2000; Klein and Manning, 2002; Paskin, 2001 ; Klein and Manning, 2004; Solan et al., 2005) . The key idea in such distributional analysis is that the function of a word may be known if it can be substituted for another word (Harris, 1954) . If so, both words have the same function. Substitutability must be defined over a context. In UGI, this context has typically been the preceding and following words of the target word. However, this notion of context has an implicit assumption of word order. This assumption is true for English, but is not true for other languages such as Latin. Therefore, it is not clear how dependent English UGI is on local linear context, e.g. preceding and following words, or whether an unordered notion of context would also be effective. Klein and Manning (2004) point their model in the right direction by initializing the probability of dependencies inversely proportional to the distance between the head and the dependent. This is a very good initialization: Figure 4 shows the actual distances for the dataset used, WSJ10. Klein (2005) states that, \"It should be emphasized that this initialization was important in getting reasonable patterns out of this model.\" (p. 89). However, it is not clear that this is necessarily true for all UGI models.",
"cite_spans": [
{
"start": 62,
"end": 86,
"text": "(Brill and Marcus, 1992;",
"ref_id": "BIBREF3"
},
{
"start": 87,
"end": 104,
"text": "van Zaanen, 2000;",
"ref_id": "BIBREF27"
},
{
"start": 105,
"end": 129,
"text": "Klein and Manning, 2002;",
"ref_id": "BIBREF16"
},
{
"start": 130,
"end": 142,
"text": "Paskin, 2001",
"ref_id": "BIBREF23"
},
{
"start": 145,
"end": 169,
"text": "Klein and Manning, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 170,
"end": 189,
"text": "Solan et al., 2005)",
"ref_id": "BIBREF26"
},
{
"start": 323,
"end": 337,
"text": "(Harris, 1954)",
"ref_id": "BIBREF14"
},
{
"start": 871,
"end": 895,
"text": "Klein and Manning (2004)",
"ref_id": "BIBREF17"
},
{
"start": 1161,
"end": 1173,
"text": "Klein (2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 1096,
"end": 1104,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Context",
"sec_num": "4.3"
},
{
"text": "Semantics have not been included in previous UGI models, despite successful application in the speech recognition community (see Section 2). However, there have been some related efforts in unsupervised part of speech induction (Sch\u00fctze, 1995) . These efforts have used SVD as a dimensionality reduction step between distributional analysis and clustering. Although not labelled as \"semantic\" this work has produced the best unsupervised part of speech induction results. Thus our last question is whether SVD can be applied to a UGI model to improve results.",
"cite_spans": [
{
"start": 228,
"end": 243,
"text": "(Sch\u00fctze, 1995)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics",
"sec_num": "4.5"
},
{
"text": "The WSJ10 dataset was used for evaluation to be comparable to previous results (Klein and Manning, 2004) . WSJ10 is a subset of the Wall Street Journal section of the Penn Treebank, containing only those sentences of 10 words or less after punctuation has been removed. WSJ10 contains 7422 sentences. To counteract the data sparsity encountered by using ngrams instead of parts of speech, we used the entire WSJ and year 1994 of the North American News Text Corpus. These corpora were formatted according to the same rules as the WSJ10, split into sentences (as documents) and concatenated. The combined corpus contained roughly 10 million words and 460,000 sentences.",
"cite_spans": [
{
"start": 79,
"end": 104,
"text": "(Klein and Manning, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Materials",
"sec_num": "5.1"
},
{
"text": "Dependencies, rather than the original bracketing, were used as the gold standard for parsing performance. Since the Penn Treebank does not label dependencies, it was necessary to apply rules to extract dependencies from WSJ10 (Collins, 1999) .",
"cite_spans": [
{
"start": 227,
"end": 242,
"text": "(Collins, 1999)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Materials",
"sec_num": "5.1"
},
{
"text": "The first step is unsupervised latent semantic grammar induction. This was accomplished by first creating n-gram by context feature matrices, where the feature varies as per Section 4.3. The Context global approach uses a bigram by document matrix such that word order is eliminated. Therefore the value of cell ij is the number of times ngram i occurred in document j . The matrix had approximate dimensions 2.2 million by 460,000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
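A compact sketch of the Context global construction under toy data: rows index unigrams and bigrams, columns index documents (here, sentences), and cell (i, j) counts occurrences of n-gram i in document j. The corpus and dimensions are placeholders, not the 2.2 million by 460,000 matrix actually used.

```python
from collections import Counter

sentences = ["john likes string cheese", "mary likes cheese"]

def ngrams(tokens):
    # Unigrams plus adjacent bigrams, as rows of the matrix.
    return tokens + [" ".join(b) for b in zip(tokens, tokens[1:])]

rows = sorted({g for s in sentences for g in ngrams(s.split())})
row_index = {g: i for i, g in enumerate(rows)}

counts = [[0] * len(sentences) for _ in rows]
for j, s in enumerate(sentences):
    for g, c in Counter(ngrams(s.split())).items():
        counts[row_index[g]][j] = c
print(rows[0], counts[0])
```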
{
"text": "The Context local approach uses a bigram by local window matrix. If there are n distinct unigrams in the corpus, the first n columns contain the counts of the words preceding a target word, and the last n columns contain the counts of the words following a target word. For example, the value of at cell ij is the number of times unigram j occurred before the target ngram i . The value of cell i(j+n) is the number of times unigram j occurred after the target ngram i . The matrix had approximate dimensions 2.2 million by 280,000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
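The Context local layout can be sketched the same way: for each target n-gram, the first n columns count the unigram immediately to its left and the last n columns count the unigram immediately to its right. The data and exact column layout here are illustrative assumptions.

```python
from collections import defaultdict

sentences = [["john", "likes", "string", "cheese"], ["mary", "likes", "cheese"]]
unigrams = sorted({w for s in sentences for w in s})
u_index = {w: i for i, w in enumerate(unigrams)}
n = len(unigrams)

rows = defaultdict(lambda: [0] * (2 * n))   # target n-gram -> 2n context counts
for s in sentences:
    spans = [(i, i + 1) for i in range(len(s))] + [(i, i + 2) for i in range(len(s) - 1)]
    for start, end in spans:                # unigram and bigram targets
        target = " ".join(s[start:end])
        if start > 0:
            rows[target][u_index[s[start - 1]]] += 1    # preceding unigram
        if end < len(s):
            rows[target][n + u_index[s[end]]] += 1      # following unigram
print(rows["likes cheese"])
```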
{
"text": "After the matrices were constructed, each was transformed using SVD. Because the nonorthogonal SVD procedure requires a number of Lanczos steps approximately proportional to the square of the number of dimensions desired, the number of dimensions was limited to 100. This kept running time and storage requirements within reasonable limits, approximately 4 days and 120 gigabytes of disk storage to create each.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
{
"text": "Next, a parsing table was constructed. For each bigram, the closest unigram neighbor, in terms of cosine, was found, cf. Brill and Marcus (1992) . The neighbor, cosine to that neighbor, and cosines of the bigram's constituents to that neighbor were stored. The constituent with the highest cosine to the neighbor was considered the likely head, based on classic head test arguments (Hudson, 1987) . This data was stored in a lookup table so that for each bigram the associated information may be found in constant time.",
"cite_spans": [
{
"start": 121,
"end": 144,
"text": "Brill and Marcus (1992)",
"ref_id": "BIBREF3"
},
{
"start": 382,
"end": 396,
"text": "(Hudson, 1987)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
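A sketch of the parsing-table step under the same assumptions: `vectors` maps each unigram and bigram to its post-SVD row vector (placeholder data here), the nearest unigram neighbor of each bigram is found by cosine, and the constituent closer to that neighbor is recorded as the likely head.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def build_parse_table(vectors, unigrams, bigrams):
    table = {}
    for bigram in bigrams:
        w1, w2 = bigram.split()
        neighbor = max(unigrams, key=lambda u: cosine(vectors[bigram], vectors[u]))
        c1, c2 = cosine(vectors[w1], vectors[neighbor]), cosine(vectors[w2], vectors[neighbor])
        head, dep = (w1, w2) if c1 >= c2 else (w2, w1)
        table[(w1, w2)] = {
            "neighbor": neighbor,
            "bigram_cos": cosine(vectors[bigram], vectors[neighbor]),
            "head": head, "head_cos": max(c1, c2),
            "dep": dep, "dep_cos": min(c1, c2),
        }
    return table

# Placeholder vectors standing in for the reduced SVD row vectors.
rng = np.random.default_rng(0)
vecs = {g: rng.normal(size=10) for g in ["john", "likes", "cheese", "john likes"]}
print(build_parse_table(vecs, ["john", "likes", "cheese"], ["john likes"]))
```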
{
"text": "Next, the WSJ10 was parsed using the parsing table described above and a minimum spanning tree algorithm for dependency parsing (McDonald et al., 2005) . Each input sentence was tokenized on whitespace and lowercased. Moving from left to right, each word was paired with all remaining words on its right. If a pair existed in the parsing table, the associated information was retrieved. This information was used to populate the fully connected graph that served as input to the minimum spanning tree algorithm. Specifically, when a pair was retrieved from the parsing table, the arc from the stored head to the dependent was given a weight equal to the cosine between the head and the nearest unigram neighbor for that bigram pair. Likewise the arc from the dependent to the head was given a weight equal to the cosine between the dependent and the nearest unigram neighbor for that bigram pair. Thus the weight on each arc was based on the degree of substitutability between that word and the nearest unigram neighbor for the bigram pair.",
"cite_spans": [
{
"start": 128,
"end": 151,
"text": "(McDonald et al., 2005)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
{
"text": "If a bigram was not in the parsing table, it was given maximum weight, making that dependency maximally unlikely. After all the words in the sentence had been processed, the average of all current weights was found, and this average was used as the weight from a dummy root node to all other nodes (the dummy ROOT is further motivated in Section 5.3). Therefore all words were given equal likelihood of being the root of the sentence. The end result of this graph construction process is an n by n + 1 matrix, where n is the number of words and there is one dummy root node. Then this graph was input to the minimum spanning tree algorithm. The output of this algorithm is a non-projective dependency tree, which was directly compared to the gold standard dependency tree, as well as the respective baselines discussed in Section 5.3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
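A sketch of the graph construction and spanning-tree step. The weights below are placeholders for the cosine-based weights described above, and networkx's Edmonds-based minimum spanning arborescence is used as one available solver; the paper cites McDonald et al. (2005) but does not specify an implementation.

```python
import networkx as nx

words = ["john", "likes", "string", "cheese"]
MAX_WEIGHT = 1e6                       # unknown pairs: maximally unlikely
known = {("likes", "john"): 0.2,       # head -> dependent, illustrative values
         ("likes", "cheese"): 0.3,
         ("cheese", "string"): 0.1}

G = nx.DiGraph()
avg = sum(known.values()) / len(known)
for w in words:
    G.add_edge("ROOT", w, weight=avg)  # every word equally likely to be the root
for head in words:
    for dep in words:
        if head != dep:
            G.add_edge(head, dep, weight=known.get((head, dep), MAX_WEIGHT))

tree = nx.minimum_spanning_arborescence(G)
print(sorted(tree.edges()))
```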
{
"text": "To gauge the differential effects of projectivity and prior knowledge, the above procedure was modified in additional evaluation trials. Projectivity was incorporated by using a bottom-up algorithm (Covington, 2001 ). The algorithm was applied in two stages. First, it was applied using the nonprojective parse as input. By comparing the output parse to the original nonprojective parse, it is possible to identify independent words that could not be incorporated into the projective parse. In the second stage, the projective algorithm was run again on the nonprojective input, except this time the independent words were allowed to link to any other words defined by the parsing table. In other words, the first stage identifies unattached words, and the second stage \"repairs\" the words by finding a projective attachment for them. This method of enforcing projectivity was chosen because it makes use of the same information as the nonprojective method, but it goes a step further to enforce projectivity.",
"cite_spans": [
{
"start": 198,
"end": 214,
"text": "(Covington, 2001",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
{
"text": "Prior distributions of dependencies, as depicted in Figure 4 , were incorporated by inversely weighting ROOT John likes string cheese Figure 6 : Left Branching Baseline graph edges by the distance between words. This modification transparently applies to both the nonprojective case and the projective case.",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 60,
"text": "Figure 4",
"ref_id": "FIGREF1"
},
{
"start": 134,
"end": 142,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Procedure",
"sec_num": "5.2"
},
{
"text": "Two performance baselines for dependency parsing were used in this experiment, the so-called right and left branching baselines. A right branching baseline predicts that the head of each word is the word to the left, forming a chain from left to right. An example is given in Figure 5 . Conversely, a left branching baseline predicts that the head of each word is the word to the right, forming a chain from right to left. An example is given in Figure 6 . Although perhaps not intuitively very powerful baselines, the right and left branching baselines can be very effective for the WSJ10. For WSJ10, most heads are close to their dependents, as shown in Figure 4 . For example, the percentage of dependencies with a head either immediately to the right or left is 53%. Of these neighboring heads, 17% are right branching, and 36% are left branching.",
"cite_spans": [],
"ref_spans": [
{
"start": 276,
"end": 284,
"text": "Figure 5",
"ref_id": "FIGREF2"
},
{
"start": 446,
"end": 454,
"text": "Figure 6",
"ref_id": null
},
{
"start": 656,
"end": 664,
"text": "Figure 4",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Scoring",
"sec_num": "5.3"
},
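The two baselines are simple to realize. One reading, assumed here, attaches the end-of-chain word to a dummy ROOT: right branching takes each word's head to be the word on its left (the first word attaching to ROOT), and left branching takes each word's head to be the word on its right (the last word attaching to ROOT).

```python
def right_branching(words):
    # Head of each word is the word to its left; the first word heads the chain.
    return [("ROOT" if i == 0 else words[i - 1], w) for i, w in enumerate(words)]

def left_branching(words):
    # Head of each word is the word to its right; the last word heads the chain.
    return [(words[i + 1] if i + 1 < len(words) else "ROOT", w)
            for i, w in enumerate(words)]

print(right_branching(["john", "likes", "string", "cheese"]))
print(left_branching(["john", "likes", "string", "cheese"]))
```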
{
"text": "By using the sign test, the statistical significance of parsing results can be determined. The sign test is perhaps the most basic non-parametric tests and so is useful for this task because it makes no assumptions regarding the underlying distribution of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "5.3"
},
{
"text": "Consider each sentence. Every word must have exactly one head. That means that for n words, there is a 1/n chance of selecting the correct head (excluding self-heads and including a dummy root head). If all dependencies in a sentence are independent, then a sentence's dependencies follow a binomial distribution, with n equal to the number of words, p equal to 1/n, and k equal to the number of correct dependencies. From this it follows that the expected number of correct dependencies per sentence is np, or 1. Thus the random baseline for nonprojective depen-dency parsing performance is one dependency per sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "5.3"
},
{
"text": "Using the gold standard of the WSJ10, the number of correct dependencies found by the latent semantic model can be established. The null hypothesis is that one randomly generated dependency should be correct per sentence. Suppose that r + sentences have more correct dependencies and r \u2212 sentences have fewer correct dependencies (i.e. 0). Under the null hypothesis, half of the values should be above 1 and half below, so p = 1/2. Since signed difference is being considered, sentences with dependencies equal to 1 are excluded. The corresponding binomial distribution of the signs to calculate whether the model is better than chance is b(n, p) = b(r + + r \u2212 , 1/2). The corresponding p-value may be calculated using Equation 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "5.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 \u2212 r + \u22121 k=0 n! k!(n \u2212 k)! 1/2(1/2) n\u2212k",
"eq_num": "(1)"
}
],
"section": "Scoring",
"sec_num": "5.3"
},
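Equation 1 is the upper tail of a binomial distribution with p = 1/2, which a few lines compute directly; r_plus and r_minus here are whatever counts of positively and negatively signed sentences the comparison produces.

```python
from math import comb

def sign_test_pvalue(r_plus, r_minus):
    # P(at least r_plus successes out of n trials with p = 1/2).
    n = r_plus + r_minus
    return 1.0 - sum(comb(n, k) * 0.5 ** n for k in range(r_plus))

print(sign_test_pvalue(60, 40))   # illustrative counts only
```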
{
"text": "This same method can be used for determining statistically significant improvement over right and left branching baselines. For each sentence, the difference between the number of correct dependencies in the candidate parse and the number of correct dependencies in the baseline may be calculated. The number of positive and negative signed differences are counted as r + and r \u2212 , respectively, and the procedure for calculating statistically significant improvement is the same.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Scoring",
"sec_num": "5.3"
},
{
"text": "Each model in Table 6 has significantly better performance than item above using statistical procedure described in Section 5.2. A number of observations can be drawn from this table. First, all the models outperform random and right branching baselines. This is the first time we are aware of that this has been shown with lexical items in dependency UGI. Secondly, local context outperforms global context. This is to be expected given the relatively fixed word order in English, but it is somewhat surprising that the differences between local and global are not greater. Thirdly, it is clear that the addition of prior knowledge, whether projectivity or prior distributions, improves performance. Fourthly, ",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 21,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The results in Section 6 address the unanswered questions identified in Section 4, i.e. parts of speech, semantics, context, projectivity, and prior distributions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "The most salient result in Section 6 is successful UGI without part of speech tags. As far as we know, this is the first time dependency UGI has been successful without the hidden syntactic structure provided by part of speech tags. It is interesting to note that latent semantic grammars improve upon Paskin (2001) , even though that model is projective. It appears that lexical semantics are the reason. Thus these results address two of the unanswered questions from Section 6 regarding parts of speech and semantics. Semantics improve dependency UGI. In fact, they improve dependency UGI so much so that parts of speech are not necessary to beat a right branching baseline.",
"cite_spans": [
{
"start": 302,
"end": 315,
"text": "Paskin (2001)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Context has traditionally been defined locally, e.g. the preceding and following word(s). The results above indicate that a global definition of context is also effective, though not quite as highly performing as a local definition on the WSJ10. This suggests that English UGI is not dependent on local linear context, and it motivates future exploration of word-order free languages using global context. It is also interesting to note that the differences between global and local contexts begin to disappear as projectivity and prior distributions are added. This suggests that there is a certain level of equivalence between a global context model that favors local attachments and a local context model that has no attachment bias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Projectivity has been assumed in previous cases of English UGI (Klein and Manning, 2004; Paskin, 2001) . As far as we know, this is the first time a nonprojective model has outperformed a random or right branching baseline. It is interesting that a nonprojective model can do so well when it assumes so little about the structure of a language. Even more interesting is that the addition of projectivity to the models above increases performance only slightly. It is tempting to speculate that projectivity may be something of a red herring for English dependency parsing, cf. McDonald et al. (2005) .",
"cite_spans": [
{
"start": 63,
"end": 88,
"text": "(Klein and Manning, 2004;",
"ref_id": "BIBREF17"
},
{
"start": 89,
"end": 102,
"text": "Paskin, 2001)",
"ref_id": "BIBREF23"
},
{
"start": 577,
"end": 599,
"text": "McDonald et al. (2005)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Prior distributions have been previously assumed as well (Klein and Manning, 2004) . The differential effect of prior distributions in previous work has not been clear. Our results indicate that a prior distribution will increase performance. However, as with projectivity, it is interesting how well the models perform without this prior knowledge and how slight an increase this prior knowledge gives. Overall, the prior distribution used in the evaluation is not necessary to beat the right branching baseline.",
"cite_spans": [
{
"start": 57,
"end": 82,
"text": "(Klein and Manning, 2004)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "Projectivity and prior distributions have significant overlap when the prior distribution favors closer attachments. Projectivity, by forcing a head to govern a contiguous subsequence, also favors closer attachments. The results reported in Section 6 suggest that there is a great deal of overlap in the benefit provided by projectivity and the prior distribution used in the evaluation. Either one or the other produces significant benefits, but the combination is much less impressive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "It is worthwhile to reiterate the sparseness of prior knowledge contained in the basic model used in these evaluations. There are essentially four components of prior knowledge. First, the ability to create an ngram by context feature matrix. Secondly, the application of SVD to that matrix. Thirdly, the creation of a fully connected dependency graph from the post-SVD matrix. And finally, the extraction of a minimum spanning tree from this graph. Al-though we have not presented evaluation on wordorder free languages, the basic model just described has no obvious bias against them. We expect that latent semantic grammars capture some of the universals of grammar induction. A fuller exploration and demonstration is the subject of future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "7"
},
{
"text": "This paper presented latent semantic grammars for the unsupervised induction of English grammar. The creation of latent semantic grammars and their application to parsing were described. Experiments with context, projectivity, and prior distributions showed the relative performance effects of these kinds of prior knowledge. Results show that assumptions of prior distributions, projectivity, and part of speech information are not necessary for this task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "8"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Large vocabulary speech recognition with multispan statistical language models",
"authors": [
{
"first": "R",
"middle": [],
"last": "Jerome",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Bellegarda",
"suffix": ""
}
],
"year": 2000,
"venue": "IEEE Transactions on Speech and Audio Processing",
"volume": "8",
"issue": "1",
"pages": "76--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerome R. Bellegarda. 2000. Large vocabulary speech recognition with multispan statistical language mod- els. IEEE Transactions on Speech and Audio Process- ing, 8(1):76-84.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Using linear algebra for intelligent information retrieval",
"authors": [
{
"first": "Michael",
"middle": [
"W"
],
"last": "Berry",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Gavin",
"middle": [
"W"
],
"last": "O'brien",
"suffix": ""
}
],
"year": 1995,
"venue": "Society for Industrial and Applied Mathematics Review",
"volume": "37",
"issue": "4",
"pages": "573--595",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael W. Berry, Susan T. Dumais, and Gavin W. O'Brien. 1995. Using linear algebra for intelligent in- formation retrieval. Society for Industrial and Applied Mathematics Review, 37(4):573-595.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Large scale singular value computations",
"authors": [
{
"first": "Michael",
"middle": [
"W"
],
"last": "Berry",
"suffix": ""
}
],
"year": 1992,
"venue": "International Journal of Supercomputer Applications",
"volume": "6",
"issue": "1",
"pages": "13--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael W. Berry. 1992. Large scale singular value com- putations. International Journal of Supercomputer Applications, 6(1):13-49.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Automatically acquiring phrase structure using distributional analysis",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Brill",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
}
],
"year": 1992,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at",
"volume": "",
"issue": "",
"pages": "155--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Brill and Mitchell Marcus. 1992. Automatically acquiring phrase structure using distributional analy- sis. In Speech and Natural Language: Proceedings of a Workshop Held at Harriman, New York, pages 155-160, Philadelphia, February 23-26. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Parser evaluation using a grammatical relation annotation scheme",
"authors": [
{
"first": "John",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "Guido",
"middle": [],
"last": "Minnen",
"suffix": ""
},
{
"first": "Ted",
"middle": [],
"last": "Briscoe",
"suffix": ""
}
],
"year": 2003,
"venue": "Treebanks: Building and Using Syntactically Annotated Corpora, chapter 17",
"volume": "",
"issue": "",
"pages": "299--316",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Carroll, Guido Minnen, and Ted Briscoe. 2003. Parser evaluation using a grammatical relation anno- tation scheme. In A. Abeill, editor, Treebanks: Build- ing and Using Syntactically Annotated Corpora, chap- ter 17, pages 299-316. Kluwer, Dordrecht.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Syntactic Structures",
"authors": [
{
"first": "Noam",
"middle": [],
"last": "Chomsky",
"suffix": ""
}
],
"year": 1957,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noam Chomsky. 1957. Syntactic Structures. Mouton, The Hague.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Towards better integration of semantic predictors in statistical language modeling",
"authors": [
{
"first": "Noah",
"middle": [],
"last": "Coccaro",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the International Conference on Spoken Language Processing",
"volume": "",
"issue": "",
"pages": "2403--2406",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Noah Coccaro and Daniel Jurafsky. 1998. Towards bet- ter integration of semantic predictors in statistical lan- guage modeling. In Proceedings of the International Conference on Spoken Language Processing, pages 2403-2406, Piscataway, NJ, 30th November-4th De- cember. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Head-Driven Statistical Models for Natural Language Parsing",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, Univer- sity of Pennsylvania.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "A fundamental algorithm for dependency parsing",
"authors": [
{
"first": "Michael",
"middle": [
"A"
],
"last": "Covington",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Association for Computing Machinery Southeast Conference",
"volume": "",
"issue": "",
"pages": "95--102",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael A. Covington. 2001. A fundamental algorithm for dependency parsing. In John A. Miller and Jef- fery W. Smith, editors, Proceedings of the 39th Annual Association for Computing Machinery Southeast Con- ference, pages 95-102, Athens, Georgia.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Lanczos Algorithms for Large Symmetric Eigenvalue Computations",
"authors": [
{
"first": "Jane",
"middle": [
"K"
],
"last": "Cullum",
"suffix": ""
},
{
"first": "Ralph",
"middle": [
"A"
],
"last": "Willoughby",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jane K. Cullum and Ralph A. Willoughby. 2002. Lanc- zos Algorithms for Large Symmetric Eigenvalue Com- putations, Volume 1: Theory. Society for Industrial and Applied Mathematics, Philadelphia.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "C",
"middle": [],
"last": "Scott",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Deerwester",
"suffix": ""
},
{
"first": "George",
"middle": [
"W"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "Richard",
"middle": [
"A"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society of Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott C. Deerwester, Susan T. Dumais, George W. Fur- nas, Thomas K. Landauer, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Jour- nal of the American Society of Information Science, 41(6):391-407.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Latent semantic information in maximum entropy language models for conversational speech recognition",
"authors": [
{
"first": "Yonggang",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Khudanpur",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "56--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonggang Deng and Sanjeev Khudanpur. 2003. La- tent semantic information in maximum entropy lan- guage models for conversational speech recognition. In Proceedings of Human Language Technology Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics, pages 56-63, Philadelphia, May 27-June 1. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Improving the retrieval of information from external sources",
"authors": [
{
"first": "Susan",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1991,
"venue": "Instruments and Computers",
"volume": "23",
"issue": "2",
"pages": "229--236",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Susan Dumais. 1991. Improving the retrieval of informa- tion from external sources. Behavior Research Meth- ods, Instruments and Computers, 23(2):229-236.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The measurement of textual coherence with latent semantic analysis",
"authors": [
{
"first": "W",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Walter",
"middle": [],
"last": "Foltz",
"suffix": ""
},
{
"first": "Thomas",
"middle": [
"K"
],
"last": "Kintsch",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Landauer",
"suffix": ""
}
],
"year": 1998,
"venue": "Discourse Processes",
"volume": "25",
"issue": "",
"pages": "285--308",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter W. Foltz, Walter Kintsch, and Thomas K. Lan- dauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Processes, 25(2&3):285-308.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "140--162",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10:140-162.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Zwicky on heads",
"authors": [
{
"first": "Richard",
"middle": [
"A"
],
"last": "Hudson",
"suffix": ""
}
],
"year": 1987,
"venue": "Journal of Linguistics",
"volume": "23",
"issue": "",
"pages": "109--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard A. Hudson. 1987. Zwicky on heads. Journal of Linguistics, 23:109-132.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A generative constituent-context model for improved grammar induction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2002. A genera- tive constituent-context model for improved grammar induction. In Proceedings of the 40th Annual Meet- ing of the Association for Computational Linguistics, pages 128-135, Philadelphia, July 7-12. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Corpusbased induction of syntactic structure: Models of dependency and constituency",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "478--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D. Manning. 2004. Corpus- based induction of syntactic structure: Models of de- pendency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computa- tional Linguistics, pages 478-485, Philadelphia, July 21-26. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "The Unsupervised Learning of Natural Language Structure",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein. 2005. The Unsupervised Learning of Natural Language Structure. Ph.D. thesis, Stanford Univer- sity.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A solution to plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge",
"authors": [
{
"first": "K",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dumais",
"suffix": ""
}
],
"year": 1997,
"venue": "Psychological Review",
"volume": "104",
"issue": "",
"pages": "211--240",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas K. Landauer and Susan T. Dumais. 1997. A so- lution to plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104:211-240.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Introduction to latent semantic analysis",
"authors": [
{
"first": ".",
"middle": [
"K"
],
"last": "Thomas",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"W"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "Darrell",
"middle": [],
"last": "Foltz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Laham",
"suffix": ""
}
],
"year": 1998,
"venue": "Discourse Processes",
"volume": "25",
"issue": "",
"pages": "259--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas. K. Landauer, Peter W. Foltz, and Darrell La- ham. 1998. Introduction to latent semantic analysis. Discourse Processes, 25(2&3):259-284.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Lanczos bidiagonalization with partial reorthogonalization",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rasmus",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Larsen",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rasmus M. Larsen. 1998. Lanczos bidiagonalization with partial reorthogonalization. Technical Report DAIMI PB-357, Department of Computer Science, Aarhus University.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Non-projective dependency parsing using spanning tree algorithms",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Kiril",
"middle": [],
"last": "Ribarov",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency pars- ing using spanning tree algorithms. In Proceedings of Human Language Technology Conference and Con- ference on Empirical Methods in Natural Language Processing, pages 523-530, Philadelphia, October 6- 8. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Grammatical bigrams",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Paskin",
"suffix": ""
}
],
"year": 2001,
"venue": "Advances in Neural Information Processing Systems",
"volume": "14",
"issue": "",
"pages": "91--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark A. Paskin. 2001. Grammatical bigrams. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Ad- vances in Neural Information Processing Systems 14, pages 91-97. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Testing the projectivity hypothesis",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Pericliev",
"suffix": ""
},
{
"first": "Ilarion",
"middle": [],
"last": "Ilarionov",
"suffix": ""
}
],
"year": 1986,
"venue": "Proceedings of the 11th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "56--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Pericliev and Ilarion Ilarionov. 1986. Testing the projectivity hypothesis. In Proceedings of the 11th International Conference on Computational Linguis- tics, pages 56-58, Morristown, NJ, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Distributional part-of-speech tagging",
"authors": [
{
"first": "Hinrich",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 7th European Association for Computational Linguistics Conference (EACL-95)",
"volume": "",
"issue": "",
"pages": "141--149",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hinrich Sch\u00fctze. 1995. Distributional part-of-speech tagging. In Proceedings of the 7th European As- sociation for Computational Linguistics Conference (EACL-95), pages 141-149, Philadelphia, March 27- 31. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Unsupervised learning of natural languages",
"authors": [
{
"first": "Zach",
"middle": [],
"last": "Solan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Horn",
"suffix": ""
},
{
"first": "Eytan",
"middle": [],
"last": "Ruppin",
"suffix": ""
},
{
"first": "Shimon",
"middle": [],
"last": "Edelman",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "102",
"issue": "",
"pages": "11629--11634",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zach Solan, David Horn, Eytan Ruppin, and Shimon Edelman. 2005. Unsupervised learning of natural lan- guages. Proceedings of the National Academy of Sci- ences, 102:11629-11634.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "ABL: Alignment-based learning",
"authors": [
{
"first": "M",
"middle": [],
"last": "Menno",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Van Zaanen",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 18th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "961--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Menno M. van Zaanen. 2000. ABL: Alignment-based learning. In Proceedings of the 18th International Conference on Computational Linguistics, pages 961- 967, Philadelphia, July 31-August 4. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Rules for syntax, vectors for semantics",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Wiemer",
"suffix": ""
},
{
"first": "-",
"middle": [],
"last": "Hastings",
"suffix": ""
},
{
"first": "Iraide",
"middle": [],
"last": "Zipitria",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 23rd Annual Conference of the Cognitive Science Society",
"volume": "",
"issue": "",
"pages": "1112--1117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Wiemer-Hastings and Iraide Zipitria. 2001. Rules for syntax, vectors for semantics. In Proceedings of the 23rd Annual Conference of the Cognitive Science Society, pages 1112-1117, Mahwah, NJ, August 1-4. Erlbaum.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Figure 1: A Dependency Graph",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Distance Between Dependents in WSJ10",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "Right Branching BaselineJohn likes string cheese ROOT",
"uris": null
}
}
}
}