{ "paper_id": "W09-0103", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T06:44:10.925543Z" }, "title": "How the statistical revolution changes (computational) linguistics", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "", "affiliation": { "laboratory": "", "institution": "Brown University", "location": {} }, "email": "johnson@brown.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "This paper discusses some of the ways that the \"statistical revolution\" has changed and continues to change the relationship between linguistics and computational linguistics. I claim that it is more useful in parsing to make an open world assumption about possible linguistic structures, rather than the closed world assumption usually made in grammar-based approaches to parsing, and I sketch two different ways in which grammar-based approaches might be modified to achieve this. I also describe some of the ways in which probabilistic models are starting to have a significant impact on psycholinguistics and language acquisition. In language acquisition Bayesian techniques may let us empirically evaluate the role of putative universals in universal grammar.", "pdf_parse": { "paper_id": "W09-0103", "_pdf_hash": "", "abstract": [ { "text": "This paper discusses some of the ways that the \"statistical revolution\" has changed and continues to change the relationship between linguistics and computational linguistics. I claim that it is more useful in parsing to make an open world assumption about possible linguistic structures, rather than the closed world assumption usually made in grammar-based approaches to parsing, and I sketch two different ways in which grammar-based approaches might be modified to achieve this. I also describe some of the ways in which probabilistic models are starting to have a significant impact on psycholinguistics and language acquisition. In language acquisition Bayesian techniques may let us empirically evaluate the role of putative universals in universal grammar.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The workshop organizers asked us to write something controversial to stimulate discussion, and I've attempted to do that here. Usually in my papers I try to stick to facts and claims that I can support, but here I have fearlessly and perhaps foolishly gone out on a limb and presented guesses, hunches and opinions. Take them with a grain of salt. Inspired by Wanamaker's well-known quote about advertising, I expect that half of the ideas I'm proposing here are wrong, but I don't know which half. I hope the conference will help me figure that out.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Statistical techniques have revolutionized many scientific fields in the past two decades, including computational linguistics. This paper discusses the impact of this on the relationship between computational linguistics and linguistics. I'm presenting a personal perspective rather than a scien-tific review here, and for this reason I focus on areas I have some experience with. 
I begin by discussing how the statistical perspective changed my understanding of the relationship between linguistic theory, grammars and parsing, and then go on to describe some of the ways that ideas from statistics and machine learning are starting to have an impact on linguistics today.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Before beginning, I'd like to say something about what I think computational linguistics is. I view computational linguistics as having both a scientific and an engineering side. The engineering side of computational linguistics, often called natural language processing (NLP), is largely concerned with building computational tools that do useful things with language, e.g., machine translation, summarization, question-answering, etc. Like any engineering discipline, natural language processing draws on a variety of different scientific disciplines.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "I think it's fair to say that in the current state of the art, natural language processing draws far more heavily on statistics and machine learning than it does on linguistic theory. For example, one might claim that all an NLP engineer really needs to understand about linguistic theory are (say) the parts of speech (POS). Assuming this is true (I'm not sure it is), would it indicate that there is something wrong with either linguistic theory or computational linguistics? I don't think it does: there's no reason to expect an engineering solution to utilize all the scientific knowledge of a related field. The fact that you can build perfectly good bridges with Newtonian mechanics says nothing about the truth of quantum mechanics.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "I also believe that there is a scientific field of computational linguistics. This scientific field exists not just because computers are incredibly useful for doing linguistics -I expect that computers have revolutionized most fields of science -but because it makes sense to think of linguistic processes as being essentially computational in nature. If we take computation to be the manipulation of symbols in a meaning-respecting way, then it seems reasonable to hypothesize that language comprehension, production and acquisition are all computational processes. Viewed this way, we might expect computational linguistics to interact most strongly with those areas of linguistics that study linguistic processing, namely psycholinguistics and language acquisition. As I explain in section 3 below, I think we are starting to see this happen.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In some ways the 1980s were a golden age for collaboration and cross-fertilization between linguistic theory and computational linguistics, especially between syntax and parsing. Gazdar and colleagues showed that Chomskyian transformations could be supplanted by computationally much simpler feature passing mechanisms (Gazdar et al., 1985) , and this led to an explosion of work on \"unification-based\" grammars (Shieber, 1986) , including the Lexical-Functional Grammars and Head-driven Phrase Structure Grammars that are still very actively pursued today. 
I'll call the work on parsing within this general framework the grammar-based approach in order to contrast it with the statistical approach that doesn't rely on these kinds of grammars. I think the statistical approach has come to dominate computational linguistics, and in this section I'll describe why this happened.", "cite_spans": [ { "start": 319, "end": 340, "text": "(Gazdar et al., 1985)", "ref_id": "BIBREF12" }, { "start": 413, "end": 428, "text": "(Shieber, 1986)", "ref_id": "BIBREF33" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Before beginning I think it's useful to clarify our goals for building parsers. There are many reasons why one might build any computational system -perhaps it's a part of a commercial product we hope will make us rich, or perhaps we want to test the predictions of a certain theory of processing -and these reasons should dictate how and even whether the system is constructed. I'm assuming in this section that we want to build parsers because we expect the representations they produce will be useful for various other NLP engineering tasks. This means that parser design is itself essentially an engineering task, i.e., we want a device that returns parses that are as accurate as possible for as many sentences as possible.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "I'll begin by discussing a couple of differences between the approaches that are often mentioned but I don't think are really that important. The grammar-based approaches are sometimes described as producing deeper representations that are closer to meaning. It certainly is true that grammar-based analyses typically represent predicate-argument structure and perhaps also quantifier scope. But one can recover predicate-argument structure using statistical methods (see the work on semantic role labeling and \"Prop-Bank\" parsing (Palmer et al., 2005) ), and presumably similar methods could be used to resolve quantifier scope as well.", "cite_spans": [ { "start": 531, "end": 552, "text": "(Palmer et al., 2005)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "I suspect the main reason why statistical parsing has concentrated on more superficial syntactic structure (such as phrase structure) is because there aren't many actual applications for the syntactic analyses our parsers return. Given the current state-of-the-art in knowledge representation and artificial intelligence, even if we could produce completely accurate logical forms in some higher-order logic, it's not clear whether we could do anything useful with them. It's hard to find real applications that benefit from even syntactic information, and the information any such applications actually use is often fairly superficial. For example, some research systems for named entity detection and extraction use parsing to identify noun phrases (which are potentially named entities) as well as the verbs that govern them, but they ignore the rest of the syntactic structure. In fact, many applications of statistical parsers simply use them as language models, i.e., one parses to obtain the probability that the parser assigns to the string and throws away the parses it computes in the process (Jelinek, 2004) . 
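To make the language-modeling use concrete (in schematic notation of my own, not tied to any particular parser): if $\mathcal{T}(w)$ is the set of parses the model assigns to a string $w$, the language model probability is just the marginal

$$P(w) \;=\; \sum_{t \in \mathcal{T}(w)} P(t, w),$$

which in practice is often approximated by summing over only the best few parses.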
(It seems that such parsing-based language models are good at preferring strings that are at least superficially grammatical, e.g., where each clause contains one verb phrase, which is useful in applications such as summarization and machine translation).", "cite_spans": [ { "start": 1102, "end": 1117, "text": "(Jelinek, 2004)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Grammar-based approaches are also often described as more linguistically based, while statistical approaches are viewed as less linguistically informed. I think this view primarily reflects the origins of the two approaches: the grammar-based approach arose from the collaboration between linguists and computer scientists in the 1980s mentioned earlier, while the statistical approach has its origins in engineering work in speech recognition in which linguists did not play a major role. I also think this view is basically false. In the grammar-based approaches lin-guists write the grammars while in statistical approaches linguists annotate the corpora with syntactic parses, so linguists play a central role in both. (It's an interesting question as to why corpus annotation plus statistical inference seems to be a more effective way of getting linguistic information into a computer than manually writing a grammar).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Rather, I think that computational linguists working on statistical parsing need a greater level of linguistic sensitivity at an informal level than those working on grammar-based approaches. In the grammar-based approaches all linguistic knowledge is contained in the grammar, which the computational linguist implementing the parsing framework doesn't actually have to understand. All she has to do is correctly implement an inference engine for grammars written in the relevant grammar formalism. By contrast, statistical parsers define the probability of a parse in terms of its (statistical) features or properties, and a parser designer needs to choose which features their parser will use, and many of these features reflect at least an intuitive understanding of linguistic dependencies. For example, statistical parsers from Magerman (1995) on use features based on head-dependent relationships. (The parsers developed by the Berkeley group are a notable exception (Petrov and Klein, 2007) ). While it's true that only a small fraction of our knowledge about linguistic structure winds up expressed by features in modern statistical parsers, as discussed above there's no reason to expect all of our scientific knowledge to be relevant to any engineering problem. And while many of the features used in statistical parsers don't correspond to linguistic constraints, nobody seriously claims that humans understand language only using linguistic constraints of the kind expressed in formal grammars. 
I suspect that many of the features that have been shown to be useful in statistical parsing encode psycholinguistic markedness preferences (e.g., attachment preferences) and at least some aspects of world knowledge (e.g., that the direct object of \"eat\" is likely to be a food).", "cite_spans": [ { "start": 834, "end": 849, "text": "Magerman (1995)", "ref_id": "BIBREF24" }, { "start": 974, "end": 998, "text": "(Petrov and Klein, 2007)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Moreover, it's not necessary for a statistical model to exactly replicate a linguistic constraint in order for it to effectively capture the corresponding generalization: all that's necessary is that the statistical features \"cover\" the relevant examples. For example, adding a subject-verb agreement feature to the Charniak-Johnson parser (Charniak and Johnson, 2005) has no measurable effect on parsing accuracy. After doing this experiment I realized this shouldn't be surprising: the Charniak parser already conditions each argument's part-of-speech (POS) on its governor's POS, and since POS tags distinguish singular and plural nouns and verbs, these general head-argument POS features capture most cases of subject-verb agreement.", "cite_spans": [ { "start": 341, "end": 369, "text": "(Charniak and Johnson, 2005)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Note that I'm not claiming that subject-verb agreement isn't a real linguistic constraint or that it doesn't play an important role in human parsing. I think that the type of input (e.g., treebanks) and the kinds of abilities (e.g., to exactly count the occurrences of many different constructions) available to our machines may be so different to what is available to a child that the features that work best in our parsers need not bear much relationship to those used by humans.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Still, I view the design of the features used in statistical parsers as a fundamentally linguistic issue (albeit one with computational consequences, since the search problem in parsing is largely determined by the features involved), and I expect there is still more to learn about which combinations of features are most useful for statistical parsing. My guess is that the features used in e.g., the Collins (2003) or Charniak (2000) parsers are probably close to optimal for English Penn Treebank parsing (Marcus et al., 1993) , but that other features might improve parsing of other languages or even other English genres. Unfortunately changing the features used in these parsers typically involves significant reprogramming, which makes it difficult for linguists to experiment with new features. 
However, it might be possible to develop a kind of statistical parsing framework that makes it possible to define new features and integrate them into a statistical parser without any programming, which would make it easy to explore novel combinations of statistical features; see Goodman (1998) for an interesting suggestion along these lines.", "cite_spans": [ { "start": 403, "end": 417, "text": "Collins (2003)", "ref_id": "BIBREF9" }, { "start": 421, "end": 436, "text": "Charniak (2000)", "ref_id": "BIBREF6" }, { "start": 509, "end": 530, "text": "(Marcus et al., 1993)", "ref_id": "BIBREF25" }, { "start": 1084, "end": 1098, "text": "Goodman (1998)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "From a high-level perspective, the grammar-based approaches and the statistical approaches both view parsing fundamentally in the same way, namely as a specialized kind of inference problem. These days I view \"parsing as deduction\" (one of the slogans touted by the grammar-based crowd) as unnecessarily restrictive; after all, psycholinguistic research shows that humans are exquisitely sensitive to distributional information, so why shouldn't we let our parsers use that information as well? And as Abney (1997) showed, it is mathematically straightforward to define probability distributions over the representations used by virtually any theory of grammar (even those of Chomsky's Minimalism), which means that theoretically the arsenal of statistical methods for parsing and learning can be applied to any grammar just as well.", "cite_spans": [ { "start": 501, "end": 513, "text": "Abney (1997)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "In the late 1990s I explored these kinds of statistical models for Lexical-Functional Grammar (Bresnan, 1982; Johnson et al., 1999) . The hope was that statistical features based on LFG's richer representations (specifically, f-structures) might result in better parsing accuracy. However, this seems not to be the case. As mentioned above, Abney's formulation of probabilistic models makes essentially no demands on what linguistic representations actually are; all that is required is that the statistical features are functions that map each representation to a real number. These are used to map a set of linguistic representations (say, the set of all grammatical analyses) to a set of vectors of real numbers. Then by defining a distribution over these sets of real-valued vectors we implicitly define a distribution over the corresponding linguistic representations.", "cite_spans": [ { "start": 94, "end": 109, "text": "(Bresnan, 1982;", "ref_id": "BIBREF4" }, { "start": 110, "end": 131, "text": "Johnson et al., 1999)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "This means that as far as the probabilistic model is concerned the details of the linguistic representations don't actually matter, so long as there are the right number of them and it is possible to compute the necessary real-valued vectors from them. 
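Concretely, and in schematic notation of my own: writing $f(\omega) = (f_1(\omega), \ldots, f_m(\omega))$ for the vector of real-valued feature values of an analysis $\omega$, an Abney-style model takes the log-linear form

$$P(\omega) \;=\; \frac{1}{Z} \exp\Big(\sum_{j=1}^{m} \theta_j f_j(\omega)\Big),$$

where the weights $\theta_j$ are estimated from training data and $Z$ is a normalizing constant; nothing in this definition depends on what $\omega$ itself looks like.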
For a computational linguist this is actually quite a liberating point of view; we aren't restricted to slavishly reproducing textbook linguistic structures, but are free to experiment with alternative representations that might have computational or other advantages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "In my case, it turned out that the kinds of features that were most useful for stochastic LFG parsing could in fact be directly computed from phrase-structure trees. The features that involved f -structure properties could be covered by other features defined directly on the phrase-structure trees. (Some of these phrase-structure features were implemented by rather nasty C++ routines but that doesn't matter; Abney-type models make no assumptions about what the feature functions are). This meant that I didn't actually need the f -structures to define the probability distributions I was interested in; all I needed were the corresponding c-structure or phrase-structure trees.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "And of course there are many ways of obtaining phrase-structure trees. At the time my colleague Eugene Charniak was developing a statistical phrase-structure parser that was more robust and had broader coverage than the LFG parser I was working with, and I found I generally got better performance if I used the trees his parser produced, so that's what I did. This leads to the discriminative re-ranking approach developed by Collins and Koo (2005) , in which a statistical parser trained on a treebank is used to produce a set of candidate parses which are then \"re-ranked\" by an Abney-style probabilistic model. I suspect these robustness and coverage problems of grammar-based parsing are symptoms of a fundamental problem in the standard way that grammar-based parsing is understood. First, I think grammar-based approaches face a dilemma: on the one hand the explosion of ambiguity suggests that some sentences get too many parses, while the problems of coverage show that some sentences get too few, i.e., zero, parses. While it's possible that there is a single grammar that can resolve this dilemma, my point here is that each of these problems suggests we need to modify the grammars in exactly the opposite way, i.e., generally tighten the constraints in order to reduce ambiguity, while generally relax the constraints in order to allow more parses for sentences that have no parses at all. Second, I think this dilemma only arises because the grammar-based approach to parsing is fundamentally designed around the goal of distinguishing grammatical from ungrammatical sentences. While I agree with Pullum (2007) that grammaticality is and should be central to syntactic theory, I suspect it is not helpful to view parsing (by machines or humans) as a byproduct of proving the grammaticality of a sentence. In most of the applications I can imagine, what we really want from a parser is the parse that reflects its best guess at the intended interpretation of the input, even if that input is ungrammatical. 
For example, given the telegraphese input \"man bites dog\" we want the parser to tell us that \"man\" is likely to be the agent of \"bites\" and \"dog\" the patient, and not simply that the sentence is ungrammatical.", "cite_spans": [ { "start": 427, "end": 449, "text": "Collins and Koo (2005)", "ref_id": "BIBREF8" }, { "start": 1611, "end": 1624, "text": "Pullum (2007)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "These grammars typically distinguish grammatical from ungrammatical analyses by explicitly characterizing the set of grammatical analyses in some way, and then assuming that all other analyses are ungrammatical. Borrowing terminology from logic programming (Lloyd, 1987) we might call this a closed-world assumption: any analysis the grammar does not generate is assumed to be ungrammatical.", "cite_spans": [ { "start": 257, "end": 270, "text": "(Lloyd, 1987)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Interestingly, I think that the probabilistic models used in statistical parsing generally make an open-world assumption about linguistic analyses. These probabilistic models prefer certain linguistic structures over others, but the smoothing mechanisms that these methods use ensure that every possible analysis (and hence every possible string) receives positive probability. In such an approach the statistical features identify properties of syntactic analyses which make the analysis more or less likely, so the probabilistic model can prefer, disprefer or simply be ambivalent about any particular linguistic feature or construction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "I think an open-world assumption is generally preferable as a model of syntactic parsing in both humans and machines. I think it's not reasonable to assume that the parser knows all the lexical entries and syntactic constructions of the language it is parsing. Even if the parser encounters a word or construction it doesn't understand, that shouldn't stop it from interpreting the rest of the sentence. Statistical parsers are considerably more open-world. For example, unknown words don't present any fundamental problem for statistical parsers; in the absence of specific lexical information about a word they automatically back off to generic information about words in general.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Does the closed-world assumption inherent in the standard approach to grammar-based parsing mean we have to abandon it? I don't think so; I can imagine at least two ways in which the conventional grammar-based approach might be modified to obtain an open-world parsing model. One possible approach keeps the standard closed-world conception that grammars generate only grammatical analyses, but gives up the idea that parsing is a byproduct of determining the grammaticality of the input sentence. Instead, we might use a noisy channel to map grammatical analyses generated by the grammar to the actual input sentences we have to parse. Parsing involves recovering the grammatical source or underlying sentence as well as its structure. 
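Schematically (again in notation of my own): if $x$ is the observed input, such a parser would search for the underlying sentence $u$ and its analysis $t$ that maximize $P(t, u)\,P(x \mid u)$, where the first factor is the grammar-based source model and the second is the channel model.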
Presumably the channel model would be designed to prefer min-imal distortion, so if the input to be parsed is in fact grammatical then the channel would prefer the identity transformation, while if the input is ungrammatical the channel model would map it to close grammatical sentences. For example, if such a parser were given the input \"man bites dog\" it might decide that the most probable underlying sentence is \"a man bites a dog\" and return a parse for that sentence. Such an approach might be regarded as a way of formalizing the idea that ungrammatical sentences are interpreted by analogy with grammatical ones. (Charniak and I proposed a noisy channel model along these lines for parsing transcribed speech (Johnson and Charniak, 2004) ).", "cite_spans": [ { "start": 1455, "end": 1483, "text": "(Johnson and Charniak, 2004)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Another possible approach involves modifying our interpretation of the grammar itself. We could obtain an open world model by relaxing our interpretation of some or all of the constraints in the grammar. Instead of viewing them as hard constraints that define a set of grammatical constructions, we reinterpret them as violable, probabilistic features. For example, instead of interpreting subject-verb agreement as a hard constraint that rules out certain syntactic analyses, we reinterpret it as a soft constraint that penalizes analyses in which subject-verb agreement fails. Instead of assuming that each verb comes with a fixed set of subcategorization requirements, we might view subcategorization as preferences for certain kinds of complements, implemented by features in an Abney-style statistical model. Unknown words come with no subcategorization preferences of their own, so they would inherit the prior or default preferences. Formally, I think this is fairly easy to achieve: we replace the hard unification constraints (e.g., that the subject's number feature equals the verb's number feature) with a stochastic feature that fires whenever the subject's number feature differs from the verb's number feature, and rely on the statistical model training procedure to estimate that feature's weight.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "Computationally, I suspect that either of these options (or any other option that makes the grammar-based approaches open world) will require a major rethinking of the parsing process. Notice that both approaches let ambiguity proliferate (ambiguity is our friend in the fight against poor coverage), so we would need parsing algorithms capable of handling massive ambiguity. This is true of most statistical parsing models, so it is possible that the same approaches that have proven successful in statistical parsing (e.g., using probabilities to guide search, dynamic programming, coarse-to-fine) will be useful here as well.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Grammar-based and statistical parsing", "sec_num": "2" }, { "text": "The previous section focused on syntactic parsing, which is an area in which there's been a fruitful interaction between linguistic theory and computational linguistics over a period of several decades. 
In this section I want to discuss two other emerging areas in which I expect the interaction between linguistics and computational linguistics to become increasingly important: psycholinguistics and language acquisition. I think it's no accident that these areas both study processing (rather than an area of theoretical linguistics such as syntax or semantics), since I believe that the scientific side of computational linguistics is fundamentally about such linguistic processes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "Just to be clear: psycholinguistics and language acquisition are experimental disciplines, and I don't expect the average researcher in those fields to start doing computational linguistics any time soon. However, I do think there is an emerging cadre of young researchers in both fields applying ideas and results from computational linguistics in their work and using experimental results from their field to develop and improve the computational models. For example, in psycholinguistics researchers such as Hale (2006) and Levy (2008) are using probabilistic models of syntactic structure to make predictions about human sentence processing, and Bachrach (2008) is using predictions from the Roark (2001) parser to help explain the patterns of fMRI activation observed during sentence comprehension. In the field of language acquisition computational linguists such as Klein and Manning (2004) have studied the unsupervised acquisition of syntactic structure, while linguists such as Boersma and Hayes (2001) , Goldsmith (2001) , Pater (2008) and Albright and Hayes (2003) are developing probabilistic models of the acquisition of phonology and/or morphology, and Frank et al. (2007) experimentally test the predictions of a Bayesian model of lexical acquisition. Since I have more experience with computational models of language acquisition, I'll concentrate on this topic for the rest of this section.", "cite_spans": [ { "start": 512, "end": 523, "text": "Hale (2006)", "ref_id": "BIBREF16" }, { "start": 528, "end": 539, "text": "Levy (2008)", "ref_id": "BIBREF22" }, { "start": 651, "end": 666, "text": "Bachrach (2008)", "ref_id": "BIBREF2" }, { "start": 697, "end": 709, "text": "Roark (2001)", "ref_id": "BIBREF32" }, { "start": 874, "end": 898, "text": "Klein and Manning (2004)", "ref_id": "BIBREF21" }, { "start": 989, "end": 1013, "text": "Boersma and Hayes (2001)", "ref_id": "BIBREF3" }, { "start": 1016, "end": 1032, "text": "Goldsmith (2001)", "ref_id": "BIBREF13" }, { "start": 1035, "end": 1047, "text": "Pater (2008)", "ref_id": "BIBREF28" }, { "start": 1052, "end": 1077, "text": "Albright and Hayes (2003)", "ref_id": "BIBREF1" }, { "start": 1169, "end": 1188, "text": "Frank et al. (2007)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "Much of this work can be viewed under the slogan \"structured statistical learning\". That is, specifying the structures over which the learning algorithm generalizes is just as important as specifying the learning algorithm itself. One of the things I like about this work is that it gets beyond the naive nature-versus-nurture arguments that characterize some of the earlier theoretical work on language acquisition. Instead, these computational models become tools for investigating the effect of specific structural assumptions on the acquisition process. 
For example, Goldwater et al. (2007) show that modeling inter-word dependencies improves word segmentation, which shows that the linguistic context contains information that is potentially very useful for lexical acquisition.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "I think it's no accident that much of the computational work is concerned with phonology and morphology. These fields seem to be closer to the data and the structures involved seem simpler than in, say, syntax and semantics. I suspect that linguists working in phonology and morphology find it easier to understand and accept probabilistic models in large part because of Smolensky's work on Optimality Theory (Smolensky and Legendre, 2005) . Smolensky found a way of introducing optimization into linguistic theory in a way that linguists could understand, and this serves as a very important bridge for them to probabilistic models.", "cite_spans": [ { "start": 410, "end": 440, "text": "(Smolensky and Legendre, 2005)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "As I argued above, it's important with any computational modeling to be clear about exactly what our computational models are intended to achieve. Perhaps the most straightforward goal for computational models of language acquisition is to view them as specifying the actual computations that a human performs when learning a language. Under this conception we expect the computational model to describe the learning trajectory of language acquisition, e.g., if it takes the algorithm more iterations to learn one word than another, then we would expect humans to take longer to learn that word as well. Much of the work in computational phonology seems to take this perspective (Boersma and Hayes, 2001 ).", "cite_spans": [ { "start": 674, "end": 698, "text": "(Boersma and Hayes, 2001", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "Alternatively, we might view our probabilistic models (rather than the computational procedures that implement them) as embodying the scientific claims we want to make. Because these probabilistic models are too complex to analyze analytically in general, we need a computational procedure to compute the model's predictions, but the computational procedure itself is not claimed to have any psychological reality. For example, we might claim that the grammar a child will learn is the one that is optimal with respect to a certain probabilistic model. We need an algorithm for computing this optimal grammar so we can check the probabilistic model's predictions and convince ourselves we're not expecting the learner to perform magic, but we might not want to claim that humans use this algorithm. To use terminology from the grammar-based approaches mentioned earlier, a probabilistic model is a declarative specification of the distribution of certain variables, but it says nothing about how this distribution might actually be calculated. 
I think Marr's \"three levels\" capture this difference nicely: the question is whether we take our models to be \"algorithmic level\" or \"computational level\" descriptions of cognitive processes (Marr, 1982) .", "cite_spans": [ { "start": 1241, "end": 1253, "text": "(Marr, 1982)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "Looking into the future, I'm very excited about Bayesian approaches to language acquisition, as I think they have the potential to let us finally examine deep questions about language acquisition in a quantitative way. The Bayesian approach factors learning problems into two pieces: the likelihood and the prior. The likelihood encodes the information obtained from the data, while the prior encodes the information possessed by the learner before learning commences (Pearl, 1988) . In principle the prior can encode virtually any information, including information claimed to be part of universal grammar.", "cite_spans": [ { "start": 468, "end": 481, "text": "(Pearl, 1988)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "Bayesian priors can incorporate the properties linguists often take to be part of universal grammar, such as X \u2032 theory. A Bayesian prior can also express soft markedness preferences as well as hard constraints. Moreover, the prior can also incorporate preferences that are not specifically linguistic, such as a preference for shorter grammars or smaller lexicons, i.e., the kinds of preferences sometimes expressed by an evaluation metric (Chomsky, 1965) .", "cite_spans": [ { "start": 441, "end": 456, "text": "(Chomsky, 1965)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "The Bayesian framework therefore provides us with a tool to quantitatively evaluate the impact of different purported linguistic universals on language acquisition. For example, we can calculate the contribution of, say, hypothetical X \u2032 theory universals to the acquisition of syntax. The Bayesian framework is flexible enough to also permit us to evaluate the contribution of the nonlinguistic context to learning (Frank et al., to appear) . Finally, non-parametric Bayesian methods permit us to learn models with an unbounded number of features, perhaps giving us the mathematical and computational tools to understand the induction of rules and complex structure.", "cite_spans": [ { "start": 416, "end": 441, "text": "(Frank et al., to appear)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "Of course doing this requires developing actual Bayesian models of language, and this is not easy. Even though this research is still just beginning, it's clear that the details of the models have a huge impact on how well they work. It's not enough to \"assume some version of X \u2032 theory\"; one needs to evaluate specific proposals. Still, my hope is that being able to evaluate the contributions of specific putative universals may help us measure and understand their contributions (if any) to the learning process.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical models and linguistics", "sec_num": "3" }, { "text": "In this paper I focused on two areas of interaction between computational linguistics and linguistic theory. 
In the area of parsing I argued that we should design parsers so they incorporate an openworld assumption about sentences and their linguistic structures and sketched two ways in which grammar-based approaches might be modified to make them do this; both of which involve abandoning the idea that parsing is solely a process of proving the grammaticality of the input.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Then I discussed how probabilistic models are being applied in the fields of sentence processing and language acquisition. Here I believe we're at the beginning of a very fruitful period of interaction between empirical research and computational modeling, with insights and results flowing both ways.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "But what does all this mean for mainstream computational linguistics? Can we expect theoretical linguistics to play a larger role in computational linguistics in the near future? If by computational linguistics we mean the NLP engineering applications that typically receive the bulk of the attention at today's Computational Linguistics conferences, I'm not so sure. While it's reasonable to expect that better scientific theories of how humans understand language will help us build better computational systems that do the same, I think we should remember that our machines can do things that no human can (e.g., count all the 5-grams in terabytes of data), and so our engineering solutions may differ considerably from the algorithms and procedures used by humans. But I think it's also reasonable to hope that the interdisciplinary work involving statistics, computational models, psycholinguistics and language acquisition that I mentioned in the paper will produce new insights into how language is acquired and used.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" } ], "back_matter": [ { "text": "I'd like to thank Eugene Charniak and Antske Fokkens for stimulating discussion and helpful comments on an earlier draft. Of course all opinions expressed here are my own.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Stochastic Attribute-Value Grammars", "authors": [ { "first": "Steven", "middle": [], "last": "Abney", "suffix": "" } ], "year": 1997, "venue": "Computational Linguistics", "volume": "23", "issue": "4", "pages": "597--617", "other_ids": {}, "num": null, "urls": [], "raw_text": "Steven Abney. 1997. Stochastic Attribute-Value Grammars. Computational Linguistics, 23(4):597-617.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Rules vs. analogy in English past tenses: a computational/experimental study", "authors": [ { "first": "A", "middle": [], "last": "Albright", "suffix": "" }, { "first": "B", "middle": [], "last": "Hayes", "suffix": "" } ], "year": 2003, "venue": "Cognition", "volume": "90", "issue": "", "pages": "118--161", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Albright and B. Hayes. 2003. Rules vs. analogy in English past tenses: a computational/experimental study. 
Cognition, 90:118-161.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Imaging Neural Correlates of Syntactic Complexity in a Naturalistic Context", "authors": [ { "first": "Asaf", "middle": [], "last": "Bachrach", "suffix": "" } ], "year": 2008, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asaf Bachrach. 2008. Imaging Neural Correlates of Syn- tactic Complexity in a Naturalistic Context. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, Mas- sachusetts.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Empirical tests of the gradual learning algorithm", "authors": [ { "first": "P", "middle": [], "last": "Boersma", "suffix": "" }, { "first": "B", "middle": [], "last": "Hayes", "suffix": "" } ], "year": 2001, "venue": "Linguistic Inquiry", "volume": "32", "issue": "1", "pages": "45--86", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Boersma and B. Hayes. 2001. Empirical tests of the grad- ual learning algorithm. Linguistic Inquiry, 32(1):45-86.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The Mental Representation of Grammatical Relations", "authors": [ { "first": "Joan", "middle": [], "last": "Bresnan", "suffix": "" } ], "year": 1982, "venue": "", "volume": "", "issue": "", "pages": "282--390", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joan Bresnan. 1982. Control and complementation. In Joan Bresnan, editor, The Mental Representation of Grammati- cal Relations, pages 282-390. The MIT Press, Cambridge, Massachusetts.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Coarse-to-fine n-best parsing and MaxEnt discriminative reranking", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Associa- tion for Computational Linguistics, pages 173-180, Ann Arbor, Michigan, June. Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A maximum-entropy-inspired parser", "authors": [ { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2000, "venue": "The Proceedings of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "132--139", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eugene Charniak. 2000. A maximum-entropy-inspired parser. In The Proceedings of the North American Chapter of the Association for Computational Linguistics, pages 132-139.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Aspects of the Theory of Syntax", "authors": [ { "first": "Noam", "middle": [], "last": "Chomsky", "suffix": "" } ], "year": 1965, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Noam Chomsky. 1965. Aspects of the Theory of Syntax. 
The MIT Press, Cambridge, Massachusetts.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Discriminative reranking for natural language parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Terry", "middle": [], "last": "Koo", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "25--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25-70.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Head-driven statistical models for natural language parsing", "authors": [ { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2003, "venue": "Computational Linguistics", "volume": "29", "issue": "4", "pages": "589--638", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Linguistics, 29(4):589-638.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Modeling human performance on statistical word segmentation tasks", "authors": [ { "first": "Michael", "middle": [ "C" ], "last": "Frank", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Vikash", "middle": [], "last": "Mansinghka", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Griffiths", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 29th Annual Meeting of the Cognitive Science Society", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael C. Frank, Sharon Goldwater, Vikash Mansinghka, Tom Griffiths, and Joshua Tenenbaum. 2007. Model- ing human performance on statistical word segmentation tasks. In Proceedings of the 29th Annual Meeting of the Cognitive Science Society.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Using speakers referential intentions to model early cross-situational word learning", "authors": [ { "first": "Michael", "middle": [ "C" ], "last": "Frank", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Goodman", "suffix": "" }, { "first": "Joshua", "middle": [], "last": "Tenenbaum", "suffix": "" } ], "year": null, "venue": "Psychological Science", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michael C. Frank, Noah Goodman, and Joshua Tenenbaum. to appear. Using speakers referential intentions to model early cross-situational word learning. Psychological Sci- ence.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Generalized Phrase Structure Grammar", "authors": [ { "first": "Gerald", "middle": [], "last": "Gazdar", "suffix": "" }, { "first": "Ewan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Pullum", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Sag", "suffix": "" } ], "year": 1985, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gerald Gazdar, Ewan Klein, Geoffrey Pullum, and Ivan Sag. 1985. Generalized Phrase Structure Grammar. 
Basil Blackwell, Oxford.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Unsupervised learning of the morphology of a natural language", "authors": [ { "first": "J", "middle": [], "last": "Goldsmith", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "", "pages": "153--198", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Goldsmith. 2001. Unsupervised learning of the morphol- ogy of a natural language. Computational Linguistics, 27:153-198.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Distributional cues to word boundaries: Context is important", "authors": [ { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 31st Annual Boston University Conference on Language Development", "volume": "", "issue": "", "pages": "239--250", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2007. Distributional cues to word boundaries: Context is important. In David Bamman, Tatiana Magnitskaia, and Colleen Zaller, editors, Proceedings of the 31st Annual Boston University Conference on Language Development, pages 239-250, Somerville, MA. Cascadilla Press.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Parsing inside-out", "authors": [ { "first": "J", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Goodman. 1998. Parsing inside-out. Ph.D. thesis, Harvard University. available from http://research.microsoft.com/\u02dcjoshuago/.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Uncertainty about the rest of the sentence", "authors": [ { "first": "John", "middle": [], "last": "Hale", "suffix": "" } ], "year": 2006, "venue": "Cognitive Science", "volume": "30", "issue": "", "pages": "643--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "John Hale. 2006. Uncertainty about the rest of the sentence. Cognitive Science, 30:643-672.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Stochastic analysis of structured language modeling", "authors": [ { "first": "Fred", "middle": [], "last": "Jelinek", "suffix": "" } ], "year": 2004, "venue": "Mathematical Foundations of Speech and Language Processing", "volume": "", "issue": "", "pages": "37--72", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fred Jelinek. 2004. Stochastic analysis of structured lan- guage modeling. In Mark Johnson, Sanjeev P. Khudan- pur, Mari Ostendorf, and Roni Rosenfeld, editors, Mathe- matical Foundations of Speech and Language Processing, pages 37-72. Springer, New York.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A TAG-based noisy channel model of speech repairs", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Charniak", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "33--39", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson and Eugene Charniak. 2004. A TAG-based noisy channel model of speech repairs. 
In Proceedings of the 42nd Annual Meeting of the Association for Computa- tional Linguistics, pages 33-39.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Estimators for stochastic \"unification-based\" grammars", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Stuart", "middle": [], "last": "Geman", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Canon", "suffix": "" }, { "first": "Zhiyi", "middle": [], "last": "Chi", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Riezler", "suffix": "" } ], "year": 1999, "venue": "The Proceedings of the 37th Annual Conference of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "535--541", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Stuart Geman, Stephen Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic \"unification-based\" grammars. In The Proceedings of the 37th Annual Conference of the Association for Computa- tional Linguistics, pages 535-541, San Francisco. Morgan Kaufmann.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Adaptor Grammars: A framework for specifying compositional nonparametric Bayesian models", "authors": [ { "first": "Mark", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Thomas", "middle": [ "L" ], "last": "Griffiths", "suffix": "" }, { "first": "Sharon", "middle": [], "last": "Goldwater", "suffix": "" } ], "year": 2007, "venue": "Advances in Neural Information Processing Systems 19", "volume": "", "issue": "", "pages": "641--648", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark Johnson, Thomas L. Griffiths, and Sharon Goldwa- ter. 2007. Adaptor Grammars: A framework for speci- fying compositional nonparametric Bayesian models. In B. Sch\u00f6lkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 641- 648. MIT Press, Cambridge, MA.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Corpus-based induction of syntactic structure: Models of dependency and constituency", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "478--485", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Chris Manning. 2004. Corpus-based in- duction of syntactic structure: Models of dependency and constituency. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 478-485.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Expectation-based syntactic comprehension", "authors": [ { "first": "Roger", "middle": [], "last": "Levy", "suffix": "" } ], "year": 2008, "venue": "Cognition", "volume": "106", "issue": "", "pages": "1126--1177", "other_ids": {}, "num": null, "urls": [], "raw_text": "Roger Levy. 2008. Expectation-based syntactic comprehen- sion. Cognition, 106:1126-1177.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Foundations of Logic Programming", "authors": [ { "first": "John", "middle": [ "W" ], "last": "Lloyd", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "John W. Lloyd. 1987. Foundations of Logic Programming. 
Springer, Berlin, 2 edition.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Statistical decision-tree models for parsing", "authors": [ { "first": "David", "middle": [ "M" ], "last": "Magerman", "suffix": "" } ], "year": 1995, "venue": "The Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "276--283", "other_ids": {}, "num": null, "urls": [], "raw_text": "David M. Magerman. 1995. Statistical decision-tree mod- els for parsing. In The Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283, San Francisco. The Association for Com- putational Linguistics, Morgan Kaufman.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Building a large annotated corpus of English: The Penn Treebank", "authors": [ { "first": "Michell", "middle": [ "P" ], "last": "Marcus", "suffix": "" }, { "first": "Beatrice", "middle": [], "last": "Santorini", "suffix": "" }, { "first": "Mary", "middle": [ "Ann" ], "last": "Marcinkiewicz", "suffix": "" } ], "year": 1993, "venue": "Computational Linguistics", "volume": "19", "issue": "2", "pages": "313--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguis- tics, 19(2):313-330.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "The Proposition Bank: An annotated corpus of semantic roles", "authors": [ { "first": "Matha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gildea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Kingsbury", "suffix": "" } ], "year": 2005, "venue": "Computational Linguistics", "volume": "31", "issue": "1", "pages": "71--106", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Gradual learning and convergence. Linguistic Inquiry", "authors": [ { "first": "Joe", "middle": [], "last": "Pater", "suffix": "" } ], "year": 2008, "venue": "", "volume": "30", "issue": "", "pages": "334--345", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joe Pater. 2008. Gradual learning and convergence. Linguis- tic Inquiry, 30(2):334-345.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Probabalistic Reasoning in Intelligent Systems: Networks of Plausible Inference", "authors": [ { "first": "Judea", "middle": [], "last": "Pearl", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Judea Pearl. 1988. Probabalistic Reasoning in Intelligent Systems: Networks of Plausible Inference. 
Morgan Kauf- mann, San Mateo, California.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Improved inference for unlexicalized parsing", "authors": [ { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2007, "venue": "Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceed- ings of the Main Conference, pages 404-411, Rochester, New York, April. Association for Computational Linguis- tics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Ungrammaticality, rarity, and corpus use", "authors": [ { "first": "Geoffrey", "middle": [ "K" ], "last": "Pullum", "suffix": "" } ], "year": 2007, "venue": "Corpus Linguistics and Linguistic Theory", "volume": "3", "issue": "", "pages": "33--47", "other_ids": {}, "num": null, "urls": [], "raw_text": "Geoffrey K. Pullum. 2007. Ungrammaticality, rarity, and corpus use. Corpus Linguistics and Linguistic Theory, 3:33-47.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Probabilistic top-down parsing and language modeling", "authors": [ { "first": "Brian", "middle": [], "last": "Roark", "suffix": "" } ], "year": 2001, "venue": "Computational Linguistics", "volume": "27", "issue": "2", "pages": "249--276", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brian Roark. 2001. Probabilistic top-down parsing and lan- guage modeling. Computational Linguistics, 27(2):249- 276.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "An Introduction to Unificationbased Approaches to Grammar. CSLI Lecture Notes Series", "authors": [ { "first": "M", "middle": [], "last": "Stuart", "suffix": "" }, { "first": "", "middle": [], "last": "Shieber", "suffix": "" } ], "year": 1986, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stuart M. Shieber. 1986. An Introduction to Unification- based Approaches to Grammar. CSLI Lecture Notes Se- ries. Chicago University Press, Chicago.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "The Harmonic Mind: From Neural Computation To Optimality-Theoretic Grammar", "authors": [ { "first": "Paul", "middle": [], "last": "Smolensky", "suffix": "" }, { "first": "G\u00e9raldine", "middle": [], "last": "Legendre", "suffix": "" } ], "year": 2005, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Paul Smolensky and G\u00e9raldine Legendre. 2005. The Har- monic Mind: From Neural Computation To Optimality- Theoretic Grammar. The MIT Press.", "links": null } }, "ref_entries": {} } }