{
    "paper_id": "W98-0131",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T06:05:49.711304Z"
    },
    "title": "Automatie Extraction of Stochastic Lexicalized Tree Grammars from Treebanks",
    "authors": [
        {
            "first": "G\u00fcnter",
            "middle": [],
            "last": "Neumann",
            "suffix": "",
            "affiliation": {},
            "email": "[email protected]"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We present a method for the extraction of stochastic lexicalized tree grammars (S-LTG) of different complexities from existing treebanks, which allows us to analyze the relationship of a grammar automatically induced from a treebank wrt. its size, its complexity, and its predictive power on unseen data. Processing of different S-LTG is performed by a stochastic version of the two-step Early-based parsing strategy introduced in (Schabes and Joshi, 1991).",
    "pdf_parse": {
        "paper_id": "W98-0131",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We present a method for the extraction of stochastic lexicalized tree grammars (S-LTG) of different complexities from existing treebanks, which allows us to analyze the relationship of a grammar automatically induced from a treebank wrt. its size, its complexity, and its predictive power on unseen data. Processing of different S-LTG is performed by a stochastic version of the two-step Early-based parsing strategy introduced in (Schabes and Joshi, 1991).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "In this paper we present a method for the extraction of stochastic lexicalized tree grammars (S-LTG) of different complexities from existing treebanks, which allows us to analyze the relationship of a grammar automatically induced from a treebank wrt . its size, its complexity, and its predictive power on unseen data. The use of S-LTGs is motivated for two reasons. First, it is assumed that S-LTG better capture distributional and hierarchical information than stochastic CFG (cf. (Schabes, 1992; Schabes and \\Vaters, 1996) ), and second, they allow the factorization of recursion of different kinds, viz. extraction of left, right, and wrapping auxiliary trees and possible combinations. Existing treebanks are used because they allow a corpus-based analysis of grammars of realistic size. Processing of different S-LTG is performed by a stochastic version of the two-phase Early-based parsing strategy introduced in (Schabes and Joshi, 1991) .",
                "cite_spans": [
                    {
                        "start": 484,
                        "end": 499,
                        "text": "(Schabes, 1992;",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 500,
                        "end": 526,
                        "text": "Schabes and \\Vaters, 1996)",
                        "ref_id": null
                    },
                    {
                        "start": 921,
                        "end": 946,
                        "text": "(Schabes and Joshi, 1991)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "This abstract describes work in progress. So far, we have concentrated on the automatic extraction of S-LTGs of different kinds (actually S-LTSG, S-LTIG, and S-LTAG). This phase is completed and we will report on first experiments using the Penn-Treebank (Marcus et al., 1993) and Negra, a treebank for German (Skut et al., 1997) . A first version of the two-phase parser is implemented, and we have started first tests concerning its performance .",
                "cite_spans": [
                    {
                        "start": 255,
                        "end": 276,
                        "text": "(Marcus et al., 1993)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 310,
                        "end": 329,
                        "text": "(Skut et al., 1997)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Given a treebank, grammar extraction is the process of decomposing each parse tree into smaller units called subtrees. In our approach, the underlying decomposition operation 1. should yield lexically anchored subtrees, and 2. should be guided by linguistic principles.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grammar extraction",
                "sec_num": "2"
            },
            {
                "text": "The motivation behind (1) is the observation that in practice stochastic CFG perform worse than nonhierarchical approaches, and that lexicalized tree grammars may be able to capture both distributional and hierarchical information (Schabes and Waters, 1996) . Concerning (2) we want to take advantage of the linguistic principles explicitly or implicitly used to define a treebank. This is motivated by the hypothesis that it will better support the development of on-line or incremental learning strategies (the cutting criteria are less dependent from the quantity and quality of the existing treebank than purely statistically based approaches, see also sec. 5) and that it renders possible a comparison of an induced grammar with a linguistically based competence grammar. Both aspects (but especially the latter one) are of importance because it is possible to apply the same learning strategy also to a treebank computed by some competence grammar, and to investigate methods for combining treebanks and competence grammars (see sec. 6).",
                "cite_spans": [
                    {
                        "start": 231,
                        "end": 257,
                        "text": "(Schabes and Waters, 1996)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grammar extraction",
                "sec_num": "2"
            },
            {
                "text": "However, in this paper we will focus on the use of existing treebanks using the Penn-Treebank (Marcus et al., 1993) and Negra, a treebank for German (Skut et al., 1997) . First, it is assumed that the treebank comes with a notion of lexical and phrasal head, i.e\" with a kind of head principle (see also (Charniak, 1997) ). In the Negra treebank, head elements are explicitly tagged. For the Penn treebank, the head relation has been determined manually. In case it is not possible to uniquely identify one head element there exists a parameter called DIRECTION which specifies whether the left or right candidate should be selected. Note that by means of this parameter we can also specify whether the resulting grammar should prefer a left or right branching.",
                "cite_spans": [
                    {
                        "start": 94,
                        "end": 115,
                        "text": "(Marcus et al., 1993)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 149,
                        "end": 168,
                        "text": "(Skut et al., 1997)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 304,
                        "end": 320,
                        "text": "(Charniak, 1997)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grammar extraction",
                "sec_num": "2"
            },
            {
                "text": "Using the head information, each tree from the treebank is decomposed from the top downwards into a set of subtrees, such that each non-terminal non-headed subtree is cut off, and the cutting point is marked for substitution. The same process is then recursively applied to each extracted subtree. Due to the assumed head notion each extracted tree will automatically be lexically anchored (and the path from the lexical anchor to the root can be seen as a head-chain). FUrthermore, every terminal element which is a sister of a node of the head-chain will also remain in the extracted tree. Thus, the yield of the extracted tree might contain several terminal substrings, which gives interesting patterns of word or POS sequences. For each extracted tree a frequency counter is used to compute the probability p(t) of a tree t, after the whole treebank has been processed, such that l:t:root(t)=a p(t) = 1, where a denotes the root labe! of a tree t.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grammar extraction",
                "sec_num": "2"
            },
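As a rough illustration of the decomposition and probability estimation described above, the following Python sketch cuts off non-headed, non-terminal subtrees, marks the cut points for substitution, and assigns each extracted tree a relative frequency normalized per root label. The Node class, its fields, and the helper names are assumptions made for this example, not the paper's implementation.

# Illustrative sketch: head-driven decomposition of a parse tree into
# lexically anchored elementary trees, with relative-frequency estimates.
from collections import defaultdict

class Node:
    def __init__(self, label, children=None, is_head=False, subst=False):
        self.label = label
        self.children = children or []   # empty list => terminal node
        self.is_head = is_head           # head marking from the treebank / head principle
        self.subst = subst               # cut-off point, marked for substitution

def decompose(root):
    """Cut off every non-terminal, non-headed subtree and recurse into it."""
    extracted = []
    def walk(node):
        for i, child in enumerate(node.children):
            if child.children and not child.is_head:
                node.children[i] = Node(child.label, subst=True)   # cut here
                extracted.extend(decompose(child))
            else:
                walk(child)          # head chain and terminal sisters stay in the tree
    walk(root)
    return [root] + extracted

def signature(node):
    """Hashable form of a tree (root label first), usable as a dictionary key."""
    return (node.label,) + tuple(signature(c) for c in node.children)

def estimate_probabilities(trees):
    """p(t) = freq(t) / total frequency of trees sharing t's root label."""
    freq = defaultdict(int)
    for t in trees:
        freq[signature(t)] += 1
    per_root = defaultdict(int)
    for sig, n in freq.items():
        per_root[sig[0]] += n
    return {sig: n / per_root[sig[0]] for sig, n in freq.items()}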
            {
                "text": "After a tree has been decomposed completely we obtain a set of lexicalized elementary trees where each nonterminal of the yield is marked for substitution. In a next step the set of elementary trees is divided into a set of initial and auxiliary trees. The set of auxiliary trees is further subdivided into a set of left, right, and wrapping auxiliary trees following (Schabes and Waters, 1995 ) (using special foot note labels, like :lfoot, :rfoot, and :wfoot). Note that the identification of possible auxiliary trees is strongly corpus-driven. Using special foot note labels allows us to trigger carefully the corresponding inference rules. For example, it might be possible to treat the :wfoot labe! as the substitution labe!, which means that we consider the extracted grammar as a S-LTIG, or only highly frequent wrapping auxiiiary trees wiil be wnsidered. It is also possible to treat every foot labe! as the substitution labe!, which means that the extracted grammar only allows substitution.",
                "cite_spans": [
                    {
                        "start": 368,
                        "end": 393,
                        "text": "(Schabes and Waters, 1995",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Grammar extraction",
                "sec_num": "2"
            },
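Continuing the same illustrative sketch (and reusing its Node class), the division into initial, left, right, and wrapping auxiliary trees could be approximated by locating a frontier nonterminal that carries the root label and checking on which side of it material remains. The function below is an assumption for illustration; the convention that a left auxiliary tree has its foot node as the rightmost leaf follows the usual tree insertion grammar terminology.

def frontier(node):
    """Left-to-right list of the leaves of an elementary tree."""
    if not node.children:
        return [node]
    return [leaf for child in node.children for leaf in frontier(child)]

def classify(tree):
    """Return the tree type based on the position of a candidate foot node."""
    leaves = frontier(tree)
    feet = [i for i, leaf in enumerate(leaves)
            if leaf.subst and leaf.label == tree.label]
    if not feet:
        return "initial"              # no foot node candidate on the frontier
    i = feet[0]
    if i == len(leaves) - 1:
        return ":lfoot"               # left auxiliary: material to the left of the foot
    if i == 0:
        return ":rfoot"               # right auxiliary: material to the right of the foot
    return ":wfoot"                   # wrapping auxiliary: material on both sides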
            {
                "text": "The resulting S-LTG will be processed by a twophase stochastic parser along the line of (Schabes and Joshi, 1991) . In a first step the input string is used for retrieving the relevant subset of elementary trees. Note that the yield of an elementary tree might consist of a sequence of lexical elements. Thus in order to support efficient access, the deepest leftmost chain of lexical elements is used as index to an elementary tree. Each such index is stored in a decision tree. The first step is then realized by means of a recursive tree traversal which identifies all (langest) matching substrings of the input string (see also sec. 4). Parsing of lexically triggered trees is performed in the second step using an Earley-based strategy. In order to ease implementation of different strategies, the different parsing operations are expressed as inference rules and controlled by a chart-based agenda strategy along the line of (Shieber et al., 1995) . So far, we have implemented a version for running S-LTIG which is based on (Schabes and Waters, 1995) . The inference rules can be triggered through boolean parameters, which allows flexible hiding of auxiliary trees of different kinds.",
                "cite_spans": [
                    {
                        "start": 88,
                        "end": 113,
                        "text": "(Schabes and Joshi, 1991)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 931,
                        "end": 953,
                        "text": "(Shieber et al., 1995)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 1031,
                        "end": 1057,
                        "text": "(Schabes and Waters, 1995)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Two-phase parsing of S-LTG",
                "sec_num": "3"
            },
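A minimal sketch of how the first, retrieval phase could be organized: elementary trees are indexed in a trie keyed by their chain of lexical anchors, and every start position of the input is expanded to its longest match. The trie stands in for the decision tree mentioned above; all names and the data layout are assumptions for this illustration, not the paper's code.

# Illustrative sketch: index elementary trees by their lexical-anchor chain
# and retrieve, per start position, the trees of the longest matching substring.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.trees = []              # elementary trees anchored at this key

def index_tree(root, anchors, tree):
    node = root
    for a in anchors:
        node = node.children.setdefault(a, TrieNode())
    node.trees.append(tree)

def retrieve(root, tokens):
    """Follow the trie from every start position; keep the longest match found."""
    selected = []
    for start in range(len(tokens)):
        node, best = root, None
        for tok in tokens[start:]:
            node = node.children.get(tok)
            if node is None:
                break
            if node.trees:
                best = node          # remember the longest match so far
        if best is not None:
            selected.extend(best.trees)
    return selected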
            {
                "text": "We will briefty report on first results of our method using the Negra treebank ( 4270 sentences) and the section 02, 03, 04 from the Penn treebank (the first 4270 sentences). In both cases we extracted three different versions of S-LTG (note that no normalization of the treebanks has been performed): (a) lexical anchors are words, (b) lexical anchors are partof-speech, and (c) all terminal elements are substituted by the constant :term, which means that lexical information is ignored. For each grammar we report the number of elementary trees, left, right, and wrapping auxiliary trees. In a second experiment we evaluated the performance of the implemented S-LTIG parser using the extracted Penn treebank with words as lexical anchors. We applied all sentences on the extracted grammar and computed the following average valnes for the first phase: sentence length: 27.54, number of matching snbstrings: 15.93, number of elementary trees: 492.77, number of different root labels: 33.16. The average run-time for each sentence (measnred an a Sun Ultra 2 (200 mhz): 0.0231 sec. In a next step we tested the run-time behaviour of the whole parser on the same input, however ignoring every parse which took langer than 30 sec. (about 20 %). The average run-time for each sentence (exhaustive mode): 6.18 sec. This is promising, since the parser is still not optimized.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "First experiments",
                "sec_num": "4"
            },
            {
                "text": "We also tried first blind tests, but it turned ont that the current considered size of the treebanks is too small to get reliable results on unseen data (randomly selecting 10 % of a treebank for testing; 90 % for training). The reason is that if we consider only words as anchors then we rarely get a complete parse result (around 10 %). If we consider only POS then the number of elementary trees retrieved through the first phase increases causing the current parser prototype to be slow (due to the restricted annotation schema). 1 A better strategy seems to be the use of words only for lexical anchors and POS for all other terminal nodes, or to use only closed-class words as lexical anchors (assuming a head principle based on functional categories). In that case it would also be possible to adapt the strategies described in (Srinivas, 1997) wrt. supertagging in order to reduce the set of retrieved trees before the second phase is called.",
                "cite_spans": [
                    {
                        "start": 835,
                        "end": 851,
                        "text": "(Srinivas, 1997)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "First experiments",
                "sec_num": "4"
            },
            {
                "text": "Here we will discuss alternative approaches for converting treebanks into lexicalized tree grammars, namely the Data-oriented Parsing (DOP) framework (Bad, 1995) and approaches based on applying Explanation-based Learning (EBL) to NL parsing (e.g\" (Samuelsson, 1994; Srinivas, 1997) ).",
                "cite_spans": [
                    {
                        "start": 150,
                        "end": 161,
                        "text": "(Bad, 1995)",
                        "ref_id": null
                    },
                    {
                        "start": 248,
                        "end": 266,
                        "text": "(Samuelsson, 1994;",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 267,
                        "end": 282,
                        "text": "Srinivas, 1997)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "5"
            },
            {
                "text": "The general strategy of our approach is similar to DOP with the notable distinction that in our framework all trees must be lexically anchored and that in addition to substitution, we also consider adjunction and restricted versions of it. In the EBL approach to NL parsing the core idea is to use a competence grammar and a training corpus to construct a treebank. The treebank is then used to obtain a specialized grammar which can be processed much faster than -the original one at the price of a small lass in coverage. Samuelsson (1994) presents a method in which tree decomposition is completely automatized using the information-theoretical concept of entropy, after the whole treebank has been indexed in an and-or tree. This implies that a new grammar has tobe computed if the treebank changes (i.e., reduced incrementallity) and that the generality of the induced subtrees depends much more on the size and variation of the treebank than ours. On the other side, this approach seems to be more sensitive to the distribution of sequences of lexical anchors than our approach, so that we will explore its integration.",
                "cite_spans": [
                    {
                        "start": 524,
                        "end": 541,
                        "text": "Samuelsson (1994)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "5"
            },
            {
                "text": "In (Srinivas, 1997) the application of EBL to parsing of LTAG is presented. The core idea is to generalize the derivation trees generated by an LTAG and to allow for a finite state transducer representation of the set of generealized parses. The POS sequence of a training instance is used as the index to a generalized parse. Generalization wrt. recursion is achieved by introducing the Kleene star into the yield of an auxiliary tree that was part of the training example, which allows generalization about the length of the training sentences. This approach is an important candidate for improvements of our two-phase parser once we have acquired an S-LTAG.",
                "cite_spans": [
                    {
                        "start": 3,
                        "end": 19,
                        "text": "(Srinivas, 1997)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "5"
            },
            {
                "text": "6 Future steps",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "5"
            },
            {
                "text": "The work described here is certainly in its early phase.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "5"
            },
            {
                "text": "The next future steps (partly already started) will be: (1) measuring the coverage of an extracted S-LTG, (2) incremental grammar induction, (3) combination of a competence grammar and a treebank. 1 already applied the same learning strategy on derivation trees obtained from a !arge HPSG-based English grammar in order to speed up parsing of HPSG (extending the work described in (Neumann, 1994) ). Now 1 am exploring methods for merging such an \"HPSG-based\" S-LTG with one extracted from a treebank. The same will also be explored wrt. a competence-based LTAG, like the one which comes with the XTAG system (Daran et al., 1994) .",
                "cite_spans": [
                    {
                        "start": 381,
                        "end": 396,
                        "text": "(Neumann, 1994)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 609,
                        "end": 629,
                        "text": "(Daran et al., 1994)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related work",
                "sec_num": "5"
            },
            {
                "text": "Applying the same tcst as dcscribed above on POS, the average number of elementary trecs retrieved is 2292.86, i.e\" the number seems to increase by a factor of 5.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "The research underlying this paper was supported by a research grant from the German Bundesministerium f\u00fcr Bildung, Wissenschaft, Forschung und Technologie (BMBF) to t.he. DFT<T proje.r.t PARADIME, FKZ ITW 9704. 1 would like to thank Tilman Becker for many fruitful discussions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "acknowledgement",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Enriching Linguistics with Statistics: Performance Models of Natural Langu.age",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Bod",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. Bod. 1995. Enriching Linguistics with Statistics: Performance Models of Natural Langu.age. Ph.D. thesis, University of Amsterdam. ILLC Disserta- tion Series 1995-14.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Statistical parsing with a context-free grammar and word statistics",
                "authors": [
                    {
                        "first": "E:",
                        "middle": [],
                        "last": "Charniak",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "AAAI-97",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E: Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In AAAI-97, Providence, Rhode Island.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Xtag system -a wide coverage grammar for english",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Doran",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Egedi",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Hockey",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Srinivas",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Zeidel",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the 15th International Conference on Computational Linguistics {COLING)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. Doran, D. Egedi, B. Hockey, B. Srinivas, and M. Zeidel. 1994. Xtag system -a wide cover- age grammar for english. In Proceedings of the 15th International Conference on Computational Linguistics {COLING), Kyoto, Japan.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Building a large annotated corpus of english: The penn treebank",
                "authors": [
                    {
                        "first": "M",
                        "middle": [
                            "P"
                        ],
                        "last": "Marcus",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Santorini",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [
                            "A"
                        ],
                        "last": "Marcinkiewicz",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Computational Linguistics",
                "volume": "19",
                "issue": "",
                "pages": "313--330",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of english: The penn treebank. Computational Linguistics, 19:313-330.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Application of explanation based learning for efficient processing of constraint-based grammars",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Neumann",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the \u2022 Tenth IEEE Conference on Artifical Intelligence for Applications",
                "volume": "",
                "issue": "",
                "pages": "208--215",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "G. Neumann. 1994. Application of explana- tion based learning for efficient processing of constraint-based grammars. In Proceedings of the \u2022 Tenth IEEE Conference on Artifical Intelligence for Applications, pages 208-215, San Antonio, Texas, March.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Grammar specialization through entropy thresholds",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Samuelsson",
                        "suffix": ""
                    }
                ],
                "year": 1994,
                "venue": "Proceedings of the 32nd Annual Meeting of the Association forComputational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "188--195",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. Samuelsson. 1994. Grammar specialization through entropy thresholds. In Proceedings of the 32nd Annual Meeting of the Association forCom- putational Linguistics, pages 188-195.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Parsing with lexicalized tree adjoining grammar",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "K"
                        ],
                        "last": "Joshi",
                        "suffix": ""
                    }
                ],
                "year": 1991,
                "venue": "Current Issues in Parsing Technology",
                "volume": "",
                "issue": "",
                "pages": "25--48",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Schabes and A. K. Joshi. 1991. Parsing with lexi- calized tree adjoining grammar. In M. Tomita, ed- itor, Current Issues in Parsing Technology, pages 25-48. Kluwer, Boston.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Tree insertion grammar: A cubic-time parsable formalism that lexicalizes context-free grammar without changing the trees produced",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Waters",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Computational Linguistics",
                "volume": "21",
                "issue": "",
                "pages": "479--513",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Schabes and R. Waters. 1995. Tree insertion grammar: A cubic-time parsable formalism that lexicalizes context-free grammar without changing the trees produced. Computational Linguistics, 21:479-513.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Stochastic lexicalized tree-insertion grammar",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Waters",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Recent Advances in Parsing Technology",
                "volume": "",
                "issue": "",
                "pages": "281--294",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Schabes and Il. Waters. 1996. Stochastic lexi- calized tree-insertion grammar. In H. Bunt and M. Tomita, editors, Recent Advances in Pars- ing Technology, pages 281-294. Kluwer Academic Press, London.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Stochastic lexicalized treeadjoining grammars",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    }
                ],
                "year": 1992,
                "venue": "Proceedings of the L/th International Con/erence on Computational Linguistics (COLING)",
                "volume": "",
                "issue": "",
                "pages": "426--432",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. Schabes. 1992. Stochastic lexicalized tree- adjoining grammars. In Proceedings of the L/th International Con/erence on Computational Lin- guistics (COLING), pages 426-432, Nantes.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Principles and implementation of deductive parsing",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Shieber",
                        "suffix": ""
                    },
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "Schabes",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 1995,
                "venue": "Journal of Logic and Computation",
                "volume": "24",
                "issue": "",
                "pages": "3--36",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Shieber, Y. Schabes, and F. Pereira. 1995. Prin- ciples and implementation of deductive parsing. Journal of Logic and Computation, 24:3-36.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "An annotation scheme for free worder order languages",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Skut",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Krenn",
                        "suffix": ""
                    },
                    {
                        "first": "T",
                        "middle": [],
                        "last": "Brants",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Uszkoreit",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "5th lntematinnal Conference of Applied Natural Language",
                "volume": "",
                "issue": "",
                "pages": "88--94",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "W. Skut, B. Krenn, T. Brants, and H. Uszkoreit. 1997. An annotation scheme for free worder order languages. In 5th lntematinnal Conference of Ap- plied Natural Language, pages 88-94, Washington, USA, March.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Complexity of Lexical Restrictions and lts Relevance to Pat'tial Parsing",
                "authors": [
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Srinivas",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "B. Srinivas. 1997. Complexity of Lexical Restric- tions and lts Relevance to Pat'tial Parsing. Ph.D. thesis, University of Pennsylvania. IRCS Report 97-10.",
                "links": null
            }
        },
        "ref_entries": {}
    }
}