{
"paper_id": "W10-0406",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T05:05:55.155524Z"
},
"title": "Learning Simple Wikipedia: A Cogitation in Ascertaining Abecedarian Language",
"authors": [
{
"first": "Courtney",
"middle": [],
"last": "Napoles",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21211",
"region": "MD"
}
},
"email": "courtneyn@jhu.edu"
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Johns Hopkins University Baltimore",
"location": {
"postCode": "21211",
"region": "MD"
}
},
"email": "mdredze@cs.jhu.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Text simplification is the process of changing vocabulary and grammatical structure to create a more accessible version of the text while maintaining the underlying information and content. Automated tools for text simplification are a practical way to make large corpora of text accessible to a wider audience lacking high levels of fluency in the corpus language. In this work, we investigate the potential of Simple Wikipedia to assist automatic text simplification by building a statistical classification system that discriminates simple English from ordinary English. Most text simplification systems are based on handwritten rules (e.g., PEST (Carroll et al., 1999) and its module SYSTAR (Canning et al., 2000)), and therefore face limitations scaling and transferring across domains. The potential for using Simple Wikipedia for text simplification is significant; it contains nearly 60,000 articles with revision histories and aligned articles to ordinary English Wikipedia. Using articles from Simple Wikipedia and ordinary Wikipedia, we evaluated different classifiers and feature sets to identify the most discriminative features of simple English for use across domains. These findings help further understanding of what makes text simple and can be applied as a tool to help writers craft simple text.",
"pdf_parse": {
"paper_id": "W10-0406",
"_pdf_hash": "",
"abstract": [
{
"text": "Text simplification is the process of changing vocabulary and grammatical structure to create a more accessible version of the text while maintaining the underlying information and content. Automated tools for text simplification are a practical way to make large corpora of text accessible to a wider audience lacking high levels of fluency in the corpus language. In this work, we investigate the potential of Simple Wikipedia to assist automatic text simplification by building a statistical classification system that discriminates simple English from ordinary English. Most text simplification systems are based on handwritten rules (e.g., PEST (Carroll et al., 1999) and its module SYSTAR (Canning et al., 2000)), and therefore face limitations scaling and transferring across domains. The potential for using Simple Wikipedia for text simplification is significant; it contains nearly 60,000 articles with revision histories and aligned articles to ordinary English Wikipedia. Using articles from Simple Wikipedia and ordinary Wikipedia, we evaluated different classifiers and feature sets to identify the most discriminative features of simple English for use across domains. These findings help further understanding of what makes text simple and can be applied as a tool to help writers craft simple text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The availability of large collections of electronic texts is a boon to information seekers, however, advanced texts often require fluency in the language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Text simplification (TS) is an emerging area of textto-text generation that focuses on increasing the readability of a given text. Potential applications can increase the accessibility of text, which has great value in education, public health, and safety, and can aid natural language processing tasks such as machine translation and text generation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Corresponding to these applications, TS can be broken down into two rough categories depending on the target \"reader.\" The first type of TS aims to increase human readability for people lacking highlevel language skills, either because of age, education level, unfamiliarity with the language, or disability. Historically, generating this text has been done by hand, which is time consuming and expensive, especially when dealing with material that requires expertise, such as legal documents. Most current automatic TS systems rely on handwritten rules, e.g., PEST (Carroll et al., 1999) , its SYSTAR module (Canning et al., 2000) , and the method described by Siddharthan (2006) . Systems using handwritten rules can be susceptible to changes in domains and need to be modified for each new domain or language. There has been some research into automatically learning the rules for simplifying text using aligned corpora (Daelemans et al., 2004; Yatskar et al., 2010) , but these have yet to match the performance hand-crafted rule systems. An example of a manually simplified sentence can be found in table 1.",
"cite_spans": [
{
"start": 566,
"end": 588,
"text": "(Carroll et al., 1999)",
"ref_id": "BIBREF7"
},
{
"start": 609,
"end": 631,
"text": "(Canning et al., 2000)",
"ref_id": "BIBREF6"
},
{
"start": 662,
"end": 680,
"text": "Siddharthan (2006)",
"ref_id": null
},
{
"start": 923,
"end": 947,
"text": "(Daelemans et al., 2004;",
"ref_id": "BIBREF11"
},
{
"start": 948,
"end": 969,
"text": "Yatskar et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The second type of TS has the goal of increasing the machine readability of text to aid tasks such as information extraction, machine translation, generative summarization, and other text generation tasks for selecting and evaluating the best candidate output text. In machine translation, the evaluation tool most commonly used for evaluating output, the BLEU score (Papineni et al., 2001 ), rates the \"goodness\" of output based on n-gram overlap with human-generated text. However this metric has been criticized for not accurately measuring the fluency of text and there is active research into other metrics (Callison-Burch et al., 2006; Ye et al., 2007) . Previous studies suggest that text simplified for machine and human comprehension are categorically different (Chae and Nenkova, 2009) . Our research considers text simplified for human readers, but the findings can be used to identify features that discriminate simple text for both applications.",
"cite_spans": [
{
"start": 367,
"end": 389,
"text": "(Papineni et al., 2001",
"ref_id": null
},
{
"start": 612,
"end": 641,
"text": "(Callison-Burch et al., 2006;",
"ref_id": "BIBREF5"
},
{
"start": 642,
"end": 658,
"text": "Ye et al., 2007)",
"ref_id": null
},
{
"start": 771,
"end": 795,
"text": "(Chae and Nenkova, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The process of TS can be divided into three aspects: removing extraneous or superfluous text, substituting more complex lexical and syntactic forms, and inserting information to offer further clarification where needed (Alu\u00edsio et al., 2008) . In this regard, TS is related to several different natural language processing tasks such as text summarization, compression, machine translation, and paraphrasing.",
"cite_spans": [
{
"start": 219,
"end": 241,
"text": "(Alu\u00edsio et al., 2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While none of these tasks alone directly provide a solution to text simplification, techniques can be drawn from each. Summarization techniques can be used to identify the crucial, most informative parts of a text and compression can be used to remove superfluous words and phrases. In fact, in the Wikipedia documents analyzed for this research, the average length of a \"simple\" document is only 21% the length of an \"ordinary\" English document (although this may be an unintentional byproduct of how articles were simplified, as discussed in section 6.1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we study the properties of language that differentiate simple from ordinary text for human readers. Specifically, we use statistical learning techniques to identify the most discriminative features of simple English and \"ordinary\" English using articles from Simple Wikipedia and English Wikipedia. We use cognitively motivated features as well as statistical measurements of a document's lexical, syntactic, and surface features. Our study demonstrates the validity and potential benefits of using Simple Wikipedia as a resource for TS research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Every person has the right to a name, in which is included a first name and surname. . . . The alias chosen for legal activities has the same protection as given to the name. Same text in simple language Every person has the right to have a name, and the law protects people's names. Also, the law protects a person's alias. . . . The name is made up of a first name and a surname (name = first name + surname). Table 1 : A text in ordinary and simple language from Alu\u00edsio et al. (2008) .",
"cite_spans": [
{
"start": 466,
"end": 487,
"text": "Alu\u00edsio et al. (2008)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 412,
"end": 419,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Ordinary text",
"sec_num": null
},
{
"text": "Wikipedia is a unique resource for natural language processing tasks due to its sheer size, accessibility, language diversity, article structure, interdocument links, and inter-language document alignments. Denoyer and Gallinari (2006) introduced the Wikipedia XML Corpus, with 1.5 million documents in eight languages from Wikipedia, that stored the rich structural information of Wikipedia with XML. This corpus was designed specifically for XML retrieval but has uses in natural language processing, categorization, machine translation, entity ranking, etc. YAWN (Schenkel et al., 2007) , a Wikipedia XML corpus with semantic tags, is another example of exploiting Wikipedia's structural information. Wikipedia provides XML site dumps every few weeks in all languages as well as static HTML dumps.",
"cite_spans": [
{
"start": 207,
"end": 235,
"text": "Denoyer and Gallinari (2006)",
"ref_id": "BIBREF12"
},
{
"start": 561,
"end": 589,
"text": "YAWN (Schenkel et al., 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia as a Corpus",
"sec_num": "2"
},
{
"text": "A diverse array of NLP research in the past few years has used Wikipedia, such as for word sense disambiguation (Mihalcea, 2007) , classification (Gantner and Schmidt-Thieme, 2009), machine translation (Smith et al., 2010), coreference resolution (Versley et al., 2008; Yang and Su, 2007) , sentence extraction for summarization (Biadsy et al., 2008) , information retrieval (M\u00fcller and Gurevych, 2008) , and semantic role labeling (Ponzetto and Strube, 2006) , to name a few. However, except for very recent work by Yatskar et al. (2010) , to our knowledge there has not been comparable research in using Wikipedia for text simplification.",
"cite_spans": [
{
"start": 112,
"end": 128,
"text": "(Mihalcea, 2007)",
"ref_id": null
},
{
"start": 247,
"end": 269,
"text": "(Versley et al., 2008;",
"ref_id": null
},
{
"start": 270,
"end": 288,
"text": "Yang and Su, 2007)",
"ref_id": null
},
{
"start": 329,
"end": 350,
"text": "(Biadsy et al., 2008)",
"ref_id": "BIBREF3"
},
{
"start": 375,
"end": 402,
"text": "(M\u00fcller and Gurevych, 2008)",
"ref_id": null
},
{
"start": 432,
"end": 459,
"text": "(Ponzetto and Strube, 2006)",
"ref_id": null
},
{
"start": 517,
"end": 538,
"text": "Yatskar et al. (2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia as a Corpus",
"sec_num": "2"
},
{
"text": "Hawking was the Lucasian Professor of Mathematics at the University of Cambridge for thirty years, taking up the post in 1979 and retiring on 1 October 2009.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ordinary Wikipedia",
"sec_num": null
},
{
"text": "Hawking was a professor of mathematics at the University of Cambridge (a position that Isaac Newton once had). He retired on October 1st 2009. What makes Wikipedia an excellent resource for text simplification is the new Simple Wikipedia project 1 , a collection of 58,000 English Wikipedia articles that have been rewritten in Simple English, which uses basic vocabulary and less complex grammar to make the content of Wikipedia accessible to students, children, adults with learning difficulties, and non-native English speakers. In addition to being a large corpus, these articles are linked to their ordinary Wikipedia counterparts, so for each article both a simple and an ordinary version are available. Furthermore, on inspection many articles in Simple Wikipedia appear to be copied and edited from the corresponding ordinary Wikipedia article. This information, together with revision history and flags signifying unsimplified text, can provide a scale of information on the text-simplification process previously unavailable. Example sentences from Simple Wikipedia and ordinary Wikipedia are shown in table 2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Wikipedia",
"sec_num": null
},
{
"text": "We used articles from Simple Wikipedia and ordinary English Wikipedia to create a large corpus of simple and ordinary articles for our experiments. In order to experiment with models that work across domains, the corpus includes articles from nine of the primary categories identified in Simple Wikipedia: Everyday Life, Geography, History, Knowledge, Literature, Media, People, Religion, and Science. A total of 55,433 ordinary and 42,973 simple articles were extracted and processed from English Wikipedia and Simple Wikipedia, re-1 http://simple.wikipedia.org/ Coarse Tag Penn Treebank Tags DET DT, PDT ADJ JJ, JJR, JJS N NN, NNS, NP, NPS, PRP, FW ADV RB, RBR, RBS V VB, VBN, VBG, VBP, VBZ, MD WH WDT, WP, WP$, WRB spectively. Each document contains at least two sentences. Additionally, the corpus contains only the main text body of each article and does not consider info boxes, tables, lists, external and crossreferences, and other structural features. The experiments that follow randomly extract documents and sentences from this collection.",
"cite_spans": [],
"ref_spans": [
{
"start": 571,
"end": 710,
"text": "Tag Penn Treebank Tags DET DT, PDT ADJ JJ, JJR, JJS N NN, NNS, NP, NPS, PRP, FW ADV RB, RBR, RBS V VB, VBN, VBG, VBP, VBZ, MD WH",
"ref_id": null
}
],
"eq_spans": [],
"section": "Simple Wikipedia",
"sec_num": null
},
{
"text": "Before extracting features, we ran a series of natural language processing tools to preprocess the collection. First, all of the XML and \"wiki markup\" was removed. Each document was split into sentences using the Punkt sentence tokenizer (Kiss and Strunk, 2006) in NLTK (Bird and Loper, 2004) . We then parsed each sentence using the PCFG parser of Huang and Harper (2009), a modified version of the Berkeley parser (Petrov et al., 2006; Petrov and Klein, 2007) , for the tree structure and part-ofspeech tags.",
"cite_spans": [
{
"start": 238,
"end": 261,
"text": "(Kiss and Strunk, 2006)",
"ref_id": null
},
{
"start": 270,
"end": 292,
"text": "(Bird and Loper, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 416,
"end": 437,
"text": "(Petrov et al., 2006;",
"ref_id": null
},
{
"start": 438,
"end": 461,
"text": "Petrov and Klein, 2007)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Simple Wikipedia",
"sec_num": null
},
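As an illustration of the preprocessing step described above, the sketch below segments plain text into sentences with NLTK's Punkt tokenizer, which the paper reports using. The input string is a toy example, markup removal is assumed to have already happened, and downloading the Punkt model is an environment-specific assumption.

```python
# Minimal sketch of Punkt-based sentence splitting with NLTK, assuming the
# "punkt" model has been downloaded once via nltk.download("punkt").
import nltk

text = ("Hawking was a professor of mathematics at the University of Cambridge. "
        "He retired on October 1st 2009.")

sentences = nltk.sent_tokenize(text)  # sent_tokenize uses the Punkt tokenizer
print(sentences)
```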
{
"text": "To evaluate the feasibility of learning simple and ordinary texts, we sought to identify text properties that differentiated between these classes. Using the two document collections, we constructed a simple binary classification task: label a piece of text as either simple or ordinary. The text was labeled according to its source: simple or ordinary Wikipedia. From each piece of text, we extracted a set of features designed to capture differences between the texts, using cognitively motivated features based on a document's lexical, syntactic, and surface features. We first describe our features and then our experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Setup",
"sec_num": "3"
},
{
"text": "We began by examining the guidelines for writing Simple Wikipedia pages. 2 These guidelines suggest that articles use only the 1000 most common and basic English words and contain simple grammar and short sentences. Articles should be short but can be longer if they need to explain vocabulary words necessary to understand the topic. Additionally, words should appear on lists of basic English words, such as the Voice of America Special English words list (Voice Of America, 2009) or the Ogden Basic English list (Ogden, 1930) . Idioms should be avoided as well as compounds and the passive voice as opposed to a single simple verb.",
"cite_spans": [
{
"start": 73,
"end": 74,
"text": "2",
"ref_id": null
},
{
"start": 458,
"end": 482,
"text": "(Voice Of America, 2009)",
"ref_id": null
},
{
"start": 515,
"end": 528,
"text": "(Ogden, 1930)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
{
"text": "To capture these properties in the text, we created four classes of features: lexical, part-of-speech, surface, and parse. Several of our features have previously been used for measuring text fluency (Alu\u00edsio et al., 2008; Chae and Nenkova, 2009; Feng et al., 2009; Petersen and Ostendorf, 2007) . Feng et al. (2009) suggests that the document vocabulary is a good predictor of document readability. Simple texts are more likely to use basic words more often as opposed to more complicated, domain-specific words used in ordinary texts. To capture these features we used a unigram bag-of-words representation. We note that lexical features are unlikely to be useful unless we have access to a large training corpus that allowed the estimation of the relative frequency of words (Chae and Nenkova, 2009) . Additionally, we can expect lexical features to be very fragile for crossdomain experiments as they are especially susceptible to changes in domain vocabulary. Nevertheless, we include these features as a baseline in our experiments.",
"cite_spans": [
{
"start": 200,
"end": 222,
"text": "(Alu\u00edsio et al., 2008;",
"ref_id": "BIBREF0"
},
{
"start": 223,
"end": 246,
"text": "Chae and Nenkova, 2009;",
"ref_id": "BIBREF8"
},
{
"start": 247,
"end": 265,
"text": "Feng et al., 2009;",
"ref_id": null
},
{
"start": 266,
"end": 295,
"text": "Petersen and Ostendorf, 2007)",
"ref_id": null
},
{
"start": 298,
"end": 316,
"text": "Feng et al. (2009)",
"ref_id": null
},
{
"start": 778,
"end": 802,
"text": "(Chae and Nenkova, 2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "4"
},
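To make the lexical baseline concrete, here is a small sketch of a unigram bag-of-words representation. The use of scikit-learn's CountVectorizer, the toy documents, and the lowercasing default are assumptions for illustration, not details taken from the paper.

```python
# Sketch: unigram bag-of-words (lexical) features for a small set of documents.
# scikit-learn is assumed to be available; the documents are illustrative.
from sklearn.feature_extraction.text import CountVectorizer

documents = [
    "Hawking was a professor of mathematics at the University of Cambridge.",
    "Hawking was the Lucasian Professor of Mathematics at the University of Cambridge for thirty years.",
]

vectorizer = CountVectorizer(lowercase=True, ngram_range=(1, 1))
X = vectorizer.fit_transform(documents)   # one row of unigram counts per document

print(X.shape)                   # (number of documents, vocabulary size)
print(len(vectorizer.vocabulary_))
```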
{
"text": "Parts of speech. A clear focus of the simple text guidelines is grammar and word type. One way of representing this information is by measuring the relative frequency of different types of parts of speech. We consider simple unigram part-ofspeech tag information. We measured the normalized counts and relative frequency of part-ofspeech tags and counts of bigram part-of-speech tags in each piece of text. Since Devlin and Unthank (2006) has shown that word order (subject verb object (SVO), object verb subject (OVS), etc.) is correlated with readability, we also included a reduced tagset to capture grammatical patterns (table 3) . We also included normalized counts of these reduced tags in the model.",
"cite_spans": [
{
"start": 413,
"end": 438,
"text": "Devlin and Unthank (2006)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [
{
"start": 624,
"end": 633,
"text": "(table 3)",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Lexical. Previous work by",
"sec_num": null
},
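The sketch below makes the reduced-tagset idea concrete: it collapses Penn Treebank tags into the coarse tags of table 3 and computes relative frequencies for one text. The function name and the hypothetical tagger output are illustrative; tags not covered by table 3 are binned as "OTHER", which is an assumption rather than a detail from the paper.

```python
# Sketch: map Penn Treebank tags to the coarse tagset of table 3 and compute
# normalized (relative-frequency) counts for a single piece of text.
from collections import Counter

COARSE = {
    "DET": {"DT", "PDT"},
    "ADJ": {"JJ", "JJR", "JJS"},
    "N":   {"NN", "NNS", "NP", "NPS", "PRP", "FW"},
    "ADV": {"RB", "RBR", "RBS"},
    "V":   {"VB", "VBN", "VBG", "VBP", "VBZ", "MD"},
    "WH":  {"WDT", "WP", "WP$", "WRB"},
}
PTB_TO_COARSE = {ptb: coarse for coarse, tags in COARSE.items() for ptb in tags}

def coarse_tag_frequencies(tagged_tokens):
    """tagged_tokens: list of (word, Penn Treebank tag) pairs."""
    counts = Counter(PTB_TO_COARSE.get(tag, "OTHER") for _, tag in tagged_tokens)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

# Hypothetical tagger output for "He retired on October 1st 2009."
example = [("He", "PRP"), ("retired", "VBD"), ("on", "IN"),
           ("October", "NP"), ("1st", "JJ"), ("2009", "CD"), (".", ".")]
print(coarse_tag_frequencies(example))
```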
{
"text": "While lexical items may be important, more general properties can be extracted from the lexical forms. We can also include features that correspond to surface information in the text. These features include document length, sentence length, word length, numbers of lexical types and tokens, and the ratio of types to tokens. All words are labeled as basic or not basic according to Ogden's Basic English 850 (BE850) list (Ogden, 1930) . 3 In order to measure the lexical complexity of a document, we include features for the number of BE850 words, the ratio of BE850 words to total words, and the type-token ratio of BE850 and non-BE850 words. Investigating the frequency and productivity of words not in the BE850 list will hopefully improve the flexibility of our model to work across domains and not learn any particular jargon. We also hope that the relative frequency and productivity measures of simple and non-simple words will codify the lexical choices of a sentence while avoiding the aforementioned problems with including specific lexical items. Table 4 shows the difference in some surface statistics in an aligned document from Simple and ordinary Wikipedia. In this example, nearly onethird of the words in the simple document are from the BE850 while less than a tenth of the words in the ordinary document are. Additionally, the productivity of words, particularly non-BE850 words, is much higher in the ordinary document. There are also clear differences in the length of the documents, and on average documents from ordinary Wikipedia are more than four times longer than documents from Simple Wikipedia.",
"cite_spans": [
{
"start": 421,
"end": 434,
"text": "(Ogden, 1930)",
"ref_id": null
},
{
"start": 437,
"end": 438,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1058,
"end": 1065,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Surface features.",
"sec_num": null
},
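A minimal sketch of a few of the surface features just described follows. The whitespace-style tokenization, the helper name, and the local file holding the BE850 word list are all assumptions; the paper does not specify these details.

```python
# Sketch: surface features for one document, including the BE850 ratio.
# Tokenization is deliberately simple; the BE850 list is loaded from a
# hypothetical local file and is an assumption, not the paper's exact setup.

def surface_features(sentences, be850_words):
    """sentences: list of token lists; be850_words: set of basic-English words."""
    tokens = [tok.lower() for sent in sentences for tok in sent]
    types = set(tokens)
    be850_tokens = [tok for tok in tokens if tok in be850_words]
    return {
        "num_sentences": len(sentences),
        "num_tokens": len(tokens),
        "num_types": len(types),
        "avg_sentence_length": len(tokens) / max(len(sentences), 1),
        "avg_word_length": sum(len(t) for t in tokens) / max(len(tokens), 1),
        "type_token_ratio": len(types) / max(len(tokens), 1),
        "be850_ratio": len(be850_tokens) / max(len(tokens), 1),
    }

# Hypothetical usage:
# be850 = set(open("basic_english_850.txt").read().split())
# print(surface_features([["He", "retired", "in", "2009", "."]], be850))
```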
{
"text": "Syntactic parse. As previously mentioned, a number of Wikipedia's writing guidelines focus on general grammatical rules of sentence structure. Evidence of these rules may be captured in the syntactic parse of the sentences in the text. Chae and Nenkova (2009) studied text fluency in the context of machine translation and found strong correlations between parse tree structures and sentence fluency.",
"cite_spans": [
{
"start": 236,
"end": 259,
"text": "Chae and Nenkova (2009)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Surface features.",
"sec_num": null
},
{
"text": "In order to represent the structural complexity of the text, we collected extracted features from the parse trees. Our features included the frequency and length of noun phrases, verb phrases, prepositional phrases, and relative clauses (including embedded structures). We also considered relative ratios, such as the ratio of noun to verb phrases, prepositional to noun phrases, and relative clauses to noun phrases. We used the length of the longest noun phrase as a signal of complexity, and we also sought features that measured how typical the sentences were of English text. We included some of the features from the parser reranking work of Charniak and Johnson (2005) : the height of the parse tree and the number of right branches from the root of the tree to the furthest right leaf that is not punctuation.",
"cite_spans": [
{
"start": 648,
"end": 675,
"text": "Charniak and Johnson (2005)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Surface features.",
"sec_num": null
},
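To illustrate a subset of these parse-derived features, here is a sketch that reads a bracketed parse with NLTK's Tree class and computes phrase counts, a noun-to-verb phrase ratio, the longest noun phrase, and tree height. The example parse string and the use of nltk.Tree are assumptions for illustration; the paper's parses come from a modified Berkeley parser.

```python
# Sketch: a few parse features (phrase counts, longest NP, tree height)
# computed from a bracketed parse string with NLTK's Tree class.
from nltk import Tree

def parse_features(bracketed_parse):
    tree = Tree.fromstring(bracketed_parse)
    nps = [t for t in tree.subtrees() if t.label() == "NP"]
    vps = [t for t in tree.subtrees() if t.label() == "VP"]
    pps = [t for t in tree.subtrees() if t.label() == "PP"]
    return {
        "tree_height": tree.height(),
        "num_np": len(nps),
        "num_vp": len(vps),
        "num_pp": len(pps),
        "np_to_vp_ratio": len(nps) / max(len(vps), 1),
        "longest_np": max((len(t.leaves()) for t in nps), default=0),
    }

example = "(S (NP (PRP He)) (VP (VBD retired) (PP (IN in) (NP (CD 2009)))) (. .))"
print(parse_features(example))
```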
{
"text": "Using the feature sets described above, we evaluated a simple/ordinary text classifier in several settings on each category. First, we considered the task of document classification, where a classifier determines whether a full Wikipedia article was from ordinary English Wikipedia or Simple Wikipedia. For each category of articles, we measured accuracy on this binary classification task using 10-fold cross-validation. In the second setting, we consid- ered the performance of a sentence-level classifier. The classifier labeled each sentence as either ordinary or simple and we report results using 10-fold cross-validation on a random split of the sentences. For both settings we also evaluated a single classifier trained on all categories. We next considered cross-category performance: how would a classifier trained to detect differences between simple and ordinary examples from one category do when tested on another category. In this experiment, we trained a single classifier on data from a single category and used the classifier to label examples from each of the other categories. We report the accuracy on each category in these transfer experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "For learning we require a binary classifier training algorithm. We evaluated several learning algorithms for classification and report results for each one: a) MIRA-a large margin online learning algorithm (Crammer et al., 2006) . Online learning algorithms observe examples sequentially and up-date the current hypothesis after each observation; b) Confidence Weighted (CW) learning-a probabilistic large margin online learning algorithm (Dredze et al., 2008) ; c) Maximum Entropy-a log-linear discriminative classifier (Berger et al., 1996) ; and d) Support Vector Machines (SVM)-a large margin discriminator (Joachims, 1998) .",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF10"
},
{
"start": 439,
"end": 460,
"text": "(Dredze et al., 2008)",
"ref_id": null
},
{
"start": 521,
"end": 542,
"text": "(Berger et al., 1996)",
"ref_id": "BIBREF2"
},
{
"start": 611,
"end": 627,
"text": "(Joachims, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "For each experiment, we used default settings of the parameters and 10 online iterations for the online methods (MIRA, CW). To create a fair comparison for each category, we limited the number of examples to a maximum of 2000.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
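As a rough illustration of this experimental setup, the sketch below runs 10-fold cross-validation for a binary simple-vs-ordinary classifier. The paper evaluates MIRA, CW, MaxEnt, and SVM; here scikit-learn's LogisticRegression (a MaxEnt-style log-linear model) and LinearSVC stand in, MIRA and CW have no scikit-learn equivalent, and the feature matrix and labels are placeholders assumed to come from the feature extraction sketched earlier.

```python
# Sketch: 10-fold cross-validation for the simple-vs-ordinary classifier.
# LogisticRegression stands in for MaxEnt and LinearSVC for the SVM;
# X and y are placeholders for the extracted features and source labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

X = np.random.rand(200, 50)          # placeholder feature matrix
y = np.random.randint(0, 2, 200)     # 1 = simple, 0 = ordinary (placeholder labels)

for name, clf in [("maxent", LogisticRegression(max_iter=1000)),
                  ("svm", LinearSVC())]:
    scores = cross_val_score(clf, X, y, cv=10)
    print(name, scores.mean())
```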
{
"text": "For the first task of document classification, we saw at least 90% mean accuracy with each of the classifiers. Using all features, SVM and Maximum Entropy performed almost perfectly. The online classifiers, CW and MIRA, displayed similar preference to the larger feature sets, lexical and part-of-speech counts. When using just lexical counts, both CW and MIRA were more accurate than the SVM and Maximum Entropy (reporting 92.95% and 86.55% versus 75.00% and 78.75%, respectively). For all classifiers, the models using the counts of part-ofspeech tags did better than classifiers trained on the surface features and on the parse features. This is surprising, since we expected the surface features to be robust predictors of the document class, mainly because the average ordinary Wikipedia article in our corpus is about four times longer than the average Simple Wikipedia article. We also expected the syntactic features to be a strong predictor of the document class since more complicated parse trees correspond to more complex sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "For each classifier, we looked at its performance without its less predictive feature categories, and for CW the inclusion of the surface features decreased performance noticeably. The best CW classifiers used either part-of-speech and lexical features (95.95%) or just part-of-speech features (95.80%). The parse features, which by themselves only yielded 64.60% accuracy, when combined with part-of-speech and lexical features showed high accuracy as well (95.60%). MIRA also showed higher accuracy when surface features were not included (from 97.50% mean accuracy with all features to 97.75% with all but surface features).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The best SVM classifier used all four feature classes, but had nearly as good accuracy with just part-of-speech counts and surface features (99.85% mean accuracy) and with surface and parse features (also 99.85% accuracy). Maximum Entropy, on the other hand, improved slightly when the lexical and parse features were not included (from 99.45% mean accuracy with all feature classes to 99.55%).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We examined the weights learned by the classifiers to determine the features that were effective for learning. We selected the features with the highest absolute weight for a MIRA classifier trained on all categories. The most predictive features for document classification were the sentence length (shorter favors Simple), the length of the longest NP (longer favors ordinary), the number of sentences (more favors ordinary), the average number of prepositional phrases and noun phrases per sentence, the height of the parse tree, and the number of adjectives. The most predictive features for sentence classification were the ratio of different tree non-terminals (VP, S, NP, S-Bar) to the number of words in the sentence, the ratio of the total height of the productions in a tree to the height of the tree, and the extent to which the tree was right branching. These features are consistent with the rules described above for simple text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Next we looked at a pairwise comparison of how the classifiers perform when trained on one category and tested on another. Surprisingly, the results were robust across categories, across classifiers. Using the best feature class as determined in the first task, the average drop in accuracy when trained on each domain was very low across all classifiers (the mean accuracy rate of each cross-category classification was at least 90%). Table 6 shows the mean change in accuracy from CW models trained and tested on the same category to the models trained and tested on different categories. When trained on the Everyday Life category, the model actually showed a mean increase in accuracy when predicting other categories.",
"cite_spans": [],
"ref_spans": [
{
"start": 436,
"end": 443,
"text": "Table 6",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "In the final task, we trained binary classifiers to identify simple sentences in isolation. The mean accuracy was lower for this task than for the document classification task, and we anticipated individual sentences to be more difficult to classify because each sentence only carries a fraction of the information held in an entire document. It is common to have short, simple sentences as part of ordinary English text, although they will not make up the whole. However results were still promising, with between 72% and 80% mean accuracy. With CW and MIRA, the classifiers benefited from training on all categories, while MaxEnt and SVM in-category and all-category models achieved similar accuracy levels, but the results on cross-category tests were more variable than in the document classification. There was also no consistency across features and classifiers with regard to category-to-category classification. Overall the results of the sentence classification task are encouraging and show promise for detecting individual simple sentences taken out of context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The classifiers performed robustly for the documentlevel classification task, although the corpus itself may have biased the model due to the longer average length of ordinary documents, which we tried to address by filtering out articles with only one or two sentences. Cursory inspection suggests that there is overlap between many Simple Wikipedia articles and their corresponding ordinary English articles, since a large number of Simple Wikipedia documents appear to be generated directly from the English Wikipedia articles with more complicated subsections of the documents omitted from the Simple article.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.1"
},
{
"text": "The sentence classification task could be improved by better labeling of sentences. In these experiments, we assumed that every sentence in an ordinary document would be ordinary (i.e., not simple) and vice versa for simple documents. However it is not the case that ordinary English text contains only complicated sentences. In future research we can use human annotated sentences for building the classifiers. The features we used in this research suggest that simple text is created from categorical lexical and syntactic replacement, but more complicated, technical, or detailed oriented text may require more rewriting, and would be of more interest in future research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6.1"
},
{
"text": "We have demonstrated the ability to automatically identify texts as either simple or ordinary at both the document and sentence levels using a variety of features based on the word usage and grammatical structures in text. Our statistical analysis has identified relevant features for this task accessible to computational systems. Immediate applications of the classifiers created in this research for text simplification include editing tools that can identify parts of a text that may be difficult to understand or for word processors, in order to notify writers of complicated sentences in real time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Using this initial exploration of Simple Wikipedia, we plan to continue working in a number of directions. First, we will explore additional robust indications of text difficulty. For example, Alu\u00edsio et al. (2008) claim that sentences that are easier to read are also easier to parse, so the entropy of the parser or confidence in the output may be indicative of a text's difficulty. Additionally, language models trained on large corpora can assign probability scores to texts, which may indicate text difficulty. Of particular interest are syntactic language models that incorporate some of the syntactic observations in this paper (Filimonov and Harper, 2009) .",
"cite_spans": [
{
"start": 193,
"end": 214,
"text": "Alu\u00edsio et al. (2008)",
"ref_id": "BIBREF0"
},
{
"start": 635,
"end": 663,
"text": "(Filimonov and Harper, 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Our next goal will be to look at parallel sentences to learn rules for simplifying text. One of the advantages of the Wikipedia collection is the parallel articles in ordinary English Wikipedia and Simple Wikipedia. While the content of the articles can differ, these are excellent examples of comparable texts that can be useful for learning simplification rules. Such learning can draw from machine translation, which learns rules that translate between languages. The related task of paraphrase extraction could also provide comparable phrases, one of which can be identified as a simplified version of the other (Bannard and Callison-Burch, 2005 ). An additional resource available in Simple Wikipedia is the flagging of articles as not simple. By examining the revision history of articles whose flags have been changed, we can discover changes that simplified texts. Initial work on this topic has automatically learned which edits correspond to text simplifications (Yatskar et al., 2010) .",
"cite_spans": [
{
"start": 629,
"end": 649,
"text": "Callison-Burch, 2005",
"ref_id": "BIBREF1"
},
{
"start": 973,
"end": 995,
"text": "(Yatskar et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "Text simplification may necessitate the removal of whole phrases, sentences, or even paragraphs, as, according to the writing guidelines for Wikipedia Simple (Wikipedia, 2009) , the articles should not exceed a specified length, and some concepts may not be explainable using the lexicon of Basic English. In some situations, adding new text to explain confusing but crucial points may serve to aid the reader, and text generation needs to be further investigated to make text simplification an automatic process.",
"cite_spans": [
{
"start": 151,
"end": 175,
"text": "Simple (Wikipedia, 2009)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "7"
},
{
"text": "http://simple.wikipedia.org/wiki/Wikipedia: Simple_English_Wikipedia",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Wikipedia advocates using words that appear on the BE850 list. Ogden also provides extended Basic English vocabulary lists, totaling 2000 Basic English words, but these words tend to be more specialized or domain specific. For the purposes of this study only words in BE850 were used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Mary Harper for her help in parsing our corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Brazilian portuguese automatic text simplification systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Alu\u00edsio",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "T",
"middle": [
"A S"
],
"last": "Pardo",
"suffix": ""
},
{
"first": "E",
"middle": [
"G"
],
"last": "Maziero",
"suffix": ""
},
{
"first": "R",
"middle": [
"P M"
],
"last": "Fortes",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Alu\u00edsio, L. Specia, T.A.S. Pardo, E.G. Maziero, and R.P.M. Fortes. 2008. Brazilian portuguese automatic text simplification systems. In DocEng.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Paraphrasing with bilingual parallel corpora",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Bannard",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2005,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Bannard and Chris Callison-Burch. 2005. Para- phrasing with bilingual parallel corpora. In Associa- tion for Computational Linguistics (ACL).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "V",
"middle": [
"J D"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "S",
"middle": [
"A D"
],
"last": "Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational linguistics",
"volume": "22",
"issue": "1",
"pages": "39--71",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A.L. Berger, V.J.D. Pietra, and S.A.D. Pietra. 1996. A maximum entropy approach to natural language pro- cessing. Computational linguistics, 22(1):39-71.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An unsupervised approach to biography production using Wikipedia",
"authors": [
{
"first": "F",
"middle": [],
"last": "Biadsy",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Hirschberg",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Filatova",
"suffix": ""
},
{
"first": "Llc",
"middle": [],
"last": "Inforsense",
"suffix": ""
}
],
"year": 2008,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Biadsy, J. Hirschberg, E. Filatova, and LLC InforS- ense. 2008. An unsupervised approach to biography production using Wikipedia. In Association for Com- putational Linguistics (ACL).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "NLTK: The natural language toolkit",
"authors": [
{
"first": "S",
"middle": [],
"last": "Bird",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Loper",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the ACL demonstration session",
"volume": "",
"issue": "",
"pages": "214--217",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Bird and E. Loper. 2004. NLTK: The natural lan- guage toolkit. Proceedings of the ACL demonstration session, pages 214-217.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Reevaluating the role of BLEU in machine translation research",
"authors": [
{
"first": "C",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2006,
"venue": "European Conference for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "249--256",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Re- evaluating the role of BLEU in machine translation re- search. In European Conference for Computational Linguistics (EACL), volume 2006, pages 249-256.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Cohesive generation of syntactically simplified newspaper text",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Canning",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tait",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Archibald",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Crawley",
"suffix": ""
}
],
"year": 2000,
"venue": "Lecture notes in computer science",
"volume": "",
"issue": "",
"pages": "145--150",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Canning, J. Tait, J. Archibald, and R. Crawley. 2000. Cohesive generation of syntactically simplified news- paper text. Lecture notes in computer science, pages 145-150.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Simplifying text for languageimpaired readers",
"authors": [
{
"first": "J",
"middle": [],
"last": "Carroll",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Minnen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Pearce",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Canning",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tait",
"suffix": ""
}
],
"year": 1999,
"venue": "European Conference for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "269--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Carroll, G. Minnen, D. Pearce, Y. Canning, S. Devlin, and J. Tait. 1999. Simplifying text for language- impaired readers. In European Conference for Com- putational Linguistics (EACL), pages 269-270.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Predicting the fluency of text with shallow structural features",
"authors": [
{
"first": "J",
"middle": [],
"last": "Chae",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "European Conference for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "139--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Chae and A. Nenkova. 2009. Predicting the fluency of text with shallow structural features. In European Conference for Computational Linguistics (EACL), pages 139-147.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Coarse-to-fine n-best parsing and MaxEnt discriminative reranking",
"authors": [
{
"first": "E",
"middle": [],
"last": "Charniak",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2005,
"venue": "Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Charniak and M. Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In As- sociation for Computational Linguistics (ACL), page 180. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Shai Shalev-Shwartz, and Yoram Singer",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
},
{
"first": "Shai",
"middle": [],
"last": "Shalev-Shwartz",
"suffix": ""
},
{
"first": "Yoram",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. Journal of Machine Learning Research (JMLR).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic sentence simplification for subtitling in Dutch and English",
"authors": [
{
"first": "W",
"middle": [],
"last": "Daelemans",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "H\u00f6thker",
"suffix": ""
},
{
"first": "E Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
}
],
"year": 2004,
"venue": "Conference on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "1045--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Daelemans, A. H\u00f6thker, and E Tjong Kim Sang. 2004. Automatic sentence simplification for subtitling in Dutch and English. In Conference on Language Re- sources and Evaluation (LREC), pages 1045-1048.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Wikipedia XML Corpus. SIGIR Forum",
"authors": [
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Gallinari",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ludovic Denoyer and Patrick Gallinari. 2006. The Wikipedia XML Corpus. SIGIR Forum.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Helping aphasic people process online information",
"authors": [
{
"first": "S",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Unthank",
"suffix": ""
}
],
"year": 2006,
"venue": "SIGACCESS Conference on Computers and Accessibility",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Devlin and G. Unthank. 2006. Helping aphasic peo- ple process online information. In SIGACCESS Con- ference on Computers and Accessibility.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"html": null,
"num": null,
"content": "
: Comparable sentences from the ordinary |
Wikipedia and Simple Wikipedia entry for \"Stephen |
Hawking.\" |
",
"text": "",
"type_str": "table"
},
"TABREF1": {
"html": null,
"num": null,
"content": "",
"text": "A mapping of the Penn Treebank tags to a coarse tagset used to generate features.",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"content": "",
"text": "",
"type_str": "table"
},
"TABREF5": {
"html": null,
"num": null,
"content": "Feature class | Features |
Lexical | 522,153 |
Part of speech | 2478 |
tags | 45 |
tag pairs | 1972 |
tags (reduced) | 22 |
tag pairs (reduced) | 484 |
Parse | 11 |
Surface | 9 |
",
"text": "The number of examples available in each category. To compare experiments in each category we used at most 2000 instances in each experiment.",
"type_str": "table"
},
"TABREF6": {
"html": null,
"num": null,
"content": "",
"text": "The number of features in each feature class.",
"type_str": "table"
},
"TABREF8": {
"html": null,
"num": null,
"content": "Classifier | All features | POS | Surface | Parse |
CW | 73.20% | 74.45% | 57.40% | 62.25% |
MIRA | 71.15% | 72.65% | 56.50% | 56.45% |
MaxEnt | 80.80% | 77.65% | 71.30% | 69.00% |
SVM | 77.00% | 76.40% | 72.55% | 73.00% |
",
"text": "Mean accuracy of all classifiers on the document classification task.",
"type_str": "table"
},
"TABREF9": {
"html": null,
"num": null,
"content": "Category | Mean accuracy change |
Everyday life | +1.42% |
Geography | \u22124.29% |
History | \u22121.01% |
Literature | \u22121.84% |
Media | \u22120.56% |
People | \u22120.20% |
Religion | \u22120.56% |
Science | \u22122.50% |
",
"text": "Mean accuracy of all classifiers on the sentence classification task.",
"type_str": "table"
},
"TABREF10": {
"html": null,
"num": null,
"content": "",
"text": "Mean accuracy drop for a CW model trained on one category and tested on all other categories. Negative numbers indicate a decrease in performance.",
"type_str": "table"
}
}
}
}