{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:33:18.199241Z" }, "title": "From Web Crawl to Clean Register-Annotated Corpora", "authors": [ { "first": "Veronika", "middle": [], "last": "Laippala", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "" }, { "first": "Samuel", "middle": [], "last": "R\u00f6nnqvist", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "" }, { "first": "Saara", "middle": [], "last": "Hellstr\u00f6m", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "" }, { "first": "Juhani", "middle": [], "last": "Luotolahti", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "" }, { "first": "Liina", "middle": [], "last": "Repo", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "" }, { "first": "Anna", "middle": [], "last": "Salmela", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "" }, { "first": "Valtteri", "middle": [], "last": "Skantsi", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "valtteri.skantsi@utu.fi" }, { "first": "Sampo", "middle": [], "last": "Pyysalo", "suffix": "", "affiliation": { "laboratory": "Turku NLP Group", "institution": "University of Turku", "location": { "settlement": "Turku", "country": "Finland" } }, "email": "sampo.pyysalo@utu.fi" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The web presents unprecedented opportunities for large-scale collection of text in many languages. However, two critical steps in the development of web corpora remain challenging: the identification of clean text from source HTML and the assignment of genre or register information to the documents. In this paper, we evaluate a multilingual approach to this end. Our starting points are the Swedish and French Common Crawl datasets gathered for the 2017 CoNLL shared task, particularly the URLs. We 1) fetch HTML pages based on the URLs and run boilerplate removal, 2) train a classifier to further clean out undesired text fragments, and 3) annotate text registers. We compare boilerplate removal against the CoNLL texts, and find an improvement. For the further cleaning of undesired material, the best results are achieved using Multilingual BERT with monolingual fine-tuning. However, our results are promising also in a cross-lingual setting, without fine-tuning on the target language. Finally, the register annotations show that most of the documents belong to a relatively small set of registers which are relatively similar in the two languages. 
A number of additional flags in the annotation are, however, necessary to reflect the wide range of linguistic variation associated with the documents.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The web presents unprecedented opportunities for large-scale collection of text in many languages. However, two critical steps in the development of web corpora remain challenging: the identification of clean text from source HTML and the assignment of genre or register information to the documents. In this paper, we evaluate a multilingual approach to this end. Our starting points are the Swedish and French Common Crawl datasets gathered for the 2017 CoNLL shared task, particularly the URLs. We 1) fetch HTML pages based on the URLs and run boilerplate removal, 2) train a classifier to further clean out undesired text fragments, and 3) annotate text registers. We compare boilerplate removal against the CoNLL texts, and find an improvement. For the further cleaning of undesired material, the best results are achieved using Multilingual BERT with monolingual fine-tuning. However, our results are promising also in a cross-lingual setting, without fine-tuning on the target language. Finally, the register annotations show that most of the documents belong to a relatively small set of registers which are relatively similar in the two languages. A number of additional flags in the annotation are, however, necessary to reflect the wide range of linguistic variation associated with the documents.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Traditionally, linguistic corpora are collected in order to represent a language or a specific part of it (McEnery and Wilson, 1996; Biber et al., 1998; Kyt\u00f6 and Ludeling, 2008) . Typically, in order to do so, corpora are composed of texts chosen to represent different genres or registers, that is, situationally defined text varieties such as news, blogs or discussion forum comments (Biber, 1988) . Many web-based language resources diverge from this process by not being based on detailed compilation criteria (see, however, Sch\u00e4fer (2016c)). Instead of the collection of coherent, high-quality text, the construction of web language resources commonly emphasizes gathering as much data as possible, for instance by using a dedicated crawl or extracting data from existing crawl-based datasets, such as Common Crawl 1 . As crawling and compilation pipelines are based on automatic processes, the resulting data can contain boilerplate texts, machine translations, and even text in languages other than that targeted in the corpus construction. Furthermore, there is typically no information on the kinds of registers that the web language resources represent. Although both linguistic and NLP efforts have achieved significant advances using web data (e.g. Mikolov et al. (2013) , Bojanowski et al. (2017) , Yang et al. (2019) ), for a number of end uses, better structured web language resources with clean, full texts and register information would be essential to realizing their full potential. Currently, a number of large web-crawled datasets are available. However, resources emphasizing the collection of clean texts, such as WaCky (Baroni et al., 2009) and COW (Sch\u00e4fer, 2016b) , represent only a limited number of languages. 
The ones with a more extensive selection of languages, such as OSCAR (Ortiz Su\u00e1rez et al., 2019) , have not gone through detailed text cleaning processes. Moreover, register information or further NLP processing steps such as syntactic analysis are typically not included at all. [Footnote 1: https://commoncrawl.org] In this paper, we present efforts toward the automatic creation of multilingual web-based language resources that consist of coherent, clean texts and include similar metadata to what traditional language resources have, in particular registers identified using a detailed, systematic register hierarchy. By coherent texts, we understand texts where each text part is linked to the others to form a full, meaningful whole (Halliday, 1976) . Our starting point is the Common Crawl dataset gathered for the 2017 CoNLL shared task (Ginter et al., 2017) . Altogether, the dataset includes 56 languages, but in this paper, we focus on the Swedish and French collections. We 1) fetch pages from the URLs found in the collections and run boilerplate removal on the raw HTML, 2) train a classifier to further remove undesired text fragments that may remain, and 3) annotate text registers. The registers, such as News report or Description with intent to sell, are annotated using the taxonomy presented for English by Egbert et al. (2015) and also applied in Finnish by Laippala et al. (2019) . To evaluate the need for boilerplate removal, we compare three versions of the data that have gone through different cleaning processes: 1) texts as included in the CoNLL collections, 2) raw texts after simple removal of markup from the fetched HTML pages, and 3) texts from the HTML pages cleaned of boilerplate and other unwanted elements using the web scraping tool Trafilatura 2 . The process is described in Figure 1 . We make all the resources introduced in this effort freely available under open licences at https://github.com/TurkuNLP/WAC-XII.", "cite_spans": [ { "start": 106, "end": 132, "text": "(McEnery and Wilson, 1996;", "ref_id": "BIBREF14" }, { "start": 133, "end": 152, "text": "Biber et al., 1998;", "ref_id": "BIBREF3" }, { "start": 153, "end": 177, "text": "Kyt\u00f6 and Ludeling, 2008)", "ref_id": "BIBREF12" }, { "start": 386, "end": 399, "text": "(Biber, 1988)", "ref_id": "BIBREF4" }, { "start": 1260, "end": 1281, "text": "Mikolov et al. (2013)", "ref_id": "BIBREF15" }, { "start": 1284, "end": 1308, "text": "Bojanowski et al. (2017)", "ref_id": "BIBREF5" }, { "start": 1311, "end": 1329, "text": "Yang et al. (2019)", "ref_id": "BIBREF25" }, { "start": 1643, "end": 1664, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF1" }, { "start": 1673, "end": 1689, "text": "(Sch\u00e4fer, 2016b)", "ref_id": "BIBREF21" }, { "start": 1807, "end": 1834, "text": "(Ortiz Su\u00e1rez et al., 2019)", "ref_id": null }, { "start": 2465, "end": 2481, "text": "(Halliday, 1976)", "ref_id": "BIBREF10" }, { "start": 2571, "end": 2592, "text": "(Ginter et al., 2017)", "ref_id": "BIBREF8" }, { "start": 3054, "end": 3074, "text": "Egbert et al. (2015)", "ref_id": "BIBREF7" }, { "start": 3106, "end": 3128, "text": "Laippala et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 3544, "end": 3552, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Web-based language resources are widely applied both in linguistic and NLP research. 
[Figure 1: Text preprocessing and annotation process. Three versions of text are manually evaluated: 1) texts taken directly from the CoNLL version of Common Crawl, which have undergone a cleaning process, 2) raw texts extracted from HTML based on the CoNLL URLs, and 3) texts extracted from the CoNLL URLs by the boilerplate removal system (Trafilatura).] The WaCky Corpus Collection (Baroni et al., 2009) , with more than a billion words in English, French, Italian and German, was one of the earliest ones and was perhaps mainly targeted at research questions in linguistics. Similarly, the COW corpora (Sch\u00e4fer, 2016b) are linguistically processed and include billions of words in six European languages. Common Crawl is a free and openly available web crawl maintained by the Common Crawl Foundation. The dataset is available on the Amazon EC2 cloud as both plain text and HTML, and totals petabytes in size. Lately, the Common Crawl dataset has been used to gather text corpora for a number of NLP projects, such as the recently introduced massive multilingual corpus OSCAR (Ortiz Su\u00e1rez et al., 2019) . An important part of processing web-based datasets for use in linguistic and NLP research is the extraction of the main body of the text and the removal of boilerplate, such as lists, links and other unwanted material. These elements decrease the quality of the data, as they break the coherence of the texts by not forming full sentences and by introducing isolated, repetitive segments such as copyright notices. In the existing web-based language resources, the cleaning process is performed in different ways. The WaCky corpora (Baroni et al., 2009) use regular expressions and heuristic rules to remove boilerplate. The heuristics are based on the idea that HTML tags co-occur frequently with boilerplate, whereas document parts with a low HTML tag density often belong to the main text body. The COW corpora (Sch\u00e4fer et al., 2013) are processed with a detailed pipeline comprising a tool that classifies paragraphs as boilerplate or not (Sch\u00e4fer, 2016a) and another that classifies entire documents as coherent text or not (Sch\u00e4fer et al., 2013) . These are based on manually annotated data and a document-level unsupervised method that evaluates text quality based on short and very frequent words. To create the monolingual OSCAR subcorpora, Ortiz Su\u00e1rez et al. (2019) processed Common Crawl data using a pipeline based on the system by Grave et al. (2018) , which included language detection using fastText (Joulin et al., 2016) . Thus, several well-developed web corpus resources and ready-made solutions for boilerplate removal and text cleaning exist. In contrast, the addition of register information to web-scale corpora is not yet common practice and involves many challenges. A first challenge has been the lack of annotated corpora that represent all the registers found online. Because of this, there has been no training data available to develop web register identification systems that could be applied to classify web-based language resources. Two large corpora with register annotations exist for English, the Leeds Web Genre Corpus (Asheghi et al., 2016) and the Corpus of Online Registers of English (CORE) (Egbert et al., 2015) . A small collection of online registers has also been released for Finnish (Laippala et al., 2019) . Second, another challenge with online registers is that online language use cannot necessarily be described in terms of discrete register categories. 
For instance, an online text might simultaneously have characteristics of a news article and a persuasive text. Thus, discrete register classification systems where each document belongs to exactly one register category do not necessarily suit web datasets very well. To solve this, the CORE corpus includes hybrid register categories that combine several register labels, such as narrative+opinion (see Biber and Egbert (2018) ). Another solution is suggested by Sharoff (2018) , who analyzes registers by describing texts based on proportions of dimensions, such as argumentative or hard news.", "cite_spans": [ { "start": 113, "end": 134, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF1" }, { "start": 504, "end": 517, "text": "(Trafilatura)", "ref_id": null }, { "start": 676, "end": 692, "text": "(Sch\u00e4fer, 2016b)", "ref_id": "BIBREF21" }, { "start": 1149, "end": 1176, "text": "(Ortiz Su\u00e1rez et al., 2019)", "ref_id": null }, { "start": 1707, "end": 1728, "text": "(Baroni et al., 2009)", "ref_id": "BIBREF1" }, { "start": 1985, "end": 2007, "text": "(Sch\u00e4fer et al., 2013)", "ref_id": "BIBREF19" }, { "start": 2108, "end": 2123, "text": "(Sch\u00e4fer, 2016a", "ref_id": "BIBREF20" }, { "start": 2197, "end": 2219, "text": "(Sch\u00e4fer et al., 2013)", "ref_id": "BIBREF19" }, { "start": 2514, "end": 2533, "text": "Grave et al. (2018)", "ref_id": "BIBREF9" }, { "start": 2585, "end": 2606, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF11" }, { "start": 3223, "end": 3245, "text": "(Asheghi et al., 2016)", "ref_id": "BIBREF0" }, { "start": 3299, "end": 3320, "text": "(Egbert et al., 2015)", "ref_id": "BIBREF7" }, { "start": 3397, "end": 3420, "text": "(Laippala et al., 2019)", "ref_id": "BIBREF13" }, { "start": 3978, "end": 4001, "text": "Biber and Egbert (2018)", "ref_id": "BIBREF2" }, { "start": 4038, "end": 4052, "text": "Sharoff (2018)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 169, "end": 177, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Related work", "sec_num": "2." }, { "text": "In this section, we present the CoNLL data we use as source, the preprocessing steps we applied, and the annotation processes we performed. The overall workflow is presented in Figure 1 . [Table 2: Example of text quality annotation for Swedish data. Lines marked with the label 1 are judged to be part of the main body of the text.]", "cite_spans": [], "ref_spans": [ { "start": 177, "end": 185, "text": "Figure 1", "ref_id": null }, { "start": 188, "end": 195, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data and annotation", "sec_num": "3." }, { "text": "The source data for our study is gathered from the Common Crawl-based dataset prepared for the 2017 CoNLL shared task (Ginter et al., 2017) . The Common Crawl data is available on the Amazon cloud, where the data collection and language detection were carried out. The Compact Language Detect 2 (CLD2) language detector 6 was applied in processing due to its speed and the availability of Python bindings. For each processed plain text input file, the first 100 000 tokens per language were kept, and deduplication based on URLs was performed. The resulting dataset is composed of altogether 56 languages and nearly 100 billion words. 
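To make this filtering step concrete, the following minimal sketch shows how language detection and URL-based deduplication of the kind described above can be implemented; it assumes the pycld2 bindings for CLD2 and is a simplified illustration, not the actual CoNLL pipeline code.

```python
# Illustrative sketch (assuming the pycld2 bindings; not the actual
# CoNLL 2017 pipeline code): CLD2 language detection, URL-based
# deduplication, and a per-language token cap, as described above.
import pycld2 as cld2

TOKEN_CAP = 100_000             # first 100,000 tokens kept per language
seen_urls = set()               # for URL-based deduplication
tokens_kept = {"fr": 0, "sv": 0}

def keep_document(url, text):
    """Return the language code if the document should be kept, else None."""
    if url in seen_urls:
        return None
    seen_urls.add(url)
    try:
        is_reliable, _, details = cld2.detect(text)
    except cld2.error:
        return None               # undecodable or otherwise broken input
    lang = details[0][1]          # top language as an ISO 639-1 code
    if not is_reliable or lang not in tokens_kept:
        return None
    if tokens_kept[lang] >= TOKEN_CAP:
        return None               # cap reached for this language
    tokens_kept[lang] += len(text.split())
    return lang
```
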
The statistics of the French and Swedish collections used in this study are summarized in Table 1 .", "cite_spans": [ { "start": 118, "end": 139, "text": "(Ginter et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 717, "end": 724, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Source data", "sec_num": "3.1." }, { "text": "We evaluate the quality of the texts by manually annotating three text versions that have gone through different cleaning processes: 1) texts as they are included in the CoNLL data, 2) raw texts extracted from the HTML source without boilerplate removal, and 3) texts extracted from HTML and processed with Trafilatura to remove boilerplate material. The raw texts are included in order to assess whether any good text content may have been lost and to provide an up-to-date point of reference for Trafilatura, as some of the online documents may have changed after the collection of the original source data in 2017. The evaluation was done by 1) selecting from the CoNLL data 40 documents (20 in Swedish and 20 in French) with active URLs and 2) manually annotating the quality of all three versions of these documents. The annotation was done on a line-by-line basis, coding which lines are part of the coherent texts and which are part of boilerplate. 7 To define boilerplate, we followed Sch\u00e4fer (2016a), according to whom boilerplate is all material that \"remains after markup stripping, and which does not belong to one of those blocks of content on the web page that contain coherent text.\"", "cite_spans": [ { "start": 942, "end": 943, "text": "7", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Text quality annotation", "sec_num": "3.2." }, { "text": "The annotation was performed by four annotators in total. Annotations were done individually, but difficult cases were discussed jointly with an annotation coordinator. Although many lines and text segments are easy to define as not belonging to the coherent text, the quality annotation was by no means a trivial task. Many lines could have been defined as either coherent text or boilerplate. Examples of undesired lines include links and lists of words or headlines that were not connected to body text, e.g., when serving as links to other pages. [Footnote 6: https://github.com/CLD2Owners/cld2] [Footnote 7: Lines correspond broadly to blocks of text uninterrupted by tags in the source HTML, such as titles or paragraphs.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Text quality annotation", "sec_num": "3.2." }, { "text": "Automatically generated text was similarly excluded, e.g., headlines in a banner and phrases such as visa mer 'show more' and f\u00e4ll ihop 'hide'. Table 2 shows examples of lines annotated as belonging to the text and lines annotated as undesired material. The first line is a headline describing the text to come and its topic: the visit of Princess Madeleine of Sweden. As this headline is not followed by other headlines, it is considered as belonging to the coherent text. The next two lines, in turn, are both links to other parts of the website. They do not belong to the coherent text and are thus annotated as undesired material to be rejected.", "cite_spans": [], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Text quality annotation", "sec_num": "3.2." }, { "text": "The register-annotated documents are sampled from the CoNLL data. 
The register annotation follows the register taxonomy presented for the English CORE corpus by Egbert et al. (2015) and for Finnish by Laippala et al. (2019) . The advantage of this taxonomy is that it was developed in a data-driven manner and covers the full range of registers and linguistic variation found online. Furthermore, as discussed in Section 2., the annotation allows the assignment of multiple register labels to one document, which ensures that the annotation can reflect the full range of language use in web documents. The taxonomy is hierarchical, with eight main register classes carrying functional labels. These are divided into a number of sub-register categories that are perhaps more intuitive, such as News report and Review. The taxonomy is presented in Table 3 . In the English CORE, for which this taxonomy was developed, each document was annotated by four coders, and hybrid annotations resulted from consistent disagreements among the coders. In our study, we did not have the resources for such an extensive annotation process. Instead, documents were first double-annotated, and once a sufficient level of agreement and confidence between the coders was reached, the process was changed to single annotation. However, difficult cases were always discussed and resolved jointly. In our setting, during the annotation, annotators could select several register labels for a document when necessary to fully characterize it. This allows the direct annotation of hybrid documents even by a single annotator. Moreover, if the document could not be described by a specific sub-register label, annotators could select a more general main register label only. The annotations were done using a custom annotation tool. The tool provides annotators with a wide selection of flags that can be toggled to identify additional aspects of the texts. The set of flags was developed during the annotation with the objective of marking text properties that may have an effect on the further analysis of the data. For instance, these include untypical for the register and multiple texts.", "cite_spans": [ { "start": 161, "end": 181, "text": "Egbert et al. (2015)", "ref_id": "BIBREF7" }, { "start": 201, "end": 223, "text": "Laippala et al. (2019)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 843, "end": 850, "text": "Table 3", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Register annotation", "sec_num": "3.3." }, { "text": "We next describe our approach to training and evaluating methods for further cleaning the texts after boilerplate removal. We experiment with two supervised machine learning methods:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Classifiers for further cleaning", "sec_num": "4." }, { "text": "BERT (Devlin et al., 2018 ) is a deep transfer learning approach based on the Transformer architecture (Vaswani et al., 2017) . We apply the Multilingual BERT (mBERT) model released by Google 8 , which has been pre-trained on a combination of Wikipedia texts in 104 languages, including French and Swedish. In addition to monolingual classification in the two languages, we also apply mBERT in multilingual and cross-lingual training setups. Following Devlin et al. (2018), we add a final classification layer to the pre-trained transformer stack, and fine-tune all model weights. fastText (Joulin et al., 2016) is a text classification tool emphasizing computational efficiency, making it a popular choice for machine learning on web-scale data. 
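As an illustration of the two setups, the sketch below fine-tunes mBERT with a binary classification head and trains a fastText model on the same line-labelled data. It assumes the Hugging Face transformers and fasttext packages and is a minimal sketch, not the exact code used in our experiments; the hyperparameter values shown are those reported in Section 5.2., and the example lines are invented.

```python
# Minimal sketch (not our exact experiment code): one fine-tuning step
# for mBERT with a binary classification head, and a fastText baseline,
# on the line-cleaning task (label 1 = accepted, 0 = rejected).
import torch
import fasttext
from transformers import AutoTokenizer, AutoModelForSequenceClassification

lines = ["This paragraph is part of the coherent main text.",
         "Show more",
         "The next sentence continues the same story.",
         "Copyright 2017. All rights reserved."]
labels = [1, 0, 1, 0]

# mBERT: a classification layer on top of the pre-trained transformer
# stack; all weights are fine-tuned (cf. Devlin et al., 2018).
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2.5e-6)

batch = tokenizer(lines, truncation=True, max_length=192,
                  padding=True, return_tensors="pt")
loss = model(**batch, labels=torch.tensor(labels)).loss
loss.backward()
optimizer.step()

# fastText: the supervised mode reads one "__label__X text" line per
# training example from a plain text file.
with open("lines.train", "w", encoding="utf-8") as f:
    for text, label in zip(lines, labels):
        f.write(f"__label__{label} {text}\n")
ft_model = fasttext.train_supervised("lines.train", epoch=30, wordNgrams=3)
print(ft_model.predict("Show more"))
```
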
We apply fastText as a baseline method using the supervised text classification facilities of the tool. We train and evaluate BERT and fastText in the basic binary classification setting where each line is labelled as either 0 (rejected) or 1 (accepted). We divide both the French and the Swedish datasets into training, development, and test subsets on the document level, so that text drawn from a single document is only included in exactly one of the subsets. We perform a random stratified split so that the positive/negative distribution of each subset roughly matches that of the whole dataset (max. 2% point deviation). The test subsets were held out during method development and parameter selection. For BERT, we perform a grid search on maximum sequence length, learning rate, batch size and number of training epochs, while evaluating on the development set. For fastText, we select the maximum number of word n-grams and the number of training epochs using grid search on the development data. We additionally evaluate the effect of initializing the word vectors for the method using pre-trained language-specific word vectors (Grave et al., 2018) . We evaluate classification performance primarily in terms of accuracy, i.e., the proportion of lines that are predicted to have the correct class. We additionally report precision and recall, summarizing performance across different classification thresholds with precision-recall curves.", "cite_spans": [ { "start": 5, "end": 25, "text": "(Devlin et al., 2018", "ref_id": "BIBREF6" }, { "start": 103, "end": 125, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF24" }, { "start": 588, "end": 609, "text": "(Joulin et al., 2016)", "ref_id": "BIBREF11" }, { "start": 1886, "end": 1906, "text": "(Grave et al., 2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Classifiers for further cleaning", "sec_num": "4." }, { "text": "In this section, we present the results of the evaluations. We start with the analysis of the text quality based on the manual annotations, then move on to the machine learning experiments to further clean the texts from undesired material, and finally analyze the register annotations. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results", "sec_num": "5." }, { "text": "The results on the manual evaluation of the text quality are presented in Table 4 for the CoNLL texts, in Table 5 for the raw text, and in Table 6 for the text processed with Trafilatura to remove boilerplate (see Section 3.2.). In the source CoNLL data, 48% of the words in French and 22% of the words in Swedish were evaluated as rejected, i.e., they appeared on lines that were not considered to belong to the coherent texts. On the line level, the proportions were even more drastic: 69% of the lines in Swedish and 77% in French were marked as rejected. These findings suggest that the source texts may be too noisy for many purposes without further cleaning and that the quality of the French CoNLL data is somewhat lower than that of the Swedish data. Moreover, the difference between the word-level and line-level proportions indicates that length alone is already a strong signal of whether a line belongs to the coherent text. This seems natural, as many of the rejected lines, such as those enumerating links, are very short. In the raw text versions extracted from HTML, the proportion of words evaluated as not belonging to the coherent texts was 64% in French and 61% in Swedish. On the line level, the rejected proportions were approximately 90% for both languages. 
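To make the comparison between the raw and boilerplate-removed versions concrete, the following sketch shows how the two text versions can be produced from a single page. The Trafilatura calls follow that library's documented API, while BeautifulSoup is an assumed stand-in for the simple markup stripping, not necessarily the tool used for the original data, and the URL is hypothetical.

```python
# Sketch of producing the two extracted versions compared in this
# section. Trafilatura calls follow the library's documented API;
# BeautifulSoup is an assumed stand-in for simple markup stripping.
import trafilatura
from bs4 import BeautifulSoup

url = "https://example.org/article"      # hypothetical URL
html = trafilatura.fetch_url(url)        # returns the HTML, or None

if html is not None:
    # Raw version: markup stripped, boilerplate left in place.
    raw_text = BeautifulSoup(html, "html.parser").get_text(separator="\n")
    # Cleaned version: Trafilatura's boilerplate removal; returns None
    # if no coherent main content is found.
    clean_text = trafilatura.extract(html)
    if clean_text:
        ratio = len(clean_text.split()) / max(1, len(raw_text.split()))
        print(f"Words kept after boilerplate removal: {ratio:.0%}")
```
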
Overall, despite its issues, the CoNLL data is clearly cleaner and of better quality than text extracted directly from HTML. For Trafilatura, the proportions of rejected material were clearly lower than in the other settings. On the word level, the Swedish data contained only 6% rejections and the French 21%, while on the line level, the proportions were 16% and 45%, respectively. Text processed with Trafilatura is thus cleaner than the CoNLL data, and its use is motivated even if the CoNLL data has already gone through some cleaning. On the other hand, the Trafilatura cleaning process also discards some parts of the raw text that were evaluated as belonging to the text. For Swedish, 5004 words -approximately 29% of accepted words in the raw text extracted from HTML -were deleted by Trafilatura. Similarly, in French, 1696 accepted words, that is, 21%, were deleted. Thus, obtaining cleaner text in this way also has the downside of not acquiring all the text available. Whether this trade-off is acceptable is likely to depend on the purpose for which the text is processed.", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 81, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 106, "end": 113, "text": "Table 5", "ref_id": "TABREF7" }, { "start": 139, "end": 146, "text": "Table 6", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Text quality based on the manual annotations", "sec_num": "5.1." }, { "text": "The machine learning results are based on altogether 50+50 documents from the CoNLL data: the 20+20 described in Table 4 and an additional set of 30+30 documents we annotated in order to guarantee high system performance. [Table 7: Data statistics. Positives refer to the accepted lines annotated as part of the coherent texts, while negatives are the rejected lines annotated as undesired material.] Table 7 summarizes the key statistics of the training, development, and test division of the data. We set machine learning method parameters in a monolingual setting by optimizing the hyperparameters for French and Swedish separately on the development subsets. For mBERT, we found the optimal hyperparameter settings to be largely in agreement across the two languages: both models use a maximum sequence length of 192 and a batch size of 16, and are trained for 6 epochs. The Swedish model was trained with a learning rate of 2.5e-6 and the French with 5.0e-6. For fastText, we selected word n-grams up to length three and training for 30 epochs, initializing the word vectors randomly, as pre-initialized vectors did not show a clear benefit in evaluation on the development data. The final evaluation results on the test sets are shown in Table 8 . Both fastText and mBERT clearly outperform the majority baseline, and mBERT achieves the best results for both languages, with a more notable advantage for the French data, reaching an accuracy of 85.62% for French and 81.64% for Swedish. Figure 2 shows the precision-recall curves for the two methods. We find that mBERT systematically outperforms fastText across the entire recall range for French, but dips below the precision of fastText for part of the scale for Swedish. We continued to explore whether training the better-performing method, mBERT, on data combining annotations from both languages could further improve performance, evaluating on each language separately. The multilingual model was trained with the above settings, and a learning rate of 2.5e-6 was found to perform best on the development set. [Table 9: Cross- and multilingual classification accuracy with mBERT. Monolingual results are repeated for reference.]
", "cite_spans": [], "ref_spans": [ { "start": 112, "end": 119, "text": "Table 7", "ref_id": null }, { "start": 290, "end": 297, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 399, "end": 406, "text": "Table 7", "ref_id": null }, { "start": 1238, "end": 1245, "text": "Table 8", "ref_id": "TABREF11" }, { "start": 1487, "end": 1495, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 1995, "end": 2002, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Classifiers for further cleaning", "sec_num": "5.2." }, { "text": "Despite the increase in training data size, the multilingual model falls behind its monolingual counterparts by 2-3% points on the two languages (Table 9) . Finally, we assessed how well the monolingually trained classifiers perform in a zero-shot, cross-lingual learning setting, i.e., how well they can predict in a language not seen during fine-tuning. While we observed a 5% point drop for Swedish, the drop was 16% points for French (Table 9). Nevertheless, both models manage to outperform the majority baseline even in this setting. This is encouraging for the multilingual long-term objective of our project, as it shows that machine learning-based text cleaning is possible even without language-specific training data.", "cite_spans": [], "ref_spans": [ { "start": 220, "end": 229, "text": "(Table 9)", "ref_id": null } ], "eq_spans": [], "section": "Classifiers for further cleaning", "sec_num": "5.2." }, { "text": "Finally, we apply the developed classifiers to a large body of unannotated texts to further assess the ratio of clean text in the source data. In the French and Swedish CoNLL data, we randomly sample URLs from which we then extract the texts using Trafilatura. The process is continued until we reach 10,000 lines in each language. We classify these lines using the French and Swedish monolingually tuned mBERT models described above, and observe the class proportions as summarized in Table 10 . Both languages exhibit a similar distribution -about 27-29% of lines are accepted by the models -while in terms of the number of words the ratios are close to the inverse. Somewhat less content is accepted for French than for Swedish, even though the class distribution in the training data was more skewed toward the negative class for Swedish. This supports our earlier finding that the French source data has a lower ratio of clean text than Swedish (Section 5.1.).", "cite_spans": [], "ref_spans": [ { "start": 486, "end": 494, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Large-scale identification of coherent text", "sec_num": "5.3." }, { "text": "[Table 10: Proportion of accepted text in Trafilatura output based on mBERT predictions. Lines: French 26.89%, Swedish 29.48%; Words: French 70.91%, Swedish 71.47%.]", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Table 10", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Large-scale identification of coherent text", "sec_num": "5.3." }, { "text": "The register-annotated datasets include 688 documents in French and 1085 in Swedish. The most frequent registers in these datasets, as well as the frequencies of the additional flags, are shown in Tables 11 and 12 , and the proportions of the registers in the two languages are illustrated in Figure 3 . 
Although the rankings of the registers differ, the sets of the most frequent registers in the two languages are quite similar. In other words, similar registers seem to be the most frequent ones, and many of the registers described in the annotation scheme (Table 3) remain infrequent. Both languages include a large number of texts labeled as Description with intent to sell, News and Personal blog. Differences arise with Machine translation, Personal opinion blog and Encyclopedia article. The high frequency of Machine translation in the annotations certainly reflects its prevalence on the Internet. For the other classes, the differences may reflect true language-specific distributions of registers. These will be further examined in future work with more extensive datasets. Another interesting property in the annotations is that Informational persuasion is the only main register among the most frequent ones in both languages. Its frequency may reflect the linguistic variation displayed within this register and the fact that documents within it are difficult to assign to a specific sub-register category. Additionally, it is noteworthy that hybrid categories are relatively infrequent and do not appear among the most frequent classes. The additional flags show the range of linguistic variation and textual composition displayed by the documents. Many of the flags reflect textual properties that can affect the modeling of the documents. Comments can be particularly frequent in some registers. In the analyzed data, this is the case with the Swedish Opinion blog and Personal blog. Linguistically, comments may be more conversational than the bodies of the texts, which motivates the annotation of the flag. Similarly, foreign language and generated text may occur in a text, for instance in quotations. These are naturally very different from the language otherwise used in the documents. In our data, foreign language seems relatively infrequent, but generated text is flagged quite often. Its proportion can, however, decrease as the text cleaning process improves. Multiple texts and missing text, again, are frequent properties of web documents. For instance, a document from a news site may include many headlines and beginnings of the actual news articles, which are then fully displayed on a page of their own. The structuring of these texts may also show in their linguistic characteristics. In our annotation results, these properties are flagged in both languages with frequencies ranging between 0% and 39%. Similar to comments, the frequency of these flags can correlate with specific register classes. For instance, 25% of the French and 39% of the Swedish annotations in the News report class were flagged as multiple texts, while the frequency of this flag was 0% for the Discussion forum class in both languages. Finally, the flag untypical for the register reflects linguistic variation within register categories, and is used when the document differs from a typical example of its register, so that such documents can be taken into account in later decisions if needed. In the annotations, this flag is marked for approximately 10% of the documents. In particular, the flag is frequent in the Swedish News report class with a proportion of 28%. 
This can be symptomatic of the range of linguistic variation within this register.", "cite_spans": [], "ref_spans": [ { "start": 194, "end": 210, "text": "Tables 11 and 12", "ref_id": "TABREF1" }, { "start": 290, "end": 298, "text": "Figure 3", "ref_id": "FIGREF1" }, { "start": 558, "end": 567, "text": "(Table 3)", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Register annotation results", "sec_num": "5.4." }, { "text": "The register annotation and the different flags are illustrated in Table 13 . The example text is annotated as belonging to the Review register. The text is taken from the middle of the original document, which is a customer review in an online book store. The actual text is preceded and followed by automatically generated text that is frequent in these kinds of web documents: 'Add to cart' and 'More books on'. The text includes two separate reviews. The first one is present in its entirety, but the second review ends with '...' and continues on another page. These properties are described in the annotation by the additional flags.", "cite_spans": [], "ref_spans": [ { "start": 67, "end": 75, "text": "Table 13", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Register annotation results", "sec_num": "5.4." }, { "text": "In this study, we have explored the challenges in deriving clean, register-annotated texts from the web. Our starting points were the Swedish and French Common Crawl datasets gathered for the 2017 CoNLL shared task (Ginter et al., 2017) , and our approach consisted of three steps: the evaluation of the text quality in order to assess the benefit of boilerplate removal, the development of a classifier to further clean the texts, and the annotation of registers. First, we manually evaluated three versions of the data that had gone through different cleaning processes: the CoNLL versions, raw text versions derived from HTML by stripping markup, and cleaned versions extracted from HTML using the boilerplate removal system Trafilatura. The evaluation of the text quality showed that the use of boilerplate removal clearly improves the text quality, although the process also incorrectly rejects some parts belonging to the main text body. In our project, the trade-off -losing a small proportion of coherent text while improving overall", "cite_spans": [ { "start": 215, "end": 236, "text": "(Ginter et al., 2017)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6." }, { "text": "[Table 13: Swedish text example with English translations on the right. Register: Review; Additional flags: Generated text, Part of the text is missing. Rows (Swedish / English translation): L\u00e4gg i varukorg / 'Add to cart'; jag tyckte boken var fin med vackra bilder, v\u00e4ntade mig dock mer lantlig k\u00e4nsla, vet ej varf\u00f6r fick bara det intrycket med titeln men alla hem var moderna med stads k\u00e4nsla, inredda med vintage och antikviteter / 'i thought the book was nice with beautiful pictures, however, I expected a more rustic feeling, don't know why just got the impression from the title but all the homes were modern with a city-like feeling, decorated with vintage and antiquities'; Vartenda uppslag \u00e4r fantastiskt! En ren njutning som... / 'Every page is fantastic! Pure pleasure that...'; Fler b\u00f6cker inom / 'More books on'.]", "cite_spans": [], "ref_spans": [ { "start": 658, "end": 666, "text": "Table 13", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6."
}, { "text": "quality -is acceptable, as it does not reduce the size of the data substantially.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6." }, { "text": "To facilitate further cleanup of the resulting texts, as a second step, we trained classifiers for distinguishing coherent text content from other, undesirable material. Monolingually fine-tuned Multilingual BERT models achieved the best results for both French and Swedish. Additionally, we tested multi-and cross-lingual settings to investigate to what extent the cleaning could be realized with a joint model or in a language not seen during training. Combining the languages during training in the multilingual setup performed well, but did not outperform the monolingual classifiers. The cross-lingual, zero-shot setting did perform above baseline, which indicates that further cleaning of the texts can be done (to some extent) in multilingual settings without the time-expensive annotation of data in each of the languages under study. This is very encouraging for our project. Finally, we examined the register annotations and the proportions of different registers in the two languages. This analysis showed that most of the documents belong to a relatively small set of the most frequent registers, although the annotation scheme does cover a wide range of registers and their combinations. Additionally, the sets of the most frequent registers are relatively similar in the two languages. This finding is also very encouraging for our future plans. Specifically, we intend to extend to a larger set of languages already covered in the CoNLL data. We will also experiment with the possibility of combining the line-wise estimates of text quality at the document level. Finally, we will continue the register annotations with the objective of being able to automatically attach detailed register information to all the data. We release the materials and methods introduced in this study under open licenses at https://github.com/ TurkuNLP/WAC-XII.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions and future work", "sec_num": "6." }, { "text": "https://github.com/adbar/trafilatura", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://pypi.org/project/jusText/ 4 https://trafilatura.readthedocs.io/en/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://trafilatura.readthedocs.io/en/ latest/evaluation.html", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The work was funded the Foundation of Emil Aaltonen. Computational resources for this work were provided by CSC -Finnish IT Center for Science.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Crowdsourcing for web genre annotation. Language Resources and Evaluation", "authors": [ { "first": "N", "middle": [ "R" ], "last": "Asheghi", "suffix": "" }, { "first": "S", "middle": [], "last": "Sharoff", "suffix": "" }, { "first": "K", "middle": [], "last": "Markert", "suffix": "" } ], "year": 2016, "venue": "", "volume": "50", "issue": "", "pages": "603--641", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asheghi, N. R., Sharoff, S., and Markert, K. (2016). Crowdsourcing for web genre annotation. 
Language Resources and Evaluation, 50(3):603-641.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "The WaCky wide web: a collection of very large linguistically processed web-crawled corpora", "authors": [ { "first": "M", "middle": [], "last": "Baroni", "suffix": "" }, { "first": "S", "middle": [], "last": "Bernardini", "suffix": "" }, { "first": "A", "middle": [], "last": "Ferraresi", "suffix": "" }, { "first": "E", "middle": [], "last": "Zanchetta", "suffix": "" } ], "year": 2009, "venue": "Language Resources and Evaluation", "volume": "43", "issue": "3", "pages": "209--226", "other_ids": {}, "num": null, "urls": [], "raw_text": "Baroni, M., Bernardini, S., Ferraresi, A., and Zanchetta, E. (2009). The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226, September.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Register variation online", "authors": [ { "first": "D", "middle": [], "last": "Biber", "suffix": "" }, { "first": "J", "middle": [], "last": "Egbert", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biber, D. and Egbert, J. (2018). Register variation online. Cambridge University Press, Cambridge.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Corpus Linguistics: Investigating Language Structure and Use", "authors": [ { "first": "D", "middle": [], "last": "Biber", "suffix": "" }, { "first": "S", "middle": [], "last": "Conrad", "suffix": "" }, { "first": "R", "middle": [], "last": "Reppen", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biber, D., Conrad, S., and Reppen, R. (1998). Corpus Linguistics: Investigating Language Structure and Use. Cambridge University Press, Cambridge.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Variation across speech and writing", "authors": [ { "first": "D", "middle": [], "last": "Biber", "suffix": "" } ], "year": 1988, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Biber, D. (1988). Variation across speech and writing. Cambridge University Press, Cambridge.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Enriching word vectors with subword information", "authors": [ { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. (2017). Enriching word vectors with subword information. 
Transactions of the Association for Computational Linguistics, 5:135-146, Dec.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "J", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "M.-W", "middle": [], "last": "Chang", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1810.04805" ] }, "num": null, "urls": [], "raw_text": "Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Developing a bottom-up, user-based method of web register classification", "authors": [ { "first": "J", "middle": [], "last": "Egbert", "suffix": "" }, { "first": "D", "middle": [], "last": "Biber", "suffix": "" }, { "first": "M", "middle": [], "last": "Davies", "suffix": "" } ], "year": 2015, "venue": "Journal of the Association for Information Science and Technology", "volume": "66", "issue": "9", "pages": "1817--1831", "other_ids": {}, "num": null, "urls": [], "raw_text": "Egbert, J., Biber, D., and Davies, M. (2015). Developing a bottom-up, user-based method of web register classification. Journal of the Association for Information Science and Technology, 66(9):1817-1831.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. LINDAT/CLARIN digital library at \u00daFAL", "authors": [ { "first": "F", "middle": [], "last": "Ginter", "suffix": "" }, { "first": "J", "middle": [], "last": "Haji\u010d", "suffix": "" }, { "first": "J", "middle": [], "last": "Luotolahti", "suffix": "" }, { "first": "M", "middle": [], "last": "Straka", "suffix": "" }, { "first": "D", "middle": [], "last": "Zeman", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ginter, F., Haji\u010d, J., Luotolahti, J., Straka, M., and Zeman, D. (2017). CoNLL 2017 shared task -automatically annotated raw texts and word embeddings. LINDAT/CLARIN digital library at \u00daFAL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Learning word vectors for 157 languages", "authors": [ { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "P", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T. (2018). Learning word vectors for 157 languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. 
European Language Resources Association (ELRA).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "English language series", "authors": [ { "first": "M", "middle": [ "A K" ], "last": "Halliday", "suffix": "" } ], "year": 1976, "venue": "", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Halliday, M. A. K. (1976). Cohesion in English. English language series; 9. Longman, London.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Bag of tricks for efficient text classification", "authors": [ { "first": "A", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "E", "middle": [], "last": "Grave", "suffix": "" }, { "first": "P", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joulin, A., Grave, E., Bojanowski, P., and Mikolov, T. (2016). Bag of tricks for efficient text classification. CoRR, abs/1607.01759.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Collection strategies and design decisions", "authors": [ { "first": "M", "middle": [], "last": "Kyt\u00f6", "suffix": "" }, { "first": "A", "middle": [], "last": "Ludeling", "suffix": "" } ], "year": 2008, "venue": "Corpus Linguistics: An International Handbook", "volume": "9", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kyt\u00f6, M. and Ludeling, A. (2008). Collection strategies and design decisions. In Corpus Linguistics: An International Handbook, chapter 9.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Toward multilingual identification of online registers", "authors": [ { "first": "V", "middle": [], "last": "Laippala", "suffix": "" }, { "first": "R", "middle": [], "last": "Kyll\u00f6nen", "suffix": "" }, { "first": "J", "middle": [], "last": "Egbert", "suffix": "" }, { "first": "D", "middle": [], "last": "Biber", "suffix": "" }, { "first": "S", "middle": [], "last": "Pyysalo", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 22nd Nordic Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "292--297", "other_ids": {}, "num": null, "urls": [], "raw_text": "Laippala, V., Kyll\u00f6nen, R., Egbert, J., Biber, D., and Pyysalo, S. (2019). Toward multilingual identification of online registers. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 292-297, Turku, Finland, September-October. Link\u00f6ping University Electronic Press.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Corpus Linguistics", "authors": [ { "first": "T", "middle": [], "last": "Mcenery", "suffix": "" }, { "first": "A", "middle": [], "last": "Wilson", "suffix": "" } ], "year": 1996, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "McEnery, T. and Wilson, A. (1996). Corpus Linguistics. 
Edinburgh University Press, Edinburgh.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Efficient estimation of word representations in vector space", "authors": [ { "first": "T", "middle": [], "last": "Mikolov", "suffix": "" }, { "first": "K", "middle": [], "last": "Chen", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1301.3781" ] }, "num": null, "urls": [], "raw_text": "Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures", "authors": [], "year": null, "venue": "7th Workshop on the Challenges in the Management of Large Corpora", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Asynchronous Pipeline for Processing Huge Corpora on Medium to Low Resource Infrastructures. In Piotr Ba\u0144ski, et al., editors, 7th Workshop on the Challenges in the Management of Large Corpora (CMLC-", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Leibniz-Institut f\u00fcr Deutsche Sprache", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": ", Cardiff, United Kingdom, July. Leibniz-Institut f\u00fcr Deutsche Sprache.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The good, the bad, and the hazy: Design decisions in web corpus construction", "authors": [ { "first": "R", "middle": [], "last": "Sch\u00e4fer", "suffix": "" }, { "first": "A", "middle": [], "last": "Barbaresi", "suffix": "" }, { "first": "F", "middle": [], "last": "Bildhauer", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the 8th Web as Corpus Workshop (WAC-8)", "volume": "", "issue": "", "pages": "7--15", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00e4fer, R., Barbaresi, A., and Bildhauer, F. (2013). The good, the bad, and the hazy: Design decisions in web corpus construction. In Stefan Evert, et al., editors, Proceedings of the 8th Web as Corpus Workshop (WAC-8), pages 7-15, Lancaster. SIGWAC.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Accurate and efficient general-purpose boilerplate detection for crawled web corpora. Language Resources and Evaluation", "authors": [ { "first": "R", "middle": [], "last": "Sch\u00e4fer", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00e4fer, R. (2016a). Accurate and efficient general-purpose boilerplate detection for crawled web corpora. Language Resources and Evaluation. Online first.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "CommonCOW: Massively huge web corpora from CommonCrawl data and a method to distribute them freely under restrictive EU copyright laws", "authors": [ { "first": "R", "middle": [], "last": "Sch\u00e4fer", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016)", "volume": "", "issue": "", "pages": "4500--4504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00e4fer, R. (2016b). 
Commoncow: Massively huge web corpora from CommonCrawl data and a method to distribute them freely under restrictive EU copyright laws. In Nicoletta Calzolari (Conference Chair), et al., editors, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 4500-4504, Portoro\u017e, Slovenia. European Language Resources Association (ELRA).", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Proceedings of the 10th Web as Corpus Workshop, chapter On Bias-free Crawling and Representative Web Corpora", "authors": [ { "first": "R", "middle": [], "last": "Sch\u00e4fer", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "99--105", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sch\u00e4fer, R. (2016c). Proceedings of the 10th Web as Corpus Workshop, chapter On Bias-free Crawling and Representative Web Corpora, pages 99-105. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Functional text dimensions for the annotation of web corpora", "authors": [ { "first": "S", "middle": [], "last": "Sharoff", "suffix": "" } ], "year": 2018, "venue": "Corpora", "volume": "13", "issue": "1", "pages": "65--95", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sharoff, S. (2018). Functional text dimensions for the annotation of web corpora. Corpora, 13(1):65-95.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Attention is all you need", "authors": [ { "first": "A", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "N", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "J", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "L", "middle": [], "last": "Jones", "suffix": "" }, { "first": "A", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "I", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "XLNet: Generalized autoregressive pretraining for language understanding", "authors": [ { "first": "Z", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Z", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Y", "middle": [], "last": "Yang", "suffix": "" }, { "first": "J", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "R", "middle": [], "last": "Salakhutdinov", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Precision-recall curves for the two machine learning methods.
(x-axis: recall, y-axis: precision)", "num": null, "uris": null, "type_str": "figure" }, "FIGREF1": { "text": "Proportions of registers in the two languages.", "num": null, "uris": null, "type_str": "figure" }, "TABREF1": { "text": "", "num": null, "content": "
Sizes of the deduplicated CoNLL 2017 Common Crawl-based datasets for French and Swedish.
In its documentation, 5 Trafilatura achieves an accuracy of 91% and outperforms a number of similar tools, including jusText.
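This extraction step is straightforward to reproduce. Below is a minimal sketch of Trafilatura-based extraction, assuming library defaults throughout; the URL is a placeholder, and the exact options used for the corpus are not specified here.

```python
# Minimal sketch: boilerplate removal with Trafilatura (defaults assumed).
import trafilatura

url = "https://example.com/page"    # placeholder URL, not from the corpus
html = trafilatura.fetch_url(url)   # returns the raw HTML, or None on failure
if html is not None:
    # extract() drops boilerplate such as menus, link lists and footers
    # and returns the main text, or None if no usable content remains
    text = trafilatura.extract(html)
    if text:
        print(text)
```

At crawl scale, the same two calls would simply be applied to each URL in the source URL list.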
", "html": null, "type_str": "table" }, "TABREF2": { "text": "Florida och New York -Sveriges Kungahus 'Princess Madeleine visited Childhood projects in Florida and New York -The Swedish Royal Court' 0L\u00e4nk till sidan Anpassa webbplatsen 'Link to the site Customize the Web Site'", "num": null, "content": "
Label Text
1 Prinsessan Madeleine bes\u00f6kte Childhood-projekt i 0 L\u00e4nk till Startsidan 'Link to the home page'
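The labeled lines above are the kind of training data used for the further cleaning step. As a rough sketch, a line-level filter in the spirit of the fastText baseline (Joulin et al., 2016) could be trained as follows; the file name, hyperparameter values and example lines are illustrative assumptions rather than the exact setup.

```python
# Sketch: line-level text-quality filter with fastText.
# train.txt is a hypothetical file with one labeled line per row, e.g.:
#   __label__1 Prinsessan Madeleine besökte Childhood-projekt ...
#   __label__0 Länk till Startsidan
import fasttext

model = fasttext.train_supervised(
    input="train.txt",
    epoch=10,        # illustrative hyperparameters
    wordNgrams=2,
)

# predict() returns the most probable label and its probability.
labels, probs = model.predict("Länk till sidan Anpassa webbplatsen")
print(labels[0], float(probs[0]))  # expected: __label__0 with high probability
```

Thresholding the predicted probability, rather than always taking the most probable label, yields the trade-off plotted in the precision-recall figure above.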
", "html": null, "type_str": "table" }, "TABREF4": { "text": "Register classes in the taxonomy. Main register classes are shown in bold.", "num": null, "content": "", "html": null, "type_str": "table" }, "TABREF6": { "text": "Text quality for CoNLL source text.", "num": null, "content": "
French | Accept | Reject
Words | 8097 (36%) | 14662 (64%)
Lines | 408 (9%) | 4227 (91%)
Swedish | Accept | Reject
Words | 17228 (39%) | 27324 (61%)
Lines | 568 (11%) | 4809 (89%)
", "html": null, "type_str": "table" }, "TABREF7": { "text": "Text quality for raw text.", "num": null, "content": "
French | Accept | Reject
Words | 6401 (79%) | 1713 (21%)
Lines | 306 (55%) | 255 (45%)
Swedish | Accept | Reject
Words | 12224 (94%) | 794 (6%)
Lines | 403 (84%) | 77 (16%)
", "html": null, "type_str": "table" }, "TABREF8": { "text": "", "num": null, "content": "", "html": null, "type_str": "table" }, "TABREF11": { "text": "Monolingual classification accuracy.", "num": null, "content": "
Train \ Test | French | Swedish
French | 81.64 | 76.61
Swedish | 69.72 | 85.62
Fr + Sv | 79.34 | 82.28
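The cross-lingual transfer summarized in this table rests on fine-tuning Multilingual BERT on labeled lines in one language and testing on the other. Below is a minimal single-batch fine-tuning sketch with the Hugging Face transformers library; the checkpoint name is the standard bert-base-multilingual-cased, while the example lines, label coding (1 = clean, 0 = undesired) and learning rate are illustrative assumptions.

```python
# Sketch: one fine-tuning step of Multilingual BERT for binary line filtering.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2
)

texts = [
    "Prinsessan Madeleine besökte Childhood-projekt i Florida och New York",
    "Länk till Startsidan",
]                                # hypothetical training lines
labels = torch.tensor([1, 0])    # 1 = clean text, 0 = undesired material

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # cross-entropy over the two classes
loss.backward()
optimizer.step()
```

Evaluating a model fine-tuned on, say, French lines against held-out Swedish lines corresponds to the off-diagonal cells of the table; concatenating both training sets gives the Fr + Sv row.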
", "html": null, "type_str": "table" }, "TABREF12": { "text": "Indicating this helps to further analyze the annotation", "num": null, "content": "
Register | Number of documents | Comments | Missing text | Foreign language | Generated text | Untypical for register | Multiple texts
Description with intent to sell | 136 | 0% | 10% | 2% | 13% | 3% | 10%
News report / news blog | 75 | 1% | 28% | 0% | 7% | 7% | 25%
Encyclopedia article | 45 | 0% | 18% | 0% | 22% | 7% | 2%
Description of a thing | 45 | 0% | 16% | 2% | 20% | 0% | 2%
Personal blog | 33 | 3% | 15% | 3% | 6% | 9% | 12%
Discussion forum | 33 | 0% | 0% | 0% | 33% | 12% | 0%
Reviews | 32 | 3% | 16% | 0% | 28% | 9% | 22%
How-to / instruction | 25 | 0% | 0% | 0% | 24% | 12% | 4%
Informational persuasion | 25 | 0% | 4% | 0% | 28% | 0% | 8%
", "html": null, "type_str": "table" }, "TABREF13": { "text": "Annotation statistics for the French data", "num": null, "content": "
Register | Number of documents | Comments | Missing text | Foreign language | Generated text | Untypical for register | Multiple texts
Encyclopedia article | 223 | 0% | 18% | 0% | 83% | 6% | 0.5%
Personal blog | 157 | 32% | 8% | 6% | 31% | 2% | 9%
Description with intent to sell | 136 | 4% | 6% | 0% | 30% | 6% | 12%
News report / news blog | 109 | 3% | 28% | 0% | 17% | 28% | 39%
Opinion blog | 45 | 24% | 13% | 2% | 27% | 4% | 11%
MT / generated text | 37 | 8% | 3% | 8% | 11% | 16% | 22%
Description of a thing | 27 | 0% | 15% | 0% | 26% | 0% | 15%
Discussion forum | 20 | 5% | 0% | 5% | 35% | 15% | 0%
Informational persuasion | 19 | 0% | 11% | 0% | 32% | 0% | 16%
", "html": null, "type_str": "table" }, "TABREF14": { "text": "Annotation statistics for the Swedish data", "num": null, "content": "", "html": null, "type_str": "table" } } } }