|
{ |
|
"paper_id": "2020", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T04:33:24.479500Z" |
|
}, |
|
"title": "Out-of-the-Box and Into the Ditch? Multilingual Evaluation of Generic Text Extraction Tools", |
|
"authors": [ |
|
{ |
|
"first": "Adrien", |
|
"middle": [], |
|
"last": "Barbaresi", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Brandenburg Academy of Sciences", |
|
"location": {} |
|
}, |
|
"email": "[email protected]" |
|
}, |
|
{ |
|
"first": "Ga\u00ebl", |
|
"middle": [], |
|
"last": "Lejeune", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Sorbonne University J\u00e4gerstra\u00dfe", |
|
"location": { |
|
"addrLine": "1 rue Victor Cousin", |
|
"postCode": "22-23 10117, 75005", |
|
"settlement": "Berlin, Paris", |
|
"country": "Germany), France" |
|
} |
|
}, |
|
"email": "[email protected]" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This article examines extraction methods designed to retain the main text content of web pages and discusses how the extraction could be oriented and evaluated: can and should it be as generic as possible to ensure opportunistic corpus construction? The evaluation grounds on a comparative benchmark of open-source tools used on pages in five different languages (Chinese, English, Greek, Polish and Russian), it features several metrics to obtain more fine-grained differentiations. Our experiments highlight the diversity of web page layouts across languages or publishing countries. These discrepancies are reflected by diverging performances so that the right tool has to be chosen accordingly.", |
|
"pdf_parse": { |
|
"paper_id": "2020", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This article examines extraction methods designed to retain the main text content of web pages and discusses how the extraction could be oriented and evaluated: can and should it be as generic as possible to ensure opportunistic corpus construction? The evaluation grounds on a comparative benchmark of open-source tools used on pages in five different languages (Chinese, English, Greek, Polish and Russian), it features several metrics to obtain more fine-grained differentiations. Our experiments highlight the diversity of web page layouts across languages or publishing countries. These discrepancies are reflected by diverging performances so that the right tool has to be chosen accordingly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "1. Introduction", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Large \"offline\" web corpora are now standard throughout disciplines among the research community. Corpus construction notably involves \"crawling, downloading, 'cleaning' and de-duplicating the data, then linguistically annotating it and loading it into a corpus query tool.\" (Kilgarriff, 2007) Although text is ubiquitous on the Web, extracting information from web pages can prove to be difficult. They come in different shapes and sizes mostly because of the wide variety of platforms and content management systems, and not least depending on the context, for instance diverging goals followed during publication. This process involves a significant number of design decisions and turning points in data processing. Depending on the purpose of data collection, a substantial filtering and quality assessment can be crucial. Recently, approaches using the CommonCrawl 1 have flourished as they allow for faster download and processing by skipping (or more precisely outsourcing) the crawling phase (Habernal et al., 2016; Sch\u00e4fer, 2016) . Barring the fact that finding one's \"own\" way through the Web can be preferable, it is clear that such data should not be used without some filtering. Beside the discovery of relevant websites, a major issue consist in selecting appropriate content after download and processing (Sch\u00e4fer et al., 2013) , which may not be straightforward due to unexpected or machinegenerated flaws and biases. Some large-scale algorithms can be expected to smooth out irregularities. However, uses requiring a low margin of error and close reading approaches imply constant refinements in the constitution and processing of the dataset, for example in the context of an aggregated lexical information platform (Geyken et al., 2017) . The potential lack of metadata is worsened by a lack of information regarding the content whose adequacy, focus and quality are the object of a post hoc evaluation (Baroni et al., 2009) . 
A major challenge lies in the ability to extract and 1 https://commoncrawl.org pre-process web data to meet scientific expectations with respect to corpus quality (Barbaresi, 2015) . Because of the vastly increasing variety of corpora, text types and use cases, it becomes more and more difficult to assess the usefulness and appropriateness of the gathered web texts for given research objectives. Potential answers can reside in methods such as focused web crawling for corpus construction (Sch\u00e4fer et al., 2014) and in a degree of focus concerning the selection of sources (Barbaresi, 2016; Barbaresi, 2019) . Regardless of the chosen construction method, an essential operation consists in retaining the desired content while discarding the rest, a polyonymous goal referring to peculiar subtasks or to the whole, most notably web scraping, boilerplate removal, web page segmentation, web page cleaning, or content extraction (Lejeune and Zhu, 2018) . The variety of contexts and text genres leads to important design decisions during the collection of texts: could and should the tooling be adapted to particular sources that are targeted (which often amounts to the development of web scraping tools e.g. for news outlets) or should the extraction be as generic as possible to provide opportunistic ways of gathering information? Due to a lack of time resources in academia and elsewhere, the tools are considered as fieldtested without a thorough evaluation in vitro. This article hopefully makes a step towards the latter.", |
|
"cite_spans": [ |
|
{ |
|
"start": 275, |
|
"end": 293, |
|
"text": "(Kilgarriff, 2007)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 1000, |
|
"end": 1023, |
|
"text": "(Habernal et al., 2016;", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1024, |
|
"end": 1038, |
|
"text": "Sch\u00e4fer, 2016)", |
|
"ref_id": "BIBREF34" |
|
}, |
|
{ |
|
"start": 1320, |
|
"end": 1342, |
|
"text": "(Sch\u00e4fer et al., 2013)", |
|
"ref_id": "BIBREF32" |
|
}, |
|
{ |
|
"start": 1734, |
|
"end": 1755, |
|
"text": "(Geyken et al., 2017)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 1922, |
|
"end": 1943, |
|
"text": "(Baroni et al., 2009)", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 2109, |
|
"end": 2126, |
|
"text": "(Barbaresi, 2015)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 2438, |
|
"end": 2460, |
|
"text": "(Sch\u00e4fer et al., 2014)", |
|
"ref_id": "BIBREF33" |
|
}, |
|
{ |
|
"start": 2522, |
|
"end": 2539, |
|
"text": "(Barbaresi, 2016;", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 2540, |
|
"end": 2556, |
|
"text": "Barbaresi, 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 2876, |
|
"end": 2899, |
|
"text": "(Lejeune and Zhu, 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Web corpus construction", |
|
"sec_num": "1.1." |
|
}, |
|
{ |
|
"text": "As the use of templates is pervasive on the Web ( Bar-Yossef and Rajagopalan, 2002) , common approaches to main content detection include heuristic rules, machine learning on labeled training data, and indirectly template-based approaches (for example by identifying duplicated content) (Rae et al., 2018) . Although text-based (Kohlsch\u00fctter and Nejdl, 2008) and visual segmentation algorithms (Cai et al., 2003) have been published on, content extraction mostly draws on Document Object Model (DOM) examination (Gupta et al., 2003) . That means considering a given HTML document as a tree structure whose nodes represent parts of the document to be operated on.", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 83, |
|
"text": "Bar-Yossef and Rajagopalan, 2002)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 287, |
|
"end": 305, |
|
"text": "(Rae et al., 2018)", |
|
"ref_id": "BIBREF30" |
|
}, |
|
{ |
|
"start": 328, |
|
"end": 358, |
|
"text": "(Kohlsch\u00fctter and Nejdl, 2008)", |
|
"ref_id": "BIBREF21" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 412, |
|
"text": "(Cai et al., 2003)", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 512, |
|
"end": 532, |
|
"text": "(Gupta et al., 2003)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State of the art of content extraction", |
|
"sec_num": "1.2." |
|
}, |
|
{ |
|
"text": "Text, tag and/or link density have proven to be good heuristics in order to select or discard content nodes, with approaches such as the Content Extraction via Tag Ratios (CETR) (Weninger et al., 2010) or the Content Extraction via Text Density (CETD) algorithms (Sun et al., 2011) . Statistical selection of informative nodes through a combination of both methods proved more efficient on comparable datasets (Qureshi and Memon, 2012) . Indeed, the large majority of DOM-based approaches try to leverage semantic information conveyed by HTML tags, notably paragraphs (p) on which text-to-tag ratios are calculated (Carey and Manic, 2016) . An earlier, language-independent approach uses entropy measures applied to feature, links, and content in order to discriminate among parts of a webpage (Kao et al., 2004) . Machine learning approaches have also been used, whose interest generally consists in leveraging advances in classification tasks by treating a HTML document as a series of blocks to be classified. Relevant algorithms notably include conditional random fields (CRF) learning header, text or noisy blocks using markup-based, content-based, and document-related features (Spousta et al., 2008) , support vector machines (SVMs) trained on linguistic, structural and visual features (Bauer et al., 2007) , or more recently deep learning, for example with convolutional neural networks (CNNs) learning combinations of DOM-based features (Vogels et al., 2018) . Regarding the evaluation of extraction methods, the Cleaneval dataset and metrics (Baroni et al., 2008) have been used as a reference by numerous studies. Granularity and metrics used can have a real impact on results. Character and word-level metrics can be considered as a sequence, in a bag of words approach, or as a set and then ranked by F-score (Gottron, 2007) . 
Web text extraction is not a solved task, user experience in general turns web content extraction into an active field of research, resulting from higher download and rendering speeds overall as well as from a growing tendency to inject content from a wide variety of sources, notably through the development of \"reader modes\" and \"distillers\" 2 for web browsers which strive to reduce the amount of \"Web bloat\" (Ghasemisharif et al., 2019) . Furthermore, many existing algorithms have become somewhat obsolete due to the rapid changes in web technologies over the last 15 years (Weninger et al., 2016) . Web page structure is also constantly evolving from the perspective of standards. HTML 5 was first released in 2008 to provide support for multimedia and graphical elements. This standard also streamlined syntax while retaining backward-compatibility. It also provided ways to tag the semantic content of documents with a granularity unseen before, with new page structure elements such as main, section, article, header, footer, aside, or nav. The standard has been gradually integrated into publishing practices and content management systems, while the recommendations still evolve, the current standard being HTML 5.2. 3 In addition, publication systems combining HTML code with embedded JavaScript are on the rise, which also raises the question of \"dry\" and rendered page code. Last, there is a disciplinary gap between computer scientists and corpus linguists, both at the time of and following the \"web as corpus\" paradigm. As well as other research traditions sharing the Web as a research object without communicating much (Br\u00fcgger and Laursen, 2019) , both communities do not seem to be interconnected, although they could benefit from each other's results. We believe content extraction does not get the amount of attention it deserves in the corpus linguistics community. 
Additionally, precise metadata extraction is paramount in the humanities and remains a collateral issue of this disciplinary gap.", |
|
"cite_spans": [ |
|
{ |
|
"start": 178, |
|
"end": 201, |
|
"text": "(Weninger et al., 2010)", |
|
"ref_id": "BIBREF38" |
|
}, |
|
{ |
|
"start": 263, |
|
"end": 281, |
|
"text": "(Sun et al., 2011)", |
|
"ref_id": "BIBREF36" |
|
}, |
|
{ |
|
"start": 410, |
|
"end": 435, |
|
"text": "(Qureshi and Memon, 2012)", |
|
"ref_id": "BIBREF29" |
|
}, |
|
{ |
|
"start": 615, |
|
"end": 638, |
|
"text": "(Carey and Manic, 2016)", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 794, |
|
"end": 812, |
|
"text": "(Kao et al., 2004)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1184, |
|
"end": 1206, |
|
"text": "(Spousta et al., 2008)", |
|
"ref_id": "BIBREF35" |
|
}, |
|
{ |
|
"start": 1294, |
|
"end": 1314, |
|
"text": "(Bauer et al., 2007)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 1447, |
|
"end": 1468, |
|
"text": "(Vogels et al., 2018)", |
|
"ref_id": "BIBREF37" |
|
}, |
|
{ |
|
"start": 1553, |
|
"end": 1574, |
|
"text": "(Baroni et al., 2008)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 1823, |
|
"end": 1838, |
|
"text": "(Gottron, 2007)", |
|
"ref_id": "BIBREF13" |
|
}, |
|
{ |
|
"start": 2253, |
|
"end": 2281, |
|
"text": "(Ghasemisharif et al., 2019)", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 2420, |
|
"end": 2443, |
|
"text": "(Weninger et al., 2016)", |
|
"ref_id": "BIBREF39" |
|
}, |
|
{ |
|
"start": 3069, |
|
"end": 3070, |
|
"text": "3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 3479, |
|
"end": 3506, |
|
"text": "(Br\u00fcgger and Laursen, 2019)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "State of the art of content extraction", |
|
"sec_num": "1.2." |
|
}, |
|
{ |
|
"text": "Distinguishing between whole page and essential parts can help to alleviate many quality problems related to web texts. While this is particularly useful in the case of deduplication and studies relying on frequency-based information, other tasks related to content extraction also benefit from a cleaner text base. In the concrete case of linguistic and lexicographic research, it allows for content checks on the only portion of the document that really counts. In the following, we describe and evaluate text extraction tools published under open-source licenses and whose installation is straightforward. We perform a comparative benchmark on a multilingual setting consisting of realworld data with a manually annotated gold standard. We discuss the results as well as potentially suitable metrics to obtain more fine-grained differentiation. The insights of this paper are thus threefold in terms of software usability, benchmarking, and metrics.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Contributions", |
|
"sec_num": "1.3." |
|
}, |
|
{ |
|
"text": "The evaluation described here focuses on integration and real-world usability of the tested solutions. As in previous evaluation campaigns we target the main content, which is usually the part displayed centrally, without the left or right bars, the header or the footer, but including potential titles and comments. We gathered tools coming from different research and industrial backgrounds, different countries, and developed during different time frames.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation method", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The current benchmark focuses on the Python programming language which is reportedly the most popular programming language in academia 4 and one of the most popular overall. A few algorithms below are adapted from other languages such as Java and JavaScript, which contributes to giving an exhaustive yet incomplete panorama of available solutions overall. The following tools keep the structure intact but don't focus on main text extraction, they are kept in the benchmark to see how they perform in terms of recall, that is in order to measure how easy it would be to simply gather all the extractable text:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 HTML2TEXT 5 performs text extraction", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 INSCRIPTIS 6 converts HTML to text with a particular emphasis on nested tables.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "The following tools focus on main text extraction which is the task at hand:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 BOILERPY3 7 is a Python version of the boilerpipe algorithm (Kohlsch\u00fctter et al., 2010) for boilerplate removal and fulltext extraction;", |
|
"cite_spans": [ |
|
{ |
|
"start": 62, |
|
"end": 89, |
|
"text": "(Kohlsch\u00fctter et al., 2010)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 DRAGNET 8 works as a meta-classifier using different methods weighted by machine learning (Peters and Lecocq, 2013) , it requires more dependencies and potentially fine-tuning or re-training to work at its best;", |
|
"cite_spans": [ |
|
{ |
|
"start": 92, |
|
"end": 117, |
|
"text": "(Peters and Lecocq, 2013)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 GOOSE3 9 can extract information for embedded content but doesn't preserve markup;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 JUSTEXT 10 is designed to preserve mainly text containing full sentences along with some markup, it has been explicitly developed to create linguistic resources (Pomik\u00e1lek, 2011) ;", |
|
"cite_spans": [ |
|
{ |
|
"start": 163, |
|
"end": 180, |
|
"text": "(Pomik\u00e1lek, 2011)", |
|
"ref_id": "BIBREF28" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 NEWSPAPER 11 is mostly geared towards newspaper texts, provides additional functions but no structured text or comment extraction", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 NEWS-PLEASE 12 is a news crawler that extracts structured information (Hamborg et al., 2017) ;", |
|
"cite_spans": [ |
|
{ |
|
"start": 72, |
|
"end": 94, |
|
"text": "(Hamborg et al., 2017)", |
|
"ref_id": "BIBREF18" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 PYTHON-READABILITY 13 is a Python port of the Readability library used in Firefox to display distraction-free webpages, it cleans the page and preserves some markup.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "The systems are used out-of-the-box or with minimal finetuning. Some of them come from an academic and others from an engineering or commercial background. Some are not being actively developed while others are still being updated. There is no reason to believe some would be disadvantaged as the pages they are tested on are anterior to their development. We use different pre-tuned configurations (here after mode) for the tools that offer this possibility: BOILERPY3 and JUSTEXT. All the code developed for this evaluations is available online. 14 In the results section we will use the following names for the tools:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 BP3 for BOILERPY3 (default configuration) BP3 Art for the Article mode, BP3 KeepE for the KeepEverything mode and BP3 Larg for the Largest mode;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "5 https://github.com/Alir3z4/html2text/ 6 https://github.com/weblyzard/inscriptis 7 https://github.com/jmriebold/BoilerPy3 8 https://github.com/dragnet-org/dragnet 9 https://github.com/goose3/goose3 10 https://github.com/miso-belica/jusText 11 https://github.com/codelucas/newspaper 12 https://github.com/fhamborg/news-please 13 https://github.com/buriy/python-readability 14 https://github.com/rundimeco/waddle \u2022 GOOSE for GOOSE3;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 JT for JUSTEXT (default configuration), JT en for the English mode and JT langid for the language dependent mode;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 NPAPER for NEWSPAPER;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 NPLEASE for NEWS-PLEASE;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "\u2022 READ for Python-Readability.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Tested solutions", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "For our experiments we take advantage of the multilingual, human-annotated corpus DAnIEL, used previously for segmentation and event detection tasks (Lejeune et al., 2012) and extraction (Lejeune and Zhu, 2018) . It comprises 1694 documents in five languages: Chinese, English, Greek, Polish and Russian. Each document is present as in its original HTML version and as a cleaned version with the text and some markup. To the best of our knowledge it is the largest multilingual corpus for evaluating web content extraction tools.", |
|
"cite_spans": [ |
|
{ |
|
"start": 149, |
|
"end": 171, |
|
"text": "(Lejeune et al., 2012)", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 187, |
|
"end": 210, |
|
"text": "(Lejeune and Zhu, 2018)", |
|
"ref_id": "BIBREF24" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "The documents have been collected in 2011 and 2012 to evaluate a text classification tool. The HTML 5 standard was not published as a W3C recommendation before 2014, thus it is to be expected that the documents analyzed here almost exclusively ground on HTML 4 which has been a reference since the end of the 1990s. We wish to compare the results of extrinsic evaluation (e.g. how does the web cleaning tool influence the result of classification) and intrinsic evaluation, e.g. to what extent the extracted content matches the expected outcome. We focus on the latter, not only to find the potentially \"best\" solution but also to provide more insights on the metrics and results of the evaluation. The dataset is available upon request. Table 1 shows some statistics on the corpus, the HTML original files and the manually curated clean versions. We can see two different families of tools:", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 738, |
|
"end": 745, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "\u2022 Recall oriented tools such as HTML2TEXT, INSCRIP-TIS and BP3 KEEPE: they tend to extract much more data than expected", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "\u2022 Precision-oriented tools (all the others) which are really devoted to avoid noise. Table 2 and Table 3 show statistical descriptions of the output for all the tools, as we are looking for misses or near misses. We define almost empty documents as cases where the size of the output represents less than 10% of the size of the clean document. It shows how many times one can Table 3 : Proportion of empty or almost empty (< 10% of the expected size) files for each language be sure that the output clearly does not fit the result one can expect from a text extractor. Obviously, the three tools of the recall-oriented family seldom output empty or almost empty files. Most tools seem to be primarily designed for English and not well-adapted to Chinese. We can see the importance of the JUSTEXT language models when compared to the English mode (JT EN). But the default configuration performs well, except in Chinese for which we had to adapt the configuration 15 . Because of differences in both data sample and processing it is important to choose appropriate metrics which can highlight disparities in tool efficiency. The metrics are described and discussed in the following section.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 85, |
|
"end": 104, |
|
"text": "Table 2 and Table 3", |
|
"ref_id": "TABREF3" |
|
}, |
|
{ |
|
"start": 376, |
|
"end": 383, |
|
"text": "Table 3", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Corpus", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "We present in Table 4 the processing time for each tool. There are noticeable differences between them, partly due to the fact that some tools go far beyond a mere text extraction, most notably NEWS-PLEASE. We included this information as it needs to be taken into account for users that 15 We followed the recommendations from the author: https://github.com/miso-belica/jusText/issues/12. need to process data in real time or to clean big datasets but we won't discuss it thoroughly. We can see that DRAGNET and INSCRIPTIS seem to be the fastest systems, whereas language settings for JUSTEXT affect the results significantly.", |
|
"cite_spans": [ |
|
{ |
|
"start": 288, |
|
"end": 290, |
|
"text": "15", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 21, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Processing time", |
|
"sec_num": "3.1." |
|
}, |
|
{ |
|
"text": "Since the CLEANEVAL campaigns (Baroni et al., 2008) , a state-of-the-art evaluation scheme has been set up and accepted by the community. This metric is based on the following assumption: a text is a sequence of tokens with or without HTML tags and a good content extraction solution should preserve this sequence. The proposition consists in matching the longest common subsequence between a gold standard version of the text and the result given by an extractor. While there are still unmatched zones, the algorithm recursively finds the next longest common subsequence in these zones. The insertion of a sequence not present in the Gold Standard is a False Positive. Conversely, a sequence that is missing in the result of the extractor is a False Negative. This proved to be convenient since classical metrics like recall, precision and f-score can be computed. However, this metric has some flaws. First of all, it has a quadratic complexity due to the use of the Ratcliff/Obershelp algorithm (Ratcliff and Metzener, 1988) . Even on small datasets it is very slow. Secondly, it does not account properly for recall. For instance, copy-pasting the whole content of the document (e.g. with a very naive html-to-text tool) does not achieve 100% recall. As a consequence, we propose to use three additional metrics. Let GT be the Ground Truth and RES be the result of a given extractor and GT tok and RES tok be the sequence of their tokens. Let T P be the number of True Positives, F P the number of False Positives and F N the number of False Negatives. In order to favor comparisons, the tokenization is produced by the exact same code as in CLEANEVAL except for Chinese where a segmentation in characters has been performed. 16 The first one, voc eval, simply compares the vocabulary of GT and RE:", |
|
"cite_spans": [ |
|
{ |
|
"start": 30, |
|
"end": 51, |
|
"text": "(Baroni et al., 2008)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 998, |
|
"end": 1027, |
|
"text": "(Ratcliff and Metzener, 1988)", |
|
"ref_id": "BIBREF31" |
|
}, |
|
{ |
|
"start": 1730, |
|
"end": 1732, |
|
"text": "16", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "\u2022 Let GT voc be the set of GT tok and RES voc the set of RES tok", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "\u2022 TP = |GT voc \u2229 RES voc | \u2022 FP = |RES voc \\ GT voc | \u2022 FN = |GT voc \\ SET voc |", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "The second one, occ eval compares the number of occurrences for each token.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "\u2022 For each token t in GT tok :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "-T P = 0, F P = 0, F N = 0 -Compute f req(t GT ) (resp. f req(t RES )) its frequency in GT (resp. in RES)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "-TP += min(f req(t RES ), f req(t GT ) -FP += f req(t RES ) \u2212 T P -FN += f req(t GT ) \u2212 T P", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "\u2022 For each token u of RES voc \\ GT voc :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "-FP += f req(t RES )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "We also wish to apply other indicators in order to make other types of differences visible among all the tested tools. As such, we opt for two metrics: cosine and euclidean distance. These distances are regularly used for assessing the closeness between two documents (Platt et al., 2010; Buck and Koehn, 2016) , therefore we thought it could yield useful insights in this context. The last one (KL eval) uses the Kullback-Leibler divergence (a measure of relative entropy between two probability distributions):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
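One possible reading of these two distances, computed on raw token-frequency vectors over the union vocabulary (an assumption for illustration; the paper does not specify a weighting such as TF-IDF):

```python
import math
from collections import Counter

def doc_distances(gt_tokens, res_tokens):
    """Cosine and Euclidean distances between the token-frequency
    vectors of the ground truth and the extractor output."""
    voc = set(gt_tokens) | set(res_tokens)
    gt_c, res_c = Counter(gt_tokens), Counter(res_tokens)
    v1 = [gt_c[t] for t in voc]   # frequency vector of GT
    v2 = [res_c[t] for t in voc]  # frequency vector of RES
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    cosine = 1 - dot / (n1 * n2) if n1 and n2 else 1.0
    euclidean = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
    return cosine, euclidean
```

Identical documents yield (0, 0); documents with disjoint vocabularies yield the maximal cosine distance of 1.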
|
{ |
|
"text": "\u2022 V OC = GT voc \u222a RES tok (union of the vocabularies of GT and RES)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "\u2022 Let P gt (resp. P res ) be the probability distribution in GT (resp. RES) of each token of V OC", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "\u2022 for all x in P gt (resp. P res ):", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "if P gt (x) = 0 (resp.P res (x) = 0) * P gt (x) \u2190 10 \u22125 (resp.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "P res (x) \u2190 10 \u22125 ) \u2022 D KL (P g P res ) = \u2212 x\u2208X P (x) log Pg(x) Pres(x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
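The KL eval procedure above can be sketched as follows; zero probabilities are replaced by a small epsilon as in the definition, which means the smoothed distributions no longer sum exactly to one (names are illustrative):

```python
import math
from collections import Counter

def kl_eval(gt_tokens, res_tokens, eps=1e-5):
    """Kullback-Leibler divergence D(P_gt || P_res) over the union
    vocabulary, with zero probabilities replaced by eps."""
    voc = set(gt_tokens) | set(res_tokens)
    def smoothed(tokens):
        # relative frequencies over voc, zeros replaced by eps
        freq = Counter(tokens)
        n = len(tokens)
        return {t: freq[t] / n if freq[t] else eps for t in voc}
    p_gt, p_res = smoothed(gt_tokens), smoothed(res_tokens)
    # note the asymmetry: kl_eval(a, b) != kl_eval(b, a) in general
    return sum(p_gt[t] * math.log(p_gt[t] / p_res[t]) for t in voc)
```

The divergence is 0 for identical token distributions and grows as the extraction drifts away from the ground truth.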
|
{ |
|
"text": "The Kullback-Leibler divergence is not a distance metric since it is not symmetric but it is a way to measure how probability distributions diverge. In our case, we do not need a symmetric measure since we just want to account for the closeness with the GT probability distribution. The first two metrics allow us to compute recall, precision and f-score whereas KL eval yields a single measure: the smaller the divergence, the greater the similarity of the two documents.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "16 See https://github.com/rundimeco/waddle Table 5 lists the results of each tool on the clean-eval evaluation scheme. The precision and recall are means, which is important for the interpretation since documentwise evaluation tends to favor systems that do not yield results much smaller that expected. The f-score is the classical version (with \u03b2 = 1) computed on the mean precision and mean recall. We could also have chosen to compute a mean of the different f-scores but decided it would be strange to have a geometric mean of harmonic means. The first thing we can see is that BP3 is very efficient. READABILITY offers a slightly worse result but with a higher recall whereas JUSTEXT exhibits a drop in recall in comparison. DRAGNET has the highest precision score but with a recall below 60%. The recall-oriented tool family leads to lower scores but we can see that INSCRIPTIS is better than HTML2TEXT in both recall and precision. It seems to be a good tool for task when it is important to get as much content as possible. The clean-eval measures for the quality of web page cleaning is widely used but it uses a convoluted algorithm relying on the alignment of sequences of words. Its rationale is quite straightforward: nobody wants to have a discontinuous version of the data or to have words in the wrong order. But it appears that in HTML code, the sequence of text blocks is in the same order as the original text. One can see there is not much difference between this evaluation and occ eval (Table 7) . There are some differences in ranking concerning the voc eval metric (Table 6 . Therefore, we can say that we can use the occ eval metric which has the advantage of being around ten times faster to compute. Table 8 shows the evaluation with cosine distance, euclidean distance and Kullback-Leibler divergence. Interestingly, this metric seems to be able to highlight systems that show a good balance between silence and noise (like READABILITY and JUSTEXT). 
Moreover, it does not penalize much systems with large recall scores (like INSCRIP-TIS or HTML2TEXT). This is not surprising since, even with smoothing, this measure tends to favor close probabilities in the same order of magnitude, in other words P (x) = 1 * 10 \u22124 is closer to Q(x) = 3 * 10 \u22124 than R(x) = 1 * 10 \u22125 .", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 43, |
|
"end": 50, |
|
"text": "Table 5", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 1509, |
|
"end": 1518, |
|
"text": "(Table 7)", |
|
"ref_id": "TABREF10" |
|
}, |
|
{ |
|
"start": 1590, |
|
"end": 1599, |
|
"text": "(Table 6", |
|
"ref_id": "TABREF9" |
|
}, |
|
{ |
|
"start": 1729, |
|
"end": 1736, |
|
"text": "Table 8", |
|
"ref_id": "TABREF12" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Evaluation Metrics", |
|
"sec_num": "3.2." |
|
}, |
|
{ |
|
"text": "The results on the five languages of the corpus describe major discrepancies between the tools. First of all, Table 9 shows the results obtained on English documents with the clean-eval metric and Table 10 the results for the occ eval metric. Again, we can see that occ eval yields comparable results. Since it is a simpler measure we will focus on this one for the remainder of the article.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 110, |
|
"end": 117, |
|
"text": "Table 9", |
|
"ref_id": "TABREF13" |
|
}, |
|
{ |
|
"start": 197, |
|
"end": 205, |
|
"text": "Table 10", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results by language", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "One can see that the scores are much higher than the scores showed in Tables 5 and 7 , which highlights that English is a very specific case. Our results demonstrate that most tools are primarily designed to process English documents. Furthermore, the tools that perform very well in this subcorpus are not as efficient on the multilingual corpus. So, one cannot rely on results evaluated solely on English to draw conclusions on the efficiency of a tool in real-world multilingual settings. Except the three recall-oriented tools, all yield an : Evaluation with the clean-eval metric (documents in English) , sorted by descending f-score occ eval f-score of 80% and higher. NEWSPAPER outperforms the other tools with an f-score above 90%. GOOSE is slightly below and close to NEWSPLEASE but it is much faster (around 35 times according to Table 4 ). The three tools designed for readability (READABILITY itself but also NEWSPAPER and NEWS-PLEASE) all perform very well. Table 11 introduces the results on the Greek subcorpus. The three best tools perform comparably to the three top tools for English. It is interesting to see that the languagedependent JUSTEXT configuration yields results comparable to the default configuration. NEWSPAPER, GOOSE and obviously JT EN perform poorly on this subcorpus. It is obvious for the latter but it is astonishing that the other two do not perform well. Table 12 shows the results obtained on the Polish subcorpus. We can see that the results are much lower than in English and Greek, both in terms of precision and recall. The best performers on the English subcorpus do not offer comparable results except for NEWSPLEASE andJUSTEXT. It seems harder to extract text from Russian pages since no system is able to achieve above 80% f-score (Table 13) . Again, JUSTEXT is among the best performers. Contrary to the Polish subcorpus, it is BP3 Larg that is the best BP3 configuration. 
We can see again that READABILITY performs very well on other languages than English. Finally, the worst results are related to the Chinese subcorpus (Table 14) . BP3 outperforms the rest of the field by far. One can see that the choice of a tool is much more important for Chinese than for English since many tools result in f-scores below 20%. We can note that it is the only language for which INSCRIPTIS does not achieve 90% recall.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 84, |
|
"text": "Tables 5 and 7", |
|
"ref_id": "TABREF7" |
|
}, |
|
{ |
|
"start": 840, |
|
"end": 847, |
|
"text": "Table 4", |
|
"ref_id": "TABREF5" |
|
}, |
|
{ |
|
"start": 971, |
|
"end": 979, |
|
"text": "Table 11", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1395, |
|
"end": 1403, |
|
"text": "Table 12", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 1780, |
|
"end": 1790, |
|
"text": "(Table 13)", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 2073, |
|
"end": 2083, |
|
"text": "(Table 14)", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results by language", |
|
"sec_num": "3.4." |
|
}, |
|
{ |
|
"text": "The results we presented yield differentiated insights so that it is difficult to give a definitive and universal answer. First of all, if one targets recall and/or speed INSCRIPTIS is clearly the most efficient solution. In general BP3 and READABILITY are the most stable systems and the only ones that perform reasonably well for Chinese.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is there a winner?", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "If we do not consider Chinese, JUSTEXT in its languageindependent setting seems to be the most efficient solution for multilingual corpora. That being said, this setting is much slower and it is not strictly comparable as it uses additional information but most of all it does not appear to perform better. For texts in English GOOSE and NEWSPA-PER outperform the other systems. For Polish, BP3 ART shows a comparable f-score than JUSTEXT but with a better precision. For Russian BP3 LARG is a good solution if one needs precision but JUSTEXT achieves a satisfying trade-off between precision and recall. According to our study, there appears to be no benefit from more intricate machine-learning approaches, DRAGNET does not stand out and does not perform poorly either. However, the amount of additional training data needed to potentially improve its results is a penalty in terms of usability compared to the other solutions for which parameter tuning could lead to improvements much faster. JUSTEXT is such an example where changing settings can be done easily.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Is there a winner?", |
|
"sec_num": "3.5." |
|
}, |
|
{ |
|
"text": "The article focused on a comparative benchmark of opensource tools used on web documents from 2011 and 2012 written in five different languages, along with a discussion of suitable metrics. Content processing is affected by both diatopic and diachronic factors, whereas vocabulary analysis and distance metrics can yield more fine-grained information which complements the CLEANEVAL evaluation standard. Rule-based approaches appear to be more efficient in the long run, all the more since they are both easier to use and to parametrize. Most tools are developed with particular page styles in mind, mostly from the English-speaking world. Our data shows that linguistic factors are most probably reflected in HTML structures, which deeply affects extraction processes. The experiments above highlight the diversity of layouts and web coding practices depending on language and most probably on the country from which a document is published. These discrepancies are reflected by diverging performances so that the right tool has to be chosen accordingly.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and outlook", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "In addition, different eras of web development result in diverging \"HTMLects\". Our corpus provides a snapshot of a past version of the Web which proves to be challenging for some tools. As such, it is useful to assess how data from Web archives can be processed. These findings prompt for further studies on the evaluation of tool robustness with respect to the ever-changing Web. We have reasons to believe that the success of standardized publishing platforms and the consecutive advent of HTML 5 changes the way text is published on the Web, all of which could pave the way for further examinations.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusions and outlook", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "https://chromium.googlesource.com/chromium/dom-distiller 3 https://www.w3.org/TR/2017/REC-html52-20171214/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "https://spectrum.ieee.org/computing/software/the-topprogramming-languages-2019", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Template Detection via Data Mining and its Applications", |
|
"authors": [ |
|
{ |
|
"first": "Z", |
|
"middle": [], |
|
"last": "Bar-Yossef", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Rajagopalan", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proceedings of the 11th International Conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "580--591", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bar-Yossef, Z. and Rajagopalan, S. (2002). Template De- tection via Data Mining and its Applications. In Pro- ceedings of the 11th International Conference on World Wide Web, pages 580-591.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Ad hoc and general-purpose corpus construction from web sources", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Barbaresi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2015, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbaresi, A. (2015). Ad hoc and general-purpose corpus construction from web sources. Ph.D. thesis,\u00c9cole Nor- male Sup\u00e9rieure de Lyon.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Efficient construction of metadataenhanced web corpora", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Barbaresi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th Web as Corpus Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--16", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbaresi, A. (2016). Efficient construction of metadata- enhanced web corpora. In Paul Cook, et al., editors, Pro- ceedings of the 10th Web as Corpus Workshop, pages 7- 16. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The Vast and the Focused: On the need for thematic web and blog corpora", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Barbaresi", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the CMLC-7 workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "29--32", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Barbaresi, A. (2019). The Vast and the Focused: On the need for thematic web and blog corpora. In Piotr Ba\u0144ski, et al., editors, Proceedings of the CMLC-7 workshop, pages 29-32.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Cleaneval: a Competition for Cleaning Web Pages", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Chantree", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Sharoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of LREC", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "638--643", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baroni, M., Chantree, F., Kilgarriff, A., and Sharoff, S. (2008). Cleaneval: a Competition for Cleaning Web Pages. In Proceedings of LREC, pages 638-643. ELRA.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "The WaCky Wide Web: a collection of very large linguistically processed web-crawled corpora", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Baroni", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Bernardini", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Ferraresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Zanchetta", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Language Resources and Evaluation", |
|
"volume": "43", |
|
"issue": "3", |
|
"pages": "209--226", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Baroni, M., Bernardini, S., Ferraresi, A., and Zanchetta, E. (2009). The WaCky Wide Web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "FIASCO: Filtering the internet by automatic subtree classification", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Bauer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Degen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "X", |
|
"middle": [], |
|
"last": "Deng", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Herger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Gasthaus", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Giesbrecht", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Jansen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Kalina", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Kr\u00e4ger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "M\u00e4rtin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Scholler", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Steger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Stemle", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Evert", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Building and Exploring Web Corpora: Proceedings of the 3rd Web as Corpus Workshop (WAC-3)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "111--121", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Bauer, D., Degen, J., Deng, X., Herger, P., Gasthaus, J., Giesbrecht, E., Jansen, L., Kalina, C., Kr\u00e4ger, T., M\u00e4rtin, R., Schmidt, M., Scholler, S., Steger, J., Stemle, E., and Evert, S. (2007). FIASCO: Filtering the internet by au- tomatic subtree classification. In Building and Exploring Web Corpora: Proceedings of the 3rd Web as Corpus Workshop (WAC-3), pages 111-121.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Introduction: Digital humanities, the web, and national web domains", |
|
"authors": [ |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Br\u00fcgger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Laursen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "The Historical Web and Digital Humanities", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--9", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Br\u00fcgger, N. and Laursen, D. (2019). Introduction: Digi- tal humanities, the web, and national web domains. In The Historical Web and Digital Humanities, pages 1-9. Routledge.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Quick and reliable document alignment via TF/IDF-weighted cosine distance", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Buck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Koehn", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the First Conference on Machine Translation", |
|
"volume": "2", |
|
"issue": "", |
|
"pages": "672--678", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Buck, C. and Koehn, P. (2016). Quick and reliable doc- ument alignment via TF/IDF-weighted cosine distance. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 672- 678, Berlin, Germany, August. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "VIPS: a Vision-based Page Segmentation Algorithm", |
|
"authors": [ |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Yu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J.-R", |
|
"middle": [], |
|
"last": "Wen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W.-Y", |
|
"middle": [], |
|
"last": "Ma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cai, D., Yu, S., Wen, J.-R., and Ma, W.-Y. (2003). VIPS: a Vision-based Page Segmentation Algorithm. Technical report, Microsoft Research.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "HTML web content extraction using paragraph tags", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Carey", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Manic", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "25th International Symposium on Industrial Electronics (ISIE)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1099--1105", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Carey, H. J. and Manic, M. (2016). HTML web content ex- traction using paragraph tags. In 25th International Sym- posium on Industrial Electronics (ISIE), pages 1099- 1105. IEEE.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Die Korpusplattform des", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Geyken", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Barbaresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Didakowski", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Jurish", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Wiegand", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Lemnitzer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Digitalen W\u00f6rterbuchs der deutschen Sprache\" (DWDS). Zeitschrift f\u00fcr germanistische Linguistik", |
|
"volume": "45", |
|
"issue": "2", |
|
"pages": "327--344", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Geyken, A., Barbaresi, A., Didakowski, J., Jurish, B., Wiegand, F., and Lemnitzer, L. (2017). Die Kor- pusplattform des \"Digitalen W\u00f6rterbuchs der deutschen Sprache\" (DWDS). Zeitschrift f\u00fcr germanistische Lin- guistik, 45(2):327-344.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "SpeedReader: Reader Mode Made Fast and Private", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Ghasemisharif", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Snyder", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Aucinas", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Livshits", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "Proceedings of the World Wide Web Conference", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "526--537", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ghasemisharif, M., Snyder, P., Aucinas, A., and Livshits, B. (2019). SpeedReader: Reader Mode Made Fast and Private. In Proceedings of the World Wide Web Confer- ence, pages 526-537.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Evaluating content extraction on HTML documents", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Gottron", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Proceedings of the 2nd International Conference on Internet Technologies and Applications", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "123--132", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Gottron, T. (2007). Evaluating content extraction on HTML documents. In Proceedings of the 2nd Interna- tional Conference on Internet Technologies and Applica- tions, pages 123-132.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "DOM-based content extraction of HTML documents", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the 12th international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "207--214", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "DOM-based content extraction of HTML documents. In Proceedings of the 12th international conference on World Wide Web, pages 207-214.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "C4Corpus: Multilingual Web-size corpus with free license", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "914--922", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "C4Corpus: Multilingual Web-size corpus with free license. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 914-922.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "news-please: A generic news crawler and extractor", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Hamborg", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Meuschke", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Breitinger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Gipp", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2017, |
|
"venue": "Proceedings of the 15th International Symposium of Information Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "218--223", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hamborg, F., Meuschke, N., Breitinger, C., and Gipp, B. (2017). news-please: A generic news crawler and extrac- tor. In Maria Gaede, et al., editors, Proceedings of the 15th International Symposium of Information Science, pages 218-223.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Mining web informative structures and contents based on entropy analysis", |
|
"authors": [ |
|
{ |
|
"first": "H.-Y", |
|
"middle": [], |
|
"last": "Kao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S.-H", |
|
"middle": [], |
|
"last": "Lin", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J.-M", |
|
"middle": [], |
|
"last": "Ho", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M.-S", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "IEEE Transactions on Knowledge and Data Engineering", |
|
"volume": "16", |
|
"issue": "1", |
|
"pages": "41--55", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kao, H.-Y., Lin, S.-H., Ho, J.-M., and Chen, M.-S. (2004). Mining web informative structures and contents based on entropy analysis. IEEE Transactions on Knowledge and Data Engineering, 16(1):41-55.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "Googleology is bad science", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Kilgarriff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2007, |
|
"venue": "Computational Linguistics", |
|
"volume": "33", |
|
"issue": "", |
|
"pages": "147--151", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kilgarriff, A. (2007). Googleology is bad science. Com- putational Linguistics, 33(1):147-151.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "A Densitometric Approach to Web Page Segmentation", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Kohlsch\u00fctter", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Nejdl", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 17th ACM Conference on Information and Knowledge Management", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1173--1182", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kohlsch\u00fctter, C. and Nejdl, W. (2008). A Densitometric Approach to Web Page Segmentation. In Proceedings of the 17th ACM Conference on Information and Knowl- edge Management, pages 1173-1182.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Boilerplate detection using shallow text features", |
|
"authors": [], |
|
"year": null, |
|
"venue": "Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM '10", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "441--450", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Boilerplate detection using shallow text features. In Pro- ceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM '10, pages 441- 450.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "A New Proposal for Evaluating Web Page Cleaning Tools", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Lejeune", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Zhu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lejeune, G. and Zhu, L. (2018). A New Proposal for Eval- uating Web Page Cleaning Tools. Computaci\u00f3n y Sis- temas, 22(4).", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Daniel: Language independent character-based news surveillance", |
|
"authors": [ |
|
{ |
|
"first": "G", |
|
"middle": [], |
|
"last": "Lejeune", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Brixtel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Doucet", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Lucas", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "International Conference on NLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "64--75", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Lejeune, G., Brixtel, R., Doucet, A., and Lucas, N. (2012). Daniel: Language independent character-based news surveillance. In International Conference on NLP, pages 64-75. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "Content extraction using diverse feature sets", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Peters", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Lecocq", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 22nd International Conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "89--90", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Peters, M. E. and Lecocq, D. (2013). Content extraction using diverse feature sets. In Proceedings of the 22nd International Conference on World Wide Web, pages 89-90.", |
|
"links": null |
|
}, |
|
"BIBREF27": { |
|
"ref_id": "b27", |
|
"title": "Translingual document representations from discriminative projections", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Platt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Toutanova", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Yih", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "251--261", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Platt, J., Toutanova, K., and Yih, W.-t. (2010). Translingual document representations from discriminative projections. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 251-261, Cambridge, MA, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF28": { |
|
"ref_id": "b28", |
|
"title": "Removing boilerplate and duplicate content from web corpora", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Pomik\u00e1lek", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Masaryk University", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Pomik\u00e1lek, J. (2011). Removing boilerplate and duplicate content from web corpora. Ph.D. thesis, Masaryk University.", |
|
"links": null |
|
}, |
|
"BIBREF29": { |
|
"ref_id": "b29", |
|
"title": "Hybrid model of content extraction", |
|
"authors": [ |
|
{ |
|
"first": "P", |
|
"middle": [ |
|
"A R" |
|
], |
|
"last": "Qureshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "N", |
|
"middle": [], |
|
"last": "Memon", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Journal of Computer and System Sciences", |
|
"volume": "78", |
|
"issue": "4", |
|
"pages": "1248--1257", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Qureshi, P. A. R. and Memon, N. (2012). Hybrid model of content extraction. Journal of Computer and System Sciences, 78(4):1248-1257.", |
|
"links": null |
|
}, |
|
"BIBREF30": { |
|
"ref_id": "b30", |
|
"title": "Main Content Detection in HTML Journal Articles", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Rae", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Kim", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Le", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "G", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Thoma", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "Proceedings of the ACM Symposium on Document Engineering 2018", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--4", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Rae, A. R., Kim, J., Le, D., and Thoma, G. R. (2018). Main Content Detection in HTML Journal Articles. In Proceedings of the ACM Symposium on Document Engineering 2018, pages 1-4, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF31": { |
|
"ref_id": "b31", |
|
"title": "Pattern Matching: The Gestalt Approach", |
|
"authors": [ |
|
{ |
|
"first": "J", |
|
"middle": [ |
|
"W" |
|
], |
|
"last": "Ratcliff", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Metzener", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1988, |
|
"venue": "Dr. Dobb's Journal", |
|
"volume": "13", |
|
"issue": "7", |
|
"pages": "46", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ratcliff, J. W. and Metzener, D. E. (1988). Pattern Matching: The Gestalt Approach. Dr. Dobb's Journal, 13(7):46.", |
|
"links": null |
|
}, |
|
"BIBREF32": { |
|
"ref_id": "b32", |
|
"title": "The Good, the Bad, and the Hazy: Design Decisions in Web Corpus Construction", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sch\u00e4fer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Barbaresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Bildhauer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2013, |
|
"venue": "Proceedings of the 8th Web as Corpus Workshop", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "7--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sch\u00e4fer, R., Barbaresi, A., and Bildhauer, F. (2013). The Good, the Bad, and the Hazy: Design Decisions in Web Corpus Construction. In Proceedings of the 8th Web as Corpus Workshop, pages 7-15.", |
|
"links": null |
|
}, |
|
"BIBREF33": { |
|
"ref_id": "b33", |
|
"title": "Focused Web Corpus Crawling", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sch\u00e4fer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Barbaresi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Bildhauer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2014, |
|
"venue": "Proceedings of the 9th Web as Corpus workshop (WAC-9)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "9--15", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sch\u00e4fer, R., Barbaresi, A., and Bildhauer, F. (2014). Focused Web Corpus Crawling. In Proceedings of the 9th Web as Corpus workshop (WAC-9), pages 9-15.", |
|
"links": null |
|
}, |
|
"BIBREF34": { |
|
"ref_id": "b34", |
|
"title": "CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws", |
|
"authors": [ |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Sch\u00e4fer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC'16)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "4500--4504", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sch\u00e4fer, R. (2016). CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC'16), pages 4500-4504.", |
|
"links": null |
|
}, |
|
"BIBREF35": { |
|
"ref_id": "b35", |
|
"title": "Victor: the Web-Page Cleaning Tool", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Spousta", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Marek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Pecina", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "4th Web as Corpus Workshop (WAC-4)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "12--17", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Spousta, M., Marek, M., and Pecina, P. (2008). Victor: the Web-Page Cleaning Tool. In 4th Web as Corpus Workshop (WAC-4), pages 12-17.", |
|
"links": null |
|
}, |
|
"BIBREF36": { |
|
"ref_id": "b36", |
|
"title": "DOM-based content extraction via text density", |
|
"authors": [ |
|
{ |
|
"first": "F", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Song", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Liao", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "245--254", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Sun, F., Song, D., and Liao, L. (2011). DOM-based content extraction via text density. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 245-254.", |
|
"links": null |
|
}, |
|
"BIBREF37": { |
|
"ref_id": "b37", |
|
"title": "Web2text: Deep structured boilerplate removal", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Vogels", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "O.-E", |
|
"middle": [], |
|
"last": "Ganea", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Eickhoff", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "European Conference on Information Retrieval", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "167--179", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Vogels, T., Ganea, O.-E., and Eickhoff, C. (2018). Web2text: Deep structured boilerplate removal. In European Conference on Information Retrieval, pages 167-179. Springer.", |
|
"links": null |
|
}, |
|
"BIBREF38": { |
|
"ref_id": "b38", |
|
"title": "CETR: content extraction via tag ratios", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Weninger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [ |
|
"H" |
|
], |
|
"last": "Hsu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "J", |
|
"middle": [], |
|
"last": "Han", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of the 19th international conference on World Wide Web", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "971--980", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weninger, T., Hsu, W. H., and Han, J. (2010). CETR: content extraction via tag ratios. In Proceedings of the 19th international conference on World Wide Web, pages 971-980.", |
|
"links": null |
|
}, |
|
"BIBREF39": { |
|
"ref_id": "b39", |
|
"title": "Web Content Extraction - a Meta-Analysis of its Past and Thoughts on its Future", |
|
"authors": [ |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Weninger", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "R", |
|
"middle": [], |
|
"last": "Palacios", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "V", |
|
"middle": [], |
|
"last": "Crescenzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Gottron", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "P", |
|
"middle": [], |
|
"last": "Merialdo", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2016, |
|
"venue": "ACM SIGKDD Explorations Newsletter", |
|
"volume": "17", |
|
"issue": "2", |
|
"pages": "17--23", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weninger, T., Palacios, R., Crescenzi, V., Gottron, T., and Merialdo, P. (2016). Web Content Extraction - a Meta-Analysis of its Past and Thoughts on its Future. ACM SIGKDD Explorations Newsletter, 17(2):17-23.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF1": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Corpus statistics on the original HTML pages and their manually cleaned versions. \u2022 DRAG for DRAGNET;", |
|
"type_str": "table" |
|
}, |
|
"TABREF3": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Data</td><td>el</td><td>en</td><td>pl</td><td>ru</td><td>zh</td></tr><tr><td>BP3</td><td colspan=\"2\">31.9% 6.9%</td><td>2.2%</td><td>5.7%</td><td>1.0%</td></tr><tr><td>BP3 Art</td><td colspan=\"2\">30.8% 6.9%</td><td>2.2%</td><td>5.3%</td><td>0.7%</td></tr><tr><td>BP3 KeepE</td><td>0.0%</td><td>3.6%</td><td>0.7%</td><td>1.9%</td><td>0.0%</td></tr><tr><td>BP3 Larg</td><td colspan=\"2\">30.8% 6.9%</td><td>2.2%</td><td>5.3%</td><td>1.0%</td></tr><tr><td colspan=\"5\">DRAGNET 49.1% 1.3% 10.9% 23.2%</td><td>3.4%</td></tr><tr><td>GOOSE</td><td colspan=\"5\">99.3% 1.5% 11.7% 65.4% 28.0%</td></tr><tr><td>HTML2T</td><td>0.0%</td><td>0.0%</td><td>0.0%</td><td>0.0%</td><td>0.0%</td></tr><tr><td>INSCRI</td><td>0.0%</td><td>0.0%</td><td>0.0%</td><td>0.0%</td><td>0.0%</td></tr><tr><td>JT</td><td>1.8%</td><td>4.2%</td><td>0.0%</td><td>0.4%</td><td>28.7%</td></tr><tr><td>JT en</td><td colspan=\"5\">98.2% 4.2% 99.6% 99.6% 29.2%</td></tr><tr><td>JT langid</td><td>1.8%</td><td>4.2%</td><td>0.0%</td><td>0.4%</td><td>28.7%</td></tr><tr><td>NEWSP</td><td colspan=\"5\">95.2% 1.0% 22.6% 95.4% 29.2%</td></tr><tr><td>NEWSP</td><td colspan=\"2\">46.5% 1.3%</td><td>5.1%</td><td colspan=\"2\">65.0% 92.9%</td></tr><tr><td>READ</td><td>0.7%</td><td>1.3%</td><td>2.2%</td><td>0.4%</td><td>17.9%</td></tr></table>", |
|
"text": "Statistics on the output of the different tools and configurations", |
|
"type_str": "table" |
|
}, |
|
"TABREF5": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF7": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Evaluation with the clean-eval metric, sorted by descending f-score (computed on the mean precision and the mean recall)", |
|
"type_str": "table" |
|
}, |
|
"TABREF9": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>Tool</td><td>f-score</td><td>precision</td><td>recall</td></tr><tr><td>BP3 Art</td><td colspan=\"3\">76.38 80.60 (\u00b124) 72.57 (\u00b133)</td></tr><tr><td>BP3 Larg</td><td colspan=\"3\">74.54 82.90 (\u00b124) 67.72 (\u00b133)</td></tr><tr><td>JT</td><td colspan=\"3\">74.13 81.36 (\u00b123) 68.08 (\u00b137)</td></tr><tr><td>JT langid</td><td colspan=\"3\">73.73 81.50 (\u00b123) 67.31 (\u00b137)</td></tr><tr><td>READ</td><td colspan=\"3\">73.25 72.43 (\u00b128) 74.09 (\u00b130)</td></tr><tr><td>BP3</td><td colspan=\"3\">72.50 74.27 (\u00b124) 70.81 (\u00b132)</td></tr><tr><td>DRAGNET</td><td colspan=\"3\">67.09 86.82 (\u00b121) 54.67 (\u00b137)</td></tr><tr><td>NPLEASE</td><td colspan=\"3\">66.64 92.03 (\u00b117) 52.23 (\u00b144)</td></tr><tr><td>GOOSE</td><td colspan=\"3\">57.74 89.42 (\u00b119) 42.64 (\u00b142)</td></tr><tr><td>NPAPER</td><td colspan=\"3\">54.78 88.68 (\u00b118) 39.63 (\u00b143)</td></tr><tr><td colspan=\"4\">BP3 KeepE 42.02 27.41 (\u00b121) 89.98 (\u00b118)</td></tr><tr><td>JT en</td><td colspan=\"3\">41.35 88.09 (\u00b118) 27.01 (\u00b139)</td></tr><tr><td>INSCRI</td><td colspan=\"3\">37.10 23.22 (\u00b118) 92.22 (\u00b113)</td></tr><tr><td>HTML2T</td><td colspan=\"3\">33.45 20.56 (\u00b117) 89.80 (\u00b114)</td></tr></table>", |
|
"text": "Evaluation with the voc eval metric", |
|
"type_str": "table" |
|
}, |
|
"TABREF10": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Evaluation with the occ eval metric", |
|
"type_str": "table" |
|
}, |
|
"TABREF12": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"4\">: Evaluation with the KL eval metric, euclidean</td></tr><tr><td colspan=\"2\">and cosine distances</td><td/><td/></tr><tr><td>Tool</td><td>f-score</td><td>precision</td><td>recall</td></tr><tr><td>NPAPER</td><td colspan=\"3\">90.36 90.39 (\u00b117) 90.33 (\u00b115)</td></tr><tr><td>GOOSE</td><td colspan=\"3\">89.76 92.01 (\u00b117) 87.62 (\u00b116)</td></tr><tr><td>DRAGNET</td><td colspan=\"3\">88.01 87.80 (\u00b121) 88.23 (\u00b119)</td></tr><tr><td>NPLEASE</td><td colspan=\"3\">87.83 86.86 (\u00b116) 88.83 (\u00b115)</td></tr><tr><td>READ</td><td colspan=\"3\">86.21 83.50 (\u00b119) 89.11 (\u00b116)</td></tr><tr><td>BP3 Art</td><td colspan=\"3\">85.95 86.18 (\u00b118) 85.71 (\u00b128)</td></tr><tr><td>JT</td><td colspan=\"3\">83.63 82.04 (\u00b123) 85.29 (\u00b125)</td></tr><tr><td>BP3 Larg</td><td colspan=\"3\">82.92 87.26 (\u00b120) 78.98 (\u00b130)</td></tr><tr><td>JT langid</td><td colspan=\"3\">82.68 82.03 (\u00b124) 83.34 (\u00b126)</td></tr><tr><td>JT en</td><td colspan=\"3\">82.68 82.03 (\u00b124) 83.34 (\u00b126)</td></tr><tr><td>BP3</td><td colspan=\"3\">81.40 77.32 (\u00b120) 85.94 (\u00b126)</td></tr><tr><td colspan=\"4\">BP3 KeepE 52.38 36.36 (\u00b121) 93.65 (\u00b120)</td></tr><tr><td>INSCRI</td><td colspan=\"3\">45.74 29.81 (\u00b117) 98.24 (\u00b14)</td></tr><tr><td>HTML2T</td><td colspan=\"3\">44.17 28.70 (\u00b117) 95.82 (\u00b17)</td></tr></table>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF13": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF15": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"4\">: Evaluation with the occ eval metric (docu-</td></tr><tr><td>ments in English)</td><td/><td/><td/></tr><tr><td>Tool</td><td>f-score</td><td>precision</td><td>recall</td></tr><tr><td>JT langid</td><td colspan=\"3\">88.95 90.41 (\u00b121) 87.54 (\u00b121)</td></tr><tr><td>JT</td><td colspan=\"3\">88.80 89.97 (\u00b121) 87.66 (\u00b121)</td></tr><tr><td>READ</td><td colspan=\"3\">86.62 83.03 (\u00b119) 90.54 (\u00b111)</td></tr><tr><td>BP3 Art</td><td colspan=\"3\">74.63 88.17 (\u00b119) 64.70 (\u00b144)</td></tr><tr><td>BP3 Larg</td><td colspan=\"3\">74.58 89.56 (\u00b118) 63.90 (\u00b143)</td></tr><tr><td>BP3</td><td colspan=\"3\">74.17 87.60 (\u00b117) 64.31 (\u00b144)</td></tr><tr><td>NPLEASE</td><td colspan=\"3\">65.07 96.00 (\u00b112) 49.21 (\u00b147)</td></tr><tr><td colspan=\"4\">BP3 KeepE 51.20 34.79 (\u00b116) 96.92 (\u00b15)</td></tr><tr><td>INSCRI</td><td colspan=\"3\">50.66 34.21 (\u00b115) 97.56 (\u00b15)</td></tr><tr><td>DRAGNET</td><td colspan=\"3\">43.82 93.94 (\u00b115) 28.57 (\u00b133)</td></tr><tr><td>HTML2T</td><td colspan=\"3\">41.03 26.06 (\u00b114) 96.39 (\u00b15)</td></tr><tr><td>NPAPER</td><td>5.58</td><td colspan=\"2\">92.98 (\u00b118) 2.88 (\u00b112)</td></tr><tr><td>GOOSE</td><td>2.98</td><td>95.11 (\u00b112)</td><td>1.51 (\u00b16)</td></tr><tr><td>JT en</td><td>2.33</td><td>94.10 (\u00b116)</td><td>1.18 (\u00b11)</td></tr></table>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF16": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"4\">: Evaluation with the occ eval metric (docu-</td></tr><tr><td>ments in Greek)</td><td/><td/><td/></tr><tr><td>Tool</td><td>f-score</td><td>precision</td><td>recall</td></tr><tr><td>BP3 Art</td><td colspan=\"3\">84.20 85.11 (\u00b122) 83.32 (\u00b126)</td></tr><tr><td>NPLEASE</td><td colspan=\"3\">83.13 86.02 (\u00b121) 80.44 (\u00b129)</td></tr><tr><td>JT</td><td colspan=\"3\">82.47 77.71 (\u00b125) 87.85 (\u00b117)</td></tr><tr><td>JT langid</td><td colspan=\"3\">82.15 77.89 (\u00b125) 86.90 (\u00b118)</td></tr><tr><td>BP3 Larg</td><td colspan=\"3\">81.40 86.24 (\u00b123) 77.07 (\u00b128)</td></tr><tr><td>DRAGNET</td><td colspan=\"3\">79.79 85.84 (\u00b121) 74.54 (\u00b133)</td></tr><tr><td>READ</td><td colspan=\"3\">79.23 77.50 (\u00b123) 81.03 (\u00b124)</td></tr><tr><td>BP3</td><td colspan=\"3\">78.11 73.03 (\u00b124) 83.96 (\u00b123)</td></tr><tr><td>GOOSE</td><td colspan=\"3\">74.84 86.32 (\u00b125) 66.05 (\u00b135)</td></tr><tr><td>NPAPER</td><td colspan=\"3\">73.86 85.04 (\u00b121) 65.28 (\u00b141)</td></tr><tr><td colspan=\"4\">BP3 KeepE 48.42 32.69 (\u00b118) 93.35 (\u00b114)</td></tr><tr><td>INSCRI</td><td colspan=\"3\">43.28 28.00 (\u00b116) 95.28 (\u00b111)</td></tr><tr><td>HTML2T</td><td colspan=\"3\">36.06 22.45 (\u00b115) 91.57 (\u00b111)</td></tr><tr><td>JT en</td><td>1.96</td><td>91.06 (\u00b116)</td><td>0.99 (\u00b11)</td></tr></table>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF17": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td>: Evaluation with the occ eval metric (docu-</td></tr><tr><td>ments in Polish)</td></tr></table>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF18": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"4\">: Evaluation with the occ eval metric (docu-</td></tr><tr><td>ments in Russian)</td><td/><td/><td/></tr><tr><td>Tool</td><td>f-score</td><td>precision</td><td>recall</td></tr><tr><td>BP3 Art</td><td colspan=\"3\">63.30 71.28 (\u00b124) 56.93 (\u00b122)</td></tr><tr><td>BP3 Larg</td><td colspan=\"3\">57.95 72.53 (\u00b124) 48.26 (\u00b122)</td></tr><tr><td>BP3</td><td colspan=\"3\">55.20 70.08 (\u00b125) 45.53 (\u00b119)</td></tr><tr><td>DRAGNET</td><td colspan=\"3\">44.53 81.81 (\u00b123) 30.59 (\u00b118)</td></tr><tr><td>READ</td><td colspan=\"3\">42.36 48.00 (\u00b132) 37.91 (\u00b128)</td></tr><tr><td>GOOSE</td><td colspan=\"3\">20.60 82.54 (\u00b117) 11.77 (\u00b19)</td></tr><tr><td>JT langid</td><td colspan=\"3\">19.19 82.32 (\u00b117) 10.86 (\u00b15)</td></tr><tr><td>JT</td><td colspan=\"3\">19.19 82.32 (\u00b117) 10.86 (\u00b15)</td></tr><tr><td>JT en</td><td colspan=\"3\">19.18 82.80 (\u00b117) 10.84 (\u00b15)</td></tr><tr><td>NPAPER</td><td colspan=\"3\">19.17 82.72 (\u00b117) 10.84 (\u00b15)</td></tr><tr><td colspan=\"4\">BP3 KeepE 19.08 10.85 (\u00b115) 78.94 (\u00b118)</td></tr><tr><td>HTML2T</td><td>13.83</td><td colspan=\"2\">7.62 (\u00b111) 74.87 (\u00b115)</td></tr><tr><td>NPLEASE</td><td colspan=\"3\">13.31 97.52 (\u00b112) 7.14 (\u00b113)</td></tr><tr><td>INSCRI</td><td>12.97</td><td colspan=\"2\">7.06 (\u00b110) 79.52 (\u00b114)</td></tr></table>", |
|
"text": "", |
|
"type_str": "table" |
|
}, |
|
"TABREF19": { |
|
"html": null, |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "Evaluation with the occ eval metric (documents in Chinese), evaluation by character n-grams", |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |