{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T04:33:19.503902Z"
    },
    "title": "Current Challenges in Web Corpus Building",
    "authors": [
        {
            "first": "Milo\u0161",
            "middle": [],
            "last": "Jakub\u00ed\u010dek",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Lexical Computing & Masaryk University",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Vojt\u011bch",
            "middle": [],
            "last": "Kov\u00e1\u0159",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Lexical Computing & Masaryk University",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "Pavel",
            "middle": [],
            "last": "Rychl\u00fd",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Lexical Computing & Masaryk University",
                "location": {}
            },
            "email": ""
        },
        {
            "first": "V\u00edt",
            "middle": [],
            "last": "Suchomel",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Lexical Computing & Masaryk University",
                "location": {}
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "In this paper we discuss some of the current challenges in web corpus building that we faced in recent years when expanding the corpora in Sketch Engine. The purpose of the paper is to provide an overview and raise discussion on possible solutions, rather than to bring ready solutions to the readers. For every issue we try to assess its severity and briefly discuss possible mitigation options.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "In this paper we discuss some of the current challenges in web corpus building that we faced in recent years when expanding the corpora in Sketch Engine. The purpose of the paper is to provide an overview and raise discussion on possible solutions, rather than to bring ready solutions to the readers. For every issue we try to assess its severity and briefly discuss possible mitigation options.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Web corpus building has been the major way of obtaining large text collections for almost two decades now (see (Kilgarriff and Grefenstette, 2003) for a starting point and (Sch\u00e4fer and Bildhauer, 2013) for a current overview) and many web corpora have been built in isolation (using methods such as WebBootCat (Baroni et al., )) or as part of a bigger corpus family such as (Jakub\u00ed\u010dek et al., 2013), (Benko, 2014) or (Biemann et al., 2007). Web corpora have been used as the primary source of linguistic evidence for many purposes. Besides linguistic research itself, the main areas of application have included the development and evaluation of natural language processing tools and methods, computer lexicography, and the practical analysis of large texts for varying tasks such as trend or topic monitoring. Building corpora from the web has become popular for all the advantages it brings: low building costs, high speed of building and the prospect of obtaining a very large dataset that covers the Zipfian distribution of language well are reasons that are still very relevant, perhaps even more than before, as NLP becomes more widespread and is used in projects on a daily basis, and many NLP methods (such as word embeddings) rely on large text corpora. Sadly, most of the disadvantages of using web corpora have not been overcome in those 20 years: web corpora still provide only a very limited set of metadata, it is still difficult to clean web content automatically, and on the legal front there has not been any significant progress that would clarify the legal status of the datasets 1 . In this paper we are not going to discuss the advantages and disadvantages of web corpus building but take a very practical look at the biggest obstacles to web corpus building as of 2020. The starting point for all reasoning is that one aims at building a corpus from the web which should be as big as possible and as clean as possible, where by clean we merely mean technical cleaning: yielding well-formed and well-encoded documents containing human-produced natural language texts, ideally (but not necessarily) split into paragraphs or sentences. The issues that we mention are basically those that we have faced in recent years when building corpora for the TenTen corpus family programme (Jakub\u00ed\u010dek et al., 2013). 1 In the European Union.",
                "cite_spans": [
                    {
                        "start": 111,
                        "end": 146,
                        "text": "(Kilgarriff and Grefenstette, 2003)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 172,
                        "end": 201,
                        "text": "(Sch\u00e4fer and Bildhauer, 2013)",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 376,
                        "end": 400,
                        "text": "(Jakub\u00ed\u010dek et al., 2013)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 403,
                        "end": 416,
                        "text": "(Benko, 2014)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 420,
                        "end": 442,
                        "text": "(Biemann et al., 2007)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 2290,
                        "end": 2314,
                        "text": "(Jakub\u00ed\u010dek et al., 2013)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "In the US, the case law on related projects like Google Books (https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,_Inc.) paved the way for more relaxed web corpus usage.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "2.1. Machine Translation 2.1.1. The problem Machine translation is ubiquitous on the web. Surprisingly, it is rather the webs of low-resourced languages that are affected the most by machine translation: the quality of machine translation into these languages is often very poor, but the market size simply does not make the case for human translation. Website owners are therefore confronted with a rather simple choice: either no content for that particular low-resourced language, or (poor, but) machine-translated content. Where reputation does not play a big role (and that means: hobbyists, fans, cheap sales websites, blog platforms etc.), the choice is frequently to use machine translation, whatever its quality may be.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Current Issues",
                "sec_num": "2."
            },
            {
                "text": "Detecting machine-translated content automatically is very difficult and there are no language-independent methods with reasonable precision-recall trade-offs. Recall that this is in the first place a problem for low-resourced languages, which typically suffer from limited online content anyway. Thus applying any high-recall/low-precision strategy is likely to harm the size of the resulting dataset significantly, and the most efficient way lies in using semi-automated methods: typically this involves hiring a native speaker for several days, checking the corpus wordlist and the most represented web domains to discover \"nests\" of machine-translated content, and removing the whole domains. The general rule of thumb is: if a website offers many language versions, it is likely that most or all of them are machine translated. If there is an Esperanto version, it is always machine translated. In one of the most recent crawls of Estonian, which was carried out at the end of 2019 to create the Estonian National Corpus 2019 (Kallas et al., 2015) in collaboration with the Institute for Estonian Language, we generated a list of the 600 most represented web domains, which were manually inspected, and 110 sites were removed from the corpus since their content was computer generated. Another observation was made when cleaning a Lao web corpus from 2019. 761 of 991 (77 %) domains with URI paths beginning with \"/lo/\" were identified as \"bad language\" by a Lao native speaker 2 based on samples of texts from particular domains. Since many of these bad language samples looked machine translated, our hypothesis that the URI path can indicate machine-translated content was confirmed. Together with a manual inspection of the most represented domains in the corpus, approximately 9 % of tokens in the corpus were removed.",
                "cite_spans": [
                    {
                        "start": 1007,
                        "end": 1028,
                        "text": "(Kallas et al., 2015)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.1.2."
            },
            {
                "text": "The severity of this issue is very high. The whole point of building corpora is to provide authentic evidence of language use, and anything that hampers this idea represents a serious problem.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "2.1.3."
            },
            {
                "text": "2.2.1. The problem Spam represents a similar issue to machine-generated content in that it also brings unnatural and thus unwanted content into the corpus. While it may not necessarily be automatically generated, it frequently is, and spammers have been improving their text generation algorithms (including by means of applying NLP methods) during their long battle with search engines over the past years. There are, however, notable differences from machine-translated content that have a huge impact on how this should be dealt with. While machine translation is used permanently, intentionally (by website owners) and legally, spam typically occurs on someone's website as its temporary, illegal and random misuse. Such hacked websites are then used as a honeypot to bring the user to some (less temporary, but also not very permanent) target site. The illegality is also related to the topic of spamming: it tends to cover areas that are (in a particular country) prohibited or massively regulated, such as drugs, pharmacy, lottery, guns, loans and mortgages or prostitution. The topic heavily depends on the country and its regulations. The temporal aspect of spam may be crucial to fighting it successfully. In our experience it was almost never possible to access a spam site several weeks after it had been crawled, because it had already been cleaned and either shut down or its previous content restored. This is also likely the reason why search engines seem to fight spam rather well by analyzing its dynamic and temporary properties, but for web crawling by means of taking a static snapshot of the web, spam is still a serious issue. During the past five years we have been regularly discovering spam sites where it took several minutes even for a trained NLP engineer to conclude that a site was a spam site. One such spam site was mimicking a regular institutional website (such as that of a U.S. university) including all its typical parts (courses, enrollment etc.), but starting at level 3 or 4 of nested links on the website, spam content was found which was completely unrelated to the institution. Notably, the institution was completely made up, so this was not a hacked institutional website, but a hacked domain with completely invented content.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Spam",
                "sec_num": "2.2."
            },
            {
                "text": "Automatic mitigation strategies may focus on the temporal aspects of spamming and involve:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.2.2."
            },
            {
                "text": "\u2022 starting the crawl from a set of trustworthy seed domains obtained from web directories such as curlie.org (formerly dmoz.org) or from lists of newspapers (e.g. onlinenewspapers.com), which are less likely to get hacked",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.2.2."
            },
            {
                "text": "\u2022 measuring domain distance from the seed domains and not deviating too far from them",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.2.2."
            },
            {
                "text": "\u2022 using hostname heuristics (long hostnames consisting of multiple words are likely to be computer generated and to contain spam)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.2.2."
            },
            {
                "text": "Manual strategies are similar to those for machine translation, but thanks to the fact that spam is, unlike machine-translated content, topical, one can use more analytic approaches than just looking up the most frequent domains. Inspecting the usual suspects (like viagra, loan, lottery, . . . ) by means of collocations (in our case, word sketches) or other analytical tools can quickly reveal a lot of spam content. A complete solution to this problem would basically involve the same effort that search engines put into it, which is typically not feasible for a small company or NLP department. Out of all the aspects of spam, its temporality makes it most vulnerable: having most of the web indexed and permanently checking for updates allows the crawler to temporarily suspend domains whose content suddenly changes completely or significantly, and this strategy could largely prevent spam from getting into corpora without introducing any biases.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.2.2."
            },
            {
                "text": "This is a very severe issue for the same reasons as those given for machine-translated texts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "2.2.3."
            },
            {
                "text": "2.3.1. The problem Web crawling began as soon as the Internet was sufficiently populated with texts. At that time, the Internet consisted mostly of (plain) texts and, as it became widespread in the developed world, everybody - institutions, companies, shops - went online, providing lots of natural language usage. Unfortunately, in many less developed countries where the Internet became widespread later, going online meant creating a social network profile. As a result, in these countries the Internet outside of social networks is simply much smaller and many companies and institutions have merely a Facebook page. Thus, while the Internet there is now easily accessible and widespread and those countries are heavily populated, one only gets a fraction of the content by crawling publicly accessible websites compared to similarly sized (in terms of native speakers) developed countries, e.g. in Europe. An example is Laos, a country with over 7 million citizens of which over 25 % are online 3 , where after extensive crawling for about half a year we were only able to obtain (after cleaning) a corpus of about 100 million words (whereas in a country like Slovenia, with 2 million citizens of which almost 80 % are online, one can crawl a billion-word-sized corpus with no extra effort).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Closed Content",
                "sec_num": "2.3."
            },
            {
                "text": "We have also observed a greater use of multimedia over textual content in these countries, but whether or not this is a related issue would require more investigation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Closed Content",
                "sec_num": "2.3."
            },
            {
                "text": "None. This paragraph might be as simple as that. Accessing social network content programmatically for the purposes of web crawling is typically not only illegal, but also technically very limited or impossible. Also, after more and more data privacy scandals around many social networks, their policies for data access and sharing have been tightened a lot and there are no prospects of this changing anytime soon. When people switch from the open internet to closed platforms, it is over for linguistic web crawling.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.3.2."
            },
            {
                "text": "This is a non-issue for \"old\" internet countries, a big issue for \"new\" internet countries and generally a threat for the future if more and more online content is shifted from the open internet into closed (social media-like) platforms.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "2.3.3."
            },
            {
                "text": "2.4.1. The problem Modern websites rely more and more on dynamic content that is rendered in the client browser. While this brings better user experience and new functionalities, it also represents quite a technical challenge when crawling the texts from such websites. If yielding the texts requires rendering the content using a browser engine, it slows down the processing of a single website by several orders of magnitude.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dynamic Content",
                "sec_num": "2.4."
            },
            {
                "text": "The only truly general solution is to run a browser in headless mode, pass each discovered page to it, render its content as HTML and process it as usual. Some websites offer an HTML-only version to mobile browsers, but it is not clear whether this could be applied generally (many other websites may still not be very mobile friendly).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.4.2."
            },
            {
                "text": "The severity of this issue is so far rather low because websites still tend to provide a textual fallback (e.g. for old mobile phones). As soon as they stop doing so, crawling will need to involve website rendering.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "2.4.3."
            },
            {
                "text": "2.5. Paid Content 2.5.1. The problem The early internet witnessed free news which, when the Internet population started to rise, was accompanied by ads. It is now clear that this was only a transition model from printed to online news, and the revenues from online advertising (severely hindered by many users intentionally using ad-blocking tools) are not sufficient to replace the lost revenues from printed media subscriptions. More and more media publishers therefore investigate new business models that incorporate online subscriptions (Fletcher and Nielsen, 2017) and a freemium model (a limited number of free articles per month, or a limited set of free articles, with the others being paid) is slowly becoming the new standard. Unfortunately, the same news sources often represented valuable parts of the web corpus and if they go entirely missing, a whole genre of texts might be omitted.",
                "cite_spans": [
                    {
                        "start": 551,
                        "end": 578,
                        "text": "(Fletcher and Nielsen, 2017",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "2.4.3."
            },
            {
                "text": "If at some point most newspapers, or most quality newspapers, indeed become completely unavailable without paying, crawling such websites will require either paying a (typically very modest) fee for a regular subscription or negotiating some form of access with the newspapers. The most problematic part is that this would require a per-website solution, which significantly harms the current scalability of web crawling. Even if one manages to negotiate free access to the newspapers, it will still require developing customized solutions to incorporate data from that particular news source.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Mitigation strategies",
                "sec_num": "2.5.2."
            },
            {
                "text": "Not very severe as long as a reasonable amount of the newspaper text type remains freely accessible. Beyond that point, this will represent an issue mainly for linguistic research focusing on this particular genre of texts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Severity",
                "sec_num": "2.5.3."
            },
            {
                "text": "In this paper we briefly discuss some issues of web crawling that we have stumbled upon most frequently in recent years. The list is by no means complete or comprehensive, and its whole purpose is to raise discussion at the workshop around the individual issues, possibly sharing further ideas on how to mitigate them. Trying to predict the future of web crawling is tempting but of course hard. One may, though, imagine that the homogeneous Internet, as we know it now, slowly collapses into:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "3."
            },
            {
                "text": "\u2022 content provided through web applications of some kind, possibly closed or available only after payment",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "3."
            },
            {
                "text": "\u2022 the rest The key question is how big the rest is going to be and whether it will be big enough and of sufficient quality to keep web crawling serving its current purpose. If not, it will require different approaches, which we may not even call crawling then.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "3."
            },
            {
                "text": "This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 731015. This work has been partly supported by the Ministry of Education of CR within the LINDAT-CLARIAH-CZ project LM2018101 and by the Grant Agency of CR within the project 18-23891S.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": "4."
            },
            {
                "text": "Native speakers were asked to choose from three options: \"good\", \"bad\" or \"I can't tell\"",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "Data taken from https://en.wikipedia.org/wiki/List_of_countries_by_number_of_Internet_users.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Webbootcat: instant domain-specific corpora to support human translators",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Baroni",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Kilgarriff",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Pomik\u00e1lek",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Rychl\u00fd",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Baroni, M., Kilgarriff, A., Pomik\u00e1lek, J., Rychl\u00fd, P., et al. WebBootCat: instant domain-specific corpora to support human translators.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Aranea: Yet another family of (comparable) web corpora",
                "authors": [
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Benko",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "International Conference on Text, Speech, and Dialogue",
                "volume": "",
                "issue": "",
                "pages": "247--256",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Benko, V. (2014). Aranea: Yet another family of (comparable) web corpora. In International Conference on Text, Speech, and Dialogue, pages 247-256. Springer.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The leipzig corpora collection-monolingual corpora of standard size",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Biemann",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Heyer",
                        "suffix": ""
                    },
                    {
                        "first": "U",
                        "middle": [],
                        "last": "Quasthoff",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Richter",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of Corpus Linguistics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Biemann, C., Heyer, G., Quasthoff, U., and Richter, M. (2007). The Leipzig Corpora Collection - monolingual corpora of standard size. Proceedings of Corpus Linguistics, 2007.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Paying for online news",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Fletcher",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "K"
                        ],
                        "last": "Nielsen",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Digital Journalism",
                "volume": "5",
                "issue": "9",
                "pages": "1173--1191",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fletcher, R. and Nielsen, R. K. (2017). Paying for online news. Digital Journalism, 5(9):1173-1191.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "The tenten corpus family. Corpus Linguistics",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Jakub\u00ed\u010dek",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Kilgarriff",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Kov\u00e1\u0159",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Rychl\u00fd",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Suchomel",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jakub\u00ed\u010dek, M., Kilgarriff, A., Kov\u00e1\u0159, V., Rychl\u00fd, P., and Suchomel, V. (2013). The TenTen corpus family. Corpus Linguistics 2013, page 125.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Automatic generation of the estonian collocations dictionary database",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Kallas",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Kilgarriff",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Koppel",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Kudritski",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Langemets",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Michelfeit",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Tuulik",
                        "suffix": ""
                    },
                    {
                        "first": "\u00dc",
                        "middle": [],
                        "last": "Viks",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "11--13",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kallas, J., Kilgarriff, A., Koppel, K., Kudritski, E., Langemets, M., Michelfeit, J., Tuulik, M., and Viks, \u00dc. (2015). Automatic generation of the Estonian collocations dictionary database. In Electronic lexicography in the 21st century: linking lexical data in the digital age. Proceedings of the eLex 2015 conference, pages 11-13.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Introduction to the special issue on the web as corpus",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Kilgarriff",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Grefenstette",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Computational linguistics",
                "volume": "29",
                "issue": "3",
                "pages": "333--347",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kilgarriff, A. and Grefenstette, G. (2003). Introduction to the special issue on the web as corpus. Computational linguistics, 29(3):333-347.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Web corpus construction",
                "authors": [
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Sch\u00e4fer",
                        "suffix": ""
                    },
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Bildhauer",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Synthesis Lectures on Human Language Technologies",
                "volume": "6",
                "issue": "4",
                "pages": "1--145",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sch\u00e4fer, R. and Bildhauer, F. (2013). Web corpus construction. Synthesis Lectures on Human Language Technologies, 6(4):1-145.",
                "links": null
            }
        },
        "ref_entries": {}
    }
}