Modalities: Text
Formats: json
Libraries: Datasets, pandas
fheilz committed
Commit fb2c626
1 parent: ed9ee44

Upload 9 files
Files changed (9)
  1. 2016.jsonl +196 -0
  2. 2017.jsonl +0 -0
  3. 2018.jsonl +0 -0
  4. 2019.jsonl +0 -0
  5. 2020.jsonl +0 -0
  6. 2021.jsonl +0 -0
  7. 2022.jsonl +0 -0
  8. 2023.jsonl +0 -0
  9. 2024.jsonl +0 -0
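Each of the `.jsonl` files above is in JSON Lines format: one JSON object per line, with the fields `year`, `title`, `authors`, `snippet`, and `url` visible in the diff below. A minimal reading sketch (the sample record and `example.org` URL are placeholders, not taken from the data; the `pandas`/`datasets` one-liners in the comments are the usual routes for the libraries listed above):

```python
import io
import json

# A placeholder record mirroring the schema of 2016.jsonl:
# year, title, authors, snippet, url.
sample = io.StringIO(
    '{"year":"2016","title":"Example","authors":["A Author"],'
    '"snippet":"... Common Crawl ...","url":["http://example.org/paper.pdf"]}\n'
)

def read_jsonl(stream):
    """Yield one dict per non-empty line of a JSON Lines stream."""
    for line in stream:
        line = line.strip()
        if line:
            yield json.loads(line)

records = list(read_jsonl(sample))
print(records[0]["title"])

# Equivalent loads with the libraries listed in the header:
#   pandas:   pd.read_json("2016.jsonl", lines=True)
#   datasets: load_dataset("json", data_files="2016.jsonl")
```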
2016.jsonl ADDED
@@ -0,0 +1,196 @@
+ {"year":"2016","title":"A Case Study of Complex Graph Analysis in Distributed Memory: Implementation and Optimization","authors":["GM Slota, S Rajamanickam, K Madduri"],"snippet":"... Focusing on one of the largest publicly-available hyperlink graphs (the 2012 Web Data Commons graph1, which was in- turn extracted from the open Common Crawl web corpus2), we develop parallel ... 1http://webdatacommons.org/hyperlinkgraph/ 2http://commoncrawl.org ...","url":["http://www.personal.psu.edu/users/g/m/gms5016/pub/Dist-IPDPS16.pdf"]}
+ {"year":"2016","title":"A Convolutional Encoder Model for Neural Machine Translation","authors":["J Gehring, M Auli, D Grangier, YN Dauphin - arXiv preprint arXiv:1611.02344, 2016"],"snippet":"... WMT'15 English-German. We use all available parallel training data, namely Europarl v7, Common Crawl and News Commentary v10 and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007). We report results on newstest2015. ...","url":["https://arxiv.org/pdf/1611.02344"]}
+ {"year":"2016","title":"A Deep Fusion Model for Domain Adaptation in Phrase-based MT","authors":["N Durrani, S Joty, A Abdelali, H Sajjad"],"snippet":"... test-13 993 18K 17K test-13 1169 26K 28K Table 1: Statistics of the English-German and Arabic-English training corpora in terms of Sentences and Tokens (represented in millions). ep = Europarl, cc = Common Crawl, un = United Nations ...","url":["https://www.aclweb.org/anthology/C/C16/C16-1299.pdf"]}
+ {"year":"2016","title":"A Large DataBase of Hypernymy Relations Extracted from the Web","authors":["J Seitner, C Bizer, K Eckert, S Faralli, R Meusel… - … of the 10th edition of the …, 2016"],"snippet":"... 3http://webdatacommons.org/framework/ 4http://commoncrawl.org ... The corpus is provided by the Common Crawl Foundation on AWS S3 as free download.6 The extraction of ... isadb/) and can be used to repeat the tuple extraction for different or newer Common Crawl releases. ...","url":["http://webdatacommons.org/isadb/lrec2016.pdf"]}
+ {"year":"2016","title":"A Maturity Model for Public Administration as Open Translation Data Providers","authors":["N Bel, ML Forcada, A Gómez-Pérez - arXiv preprint arXiv:1607.01990, 2016"],"snippet":"... There are techniques to mitigate the need of large quantities of parallel text, but most often at the expense of resulting translation quality. As a reference of the magnitude we can take as a standard corpus the Common Crawl corpus (Smith et al. ...","url":["http://arxiv.org/pdf/1607.01990"]}
+ {"year":"2016","title":"A Neural Architecture Mimicking Humans End-to-End for Natural Language Inference","authors":["B Paria, KM Annervaz, A Dukkipati, A Chatterjee… - arXiv preprint arXiv: …, 2016"],"snippet":"... We used batch normalization [Ioffe and Szegedy, 2015] while training. The various model parameters used are mentioned in Table I. We experimented with both GloVe vectors trained1 on Common Crawl dataset as well as Word2Vec vector trained2 on Google news dataset. ...","url":["https://arxiv.org/pdf/1611.04741"]}
+ {"year":"2016","title":"A practical guide to big data research in psychology.","authors":["EE Chen, SP Wojcik - Psychological Methods, 2016"],"snippet":"... as well as general collections, such as Amazon Web Services' Public Data Sets repository (AWS, nd, http://aws.amazon.com/public-data-sets/) which includes the 1000 Genomes Project, with full genomic sequences for 1,700 individuals, and the Common Crawl Corpus, with ...","url":["http://psycnet.apa.org/journals/met/21/4/458/"]}
+ {"year":"2016","title":"A semantic based Web page classification strategy using multi-layered domain ontology","authors":["AI Saleh, MF Al Rahmawy, AE Abulwafa - World Wide Web, 2016"],"snippet":"Page 1. A semantic based Web page classification strategy using multi-layered domain ontology Ahmed I. Saleh1 & Mohammed F. Al Rahmawy2 & Arwa E. Abulwafa1 Received: 3 February 2016 /Revised: 13 August 2016 /Accepted ...","url":["http://link.springer.com/article/10.1007/s11280-016-0415-z"]}
+ {"year":"2016","title":"A Story of Discrimination and Unfairness","authors":["A Caliskan-Islam, J Bryson, A Narayanan"],"snippet":"... power has led to high quality language models such as word2vec [7] and GloVe [8]. These language models, which consist of up to half a million unique words, are trained on billions of documents from sources such as Wikipedia, CommonCrawl, GoogleNews, and Twitter. ...","url":["https://www.securityweek2016.tu-darmstadt.de/fileadmin/user_upload/Group_securityweek2016/pets2016/9_a_story.pdf"]}
+ {"year":"2016","title":"A Way out of the Odyssey: Analyzing and Combining Recent Insights for LSTMs","authors":["S Longpre, S Pradhan, C Xiong, R Socher - arXiv preprint arXiv:1611.05104, 2016"],"snippet":"... All models in this paper used publicly available 300 dimensional word vectors, pre-trained using Glove on 840 million tokens of Common Crawl Data (Pennington et al., 2014), and both the word vectors and the subsequent weight matrices were trained using Adam with a ...","url":["https://arxiv.org/pdf/1611.05104"]}
+ {"year":"2016","title":"A Web Application to Search a Large Repository of Taxonomic Relations from the Web","authors":["S Faralli, C Bizer, K Eckert, R Meusel, SP Ponzetto"],"snippet":"... 1 https://commoncrawl.org 2 http://webdatacommons.org/framework/ 3 https://www.mongodb. com ... of the two noun phrases involved in the isa relations into pre-modifiers, head and post-modifiers [6], as well as the frequency of occurrence of the relation in the Common Crawl...","url":["http://ceur-ws.org/Vol-1690/paper58.pdf"]}
+ {"year":"2016","title":"Abu-MaTran at WMT 2016 Translation Task: Deep Learning, Morphological Segmentation and Tuning on Character Sequences","authors":["VM Sánchez-Cartagena, A Toral - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... 362 Page 2. Corpus Sentences (k) Words (M) Europarl v8 2 121 39.5 Common Crawl 113 995 2 416.7 News Crawl 2014–15 6 741 83.1 Table 1: Finnish monolingual data, after preprocessing, used to train the LMs of our SMT submission. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2322.pdf"]}
+ {"year":"2016","title":"Action Classification via Concepts and Attributes","authors":["A Rosenfeld, S Ullman - arXiv preprint arXiv:1605.07824, 2016"],"snippet":"... To assign GloVe [19] vectors to object names or attributes, we use the pre-trained model on the Common-Crawl (42B) corpus, which contains a vocabulary of 1.9M words. We break up phrases into their words and assign to them their mean GloVe vector. ...","url":["http://arxiv.org/pdf/1605.07824"]}
+ {"year":"2016","title":"Active Content-Based Crowdsourcing Task Selection","authors":["P Bansal, C Eickhoff, T Hofmann"],"snippet":"... Pennington et. al. [28] showed distributed text representations to capture more semantic information when the models are trained on Wikipedia text, as opposed to other large corpora such as the Common Crawl. This is attributed ...","url":["https://www.researchgate.net/profile/Piyush_Bansal4/publication/305442609_Active_Content-Based_Crowdsourcing_Task_Selection/links/578f416d08ae81b44671ad85.pdf"]}
+ {"year":"2016","title":"Adverse Drug Reaction Classification With Deep Neural Networks","authors":["T Huynh, Y He, A Willis, S Rüger"],"snippet":"... 4http://commoncrawl.org/ 5Source code is available at https://github.com/trunghlt/ AdverseDrugReaction 879 Page 4. max pooling feedforward layer convolutional layer (a) Convolutional Neural Network (CNN) (b) Recurrent Convolutional Neural Network (RCNN) ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1084.pdf"]}
+ {"year":"2016","title":"All Your Data Are Belong to us. European Perspectives on Privacy Issues in 'Free'Online Machine Translation Services","authors":["P Kamocki, J O'Regan, M Stauch - Privacy and Identity Management. Time for a …, 2016"],"snippet":"... http://​www.​cnet.​com/​news/​google-translate-now-serves-200-million-people-daily/​. Accessed 23 Oct 2014. Smith, JR, Saint-Amand, H., Plamada, M., Koehn, P., Callison-Burch, C., Lopez, A.: Dirt cheap web-scale parallel text from the Common Crawl...","url":["http://link.springer.com/chapter/10.1007/978-3-319-41763-9_18"]}
+ {"year":"2016","title":"An Analysis of Real-World XML Queries","authors":["P Hlísta, I Holubová - OTM Confederated International Conferences\" On the …, 2016"],"snippet":"... crawler. Or, there is another option – Common Crawl [1], an open repository of web crawled data that is universally accessible and analyzable, containing petabytes of data collected over the last 7 years. ... 3.1 Common Crawl. We ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-48472-3_36"]}
+ {"year":"2016","title":"An Attentive Neural Architecture for Fine-grained Entity Type Classification","authors":["S Shimaoka, P Stenetorp, K Inui, S Riedel - arXiv preprint arXiv:1604.05525, 2016"],"snippet":"... appearing in the training set. Specifically, we used the freely available 300 dimensional cased word embeddings trained on 840 billion to- kens from the Common Crawl supplied by Pennington et al. (2014). As embeddings ...","url":["http://arxiv.org/pdf/1604.05525"]}
+ {"year":"2016","title":"Analysing Structured Scholarly Data Embedded in Web Pages","authors":["P Sahoo, U Gadiraju, R Yu, S Saha, S Dietze"],"snippet":"... the following section. 2.2 Methodology and Dataset For our investigation, we use the Web Data Commons (WDC) dataset, being the largest available corpus of markup, extracted from the Common Crawl. Of the crawled web ...","url":["http://cs.unibo.it/save-sd/2016/papers/pdf/sahoo-savesd2016.pdf"]}
+ {"year":"2016","title":"ArabicWeb16: A New Crawl for Today's Arabic Web","authors":["R Suwaileh, M Kutlu, N Fathima, T Elsayed, M Lease"],"snippet":"... English content dominates the crawl [12]. While Common Crawl could be mined to identify and ex- tract a useful Arabic subset akin to ArClueWeb09, this would address only recency, not coverage. To address the above concerns ...","url":["http://www.ischool.utexas.edu/~ml/papers/sigir16-arabicweb.pdf"]}
+ {"year":"2016","title":"Ask Your Neurons: A Deep Learning Approach to Visual Question Answering","authors":["M Malinowski, M Rohrbach, M Fritz - arXiv preprint arXiv:1605.02697, 2016"],"snippet":"Page 1. Noname manuscript Ask Your Neurons: A Deep Learning Approach to Visual Question Answering Mateusz Malinowski · Marcus Rohrbach · Mario Fritz Abstract We address a question answering task on realworld images that is set up as a Visual Turing Test. ...","url":["http://arxiv.org/pdf/1605.02697"]}
+ {"year":"2016","title":"Automated Generation of Multilingual Clusters for the Evaluation of Distributed Representations","authors":["P Blair, Y Merhav, J Barry - arXiv preprint arXiv:1611.01547, 2016"],"snippet":"... (2013a), the 840-billion token Common Crawl corpus-trained GloVe model released by Pennington et al. (2014), and the English, Spanish, German, Japanese, and Chinese MultiCCA vectors5 from Ammar et al. ... Outliers OOV GloVe Common Crawl 75.53 38.57 5 6.33 5.70 ...","url":["https://arxiv.org/pdf/1611.01547"]}
+ {"year":"2016","title":"Automated Haiku Generation based on Word Vector Models","authors":["AF Aji"],"snippet":"... and Page 28. 16 Chapter 3. Design Common Crawl data. Those data also come with various vector dimension size from 50-D to 300-D. Those pre-trained word vectors are used directly for this project as they take considerably ...","url":["http://project-archive.inf.ed.ac.uk/msc/20150275/msc_proj.pdf"]}
+ {"year":"2016","title":"Automatic Construction of Morphologically Motivated Translation Models for Highly Inflected, Low-Resource Languages","authors":["J Hewitt, M Post, D Yarowsky - AMTA 2016, Vol., 2016"],"snippet":"... sentences of Europarl (Koehn, 2005), SETIMES3 (Tyers and Alperen, 2010), extracted from OPUS (Tiedemann, 2009), or Common Crawl (Bojar et al ... Turkish, we train models on 29000 sentences of biblical data with 1000 and 20000 sentences of CommonCrawl and SETIMES ...","url":["https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=183"]}
+ {"year":"2016","title":"BAD LUC@WMT 2016: a Bilingual Document Alignment Platform Based on Lucene","authors":["L Jakubina, P Langlais"],"snippet":"... 2013. Dirt cheap web-scale parallel text from the common crawl. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1374–1383. Jakob Uszkoreit, Jay M. Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. ...","url":["http://www-etud.iro.umontreal.ca/~jakubinl/publication/badluc_jaklan_wmt16_stbad.pdf"]}
+ {"year":"2016","title":"Big Data Facilitation and Management","authors":["J Fagerli"],"snippet":"Page 1. Faculty of Science and Technology Department of Computer Science Big Data Facilitation and Management A requirements analysis and initial evaluation of a big biological data processing service — Jarl Fagerli INF ...","url":["http://bdps.cs.uit.no/papers/capstone-jarl.pdf"]}
+ {"year":"2016","title":"Bootstrap, Review, Decode: Using Out-of-Domain Textual Data to Improve Image Captioning","authors":["W Chen, A Lucchi, T Hofmann - arXiv preprint arXiv:1611.05321, 2016"],"snippet":"... We report the performance of our model and competing methods in terms of six standard metrics used for image captioning as described in [4]. During the bootstrap learning phase, we use both the 20082010 News-CommonCrawl and Europarl corpus 2 as out- of-domain ...","url":["https://arxiv.org/pdf/1611.05321"]}
+ {"year":"2016","title":"bot. zen@ EVALITA 2016-A minimally-deep learning PoS-tagger (trained for Italian Tweets)","authors":["EW Stemle"],"snippet":"... The data was only distributed to the task participants. 4.1.4 C4Corpus (w2v) c4corpus8 is a full documents Italian Web corpus that has been extracted from CommonCrawl, the largest publicly available general Web crawl to date. ...","url":["http://ceur-ws.org/Vol-1749/paper_020.pdf"]}
+ {"year":"2016","title":"Building mutually beneficial relationships between question retrieval and answer ranking to improve performance of community question answering","authors":["M Lan, G Wu, C Xiao, Y Wu, J Wu - Neural Networks (IJCNN), 2016 International Joint …, 2016"],"snippet":"... The first is the 300-dimensional version of word2vec [23] vectors, which is trained on part of Google News dataset (about 100 billion words). The second is 300-dimensional Glove vectors [24] which is trained on 840 billion tokens of Common Crawl data. ...","url":["http://ieeexplore.ieee.org/abstract/document/7727286/"]}
+ {"year":"2016","title":"C4Corpus: Multilingual Web-size corpus with free license","authors":["I Habernal, O Zayed, I Gurevych"],"snippet":"... documents. Our project is entitled C4Corpus, an abbreviation of Creative Commons from Common Crawl Corpus and is hosted under the DKPro umbrella4 at https:// github.com/dkpro/dkpro-c4corpus under ASL 2.0 license. ...","url":["https://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2016/lrec2016-c4corpus-camera-ready.pdf"]}
+ {"year":"2016","title":"Capturing Pragmatic Knowledge in Article Usage Prediction using LSTMs","authors":["J Kabbara, Y Feng, JCK Cheung"],"snippet":"... GloVe: The embedding is initialized by the global vectors Pennington et al. (2014) that are trained on the Common Crawl corpus (840 billion tokens). Both word2vec and GloVe word embeddings consist of 300 dimensions. ...","url":["https://www.aclweb.org/anthology/C/C16/C16-1247.pdf"]}
+ {"year":"2016","title":"Character-level and Multi-channel Convolutional Neural Networks for Large-scale Authorship Attribution","authors":["S Ruder, P Ghaffari, JG Breslin - arXiv preprint arXiv:1609.06686, 2016"],"snippet":"... σ: Standard deviation of document number. d: Median document size (tokens). All word embedding channels are initialized with 300-dimensional GloVe vectors (Pennington et al., 2014) trained on 840B tokens of the Common Crawl corpus11. ...","url":["http://arxiv.org/pdf/1609.06686"]}
+ {"year":"2016","title":"Citation Classification for Behavioral Analysis of a Scientific Field","authors":["D Jurgens, S Kumar, R Hoover, D McFarland… - arXiv preprint arXiv: …, 2016"],"snippet":"... The classifier is implemented using SciKit (Pedregosa et al., 2011) and syntactic processing was done using CoreNLP (Manning et al., 2014). Selectional preferences used pretrained 300-dimensional vectors from the 840B token Common Crawl (Pennington et al., 2014). ...","url":["http://arxiv.org/pdf/1609.00435"]}
+ {"year":"2016","title":"CNRC at SemEval-2016 Task 1: Experiments in crosslingual semantic textual similarity","authors":["C Lo, C Goutte, M Simard - Proceedings of SemEval, 2016"],"snippet":"... The system was 3We use the glm function in R. 669 Page 3. trained using standard resources – Europarl, Common Crawl (CC) and News & Commentary (NC) – totaling approximately 110M words in each language. Phrase ...","url":["http://anthology.aclweb.org/S/S16/S16-1102.pdf"]}
+ {"year":"2016","title":"Commonsense Knowledge Base Completion","authors":["X Li, A Taheri, L Tu, K Gimpel"],"snippet":"... We use the GloVe (Pennington et al., 2014) embeddings trained on 840 billion tokens of Common Crawl web text and the PARAGRAM-SimLex embeddings of Wieting et al. (2015), which were tuned to have strong performance on the SimLex-999 task (Hill et al., 2015). ...","url":["http://ttic.uchicago.edu/~kgimpel/papers/li+etal.acl16.pdf"]}
+ {"year":"2016","title":"Comparing Topic Coverage in Breadth-First and Depth-First Crawls Using Anchor Texts","authors":["AP de Vries - Research and Advanced Technology for Digital …, 2016","T Samar, MC Traub, J van Ossenbruggen, AP de Vries - International Conference on …, 2016"],"snippet":"... nl domain, with the goal to crawl websites as completes as possible. The second crawl was collected by the Common Crawl foundation using a breadth-first strategy on the entire Web, this strategy focuses on discovering as many links as possible. ...","url":["http://books.google.de/books?hl=en&lr=lang_en&id=VmTUDAAAQBAJ&oi=fnd&pg=PA133&dq=%22common+crawl%22&ots=STVgD4vke3&sig=Gr5Q94wWtvFSfT_EYf1cQGP-Mrg","http://link.springer.com/chapter/10.1007/978-3-319-43997-6_11"]}
+ {"year":"2016","title":"COMPARISON OF DISTRIBUTIONAL SEMANTIC MODELS FOR RECOGNIZING TEXTUAL ENTAILMENT.","authors":["Y WIBISONO, DWIH WIDYANTORO… - Journal of Theoretical & …, 2016"],"snippet":"... To our knowledge, this paper is the first study of various DSM on RTE. We found that DSM improves entailment accuracy, with the best DSM is GloVe trained with 42 billion tokens taken from Common Crawl corpus. ... Glove_42B Common Crawl 42 billion tokens ...","url":["http://search.ebscohost.com/login.aspx?direct=true&profile=ehost&scope=site&authtype=crawler&jrnl=19928645&AN=120026939&h=neaFgJXHcv5SjyzIFWIJp046Uq5Cr3qfiPCmXc4DYTEi9kN6SN9YQqm1CUdjmDg%2BwZzzXWI6ftJLniJiB6Go1g%3D%3D&crl=c"]}
+ {"year":"2016","title":"ConceptNet 5.5: An Open Multilingual Graph of General Knowledge","authors":["R Speer, J Chin, C Havasi - arXiv preprint arXiv:1612.03975, 2016"],"snippet":"... 2013), and the GloVe 1.2 embeddings trained on 840 billion words of the Common Crawl (Pennington, Socher, and Manning 2014). These matrices are downloadable, and we will be using them both as a point of comparison and as inputs to an ensemble. ...","url":["https://arxiv.org/pdf/1612.03975"]}
+ {"year":"2016","title":"Content Selection through Paraphrase Detection: Capturing different Semantic Realisations of the Same Idea","authors":["E Lloret, C Gardent - WebNLG 2016, 2016"],"snippet":"... either sentences or pred-arg structures, GLoVe pre-trained WE vectors (Pennington et al., 2014) were used, specifically the ones derived from Wikipedia 2014+ Gi- gaword 5 corpora, containing around 6 billion to- kens; and the ones derived from a Common Crawl, with 840 ...","url":["https://webnlg2016.sciencesconf.org/data/pages/book.pdf#page=33"]}
+ {"year":"2016","title":"Corporate Smart Content Evaluation","authors":["R Schäfermeier, AA Todor, A La Fleur, A Hasan… - 2016"],"snippet":"Page 1. Fraunhofer FOKUS FRAUNHOFER INSTITUTE FOR OPEN COMMUNICATION SYSTEMS FOKUS STUDY – CORPORATE SMART CONTENT EVALUATION Page 2. Page 3. STUDY – CORPORATE SMART CONTENT EVALUATION ...","url":["http://www.diss.fu-berlin.de/docs/servlets/MCRFileNodeServlet/FUDOCS_derivate_000000006523/CSCStudie2016.pdf"]}
+ {"year":"2016","title":"Crawl and crowd to bring machine translation to under-resourced languages","authors":["A Toral, M Esplá-Gomis, F Klubička, N Ljubešić… - Language Resources and …"],"snippet":"... Wikipedia. The CommonCrawl project 5 should be mentioned here as it allows researchers to traverse a frequently updated crawl of the whole web in search of specific data, and therefore bypass the data collection process. ...","url":["http://link.springer.com/article/10.1007/s10579-016-9363-6"]}
+ {"year":"2016","title":"Cross Site Product Page Classification with Supervised Machine Learning","authors":["J HUSS"],"snippet":"... An other data set used often is Common Crawl [1], which is a possible source that contain product specification pages. The data of Common Crawl is not complete with HTML-source code and it was collected in 2013, which creates many dead links. ...","url":["http://www.nada.kth.se/~ann/exjobb/jakob_huss.pdf"]}
+ {"year":"2016","title":"CSA++: Fast Pattern Search for Large Alphabets","authors":["S Gog, A Moffat, M Petri - arXiv preprint arXiv:1605.05404, 2016"],"snippet":"... The latter were extracted from a sentence-parsed prefix of the German and Spanish sections of the CommonCrawl5. The four 200 ... translation process described by Shareghi et al., corresponding to 40,000 sentences randomly selected from the German part of Common Crawl...","url":["http://arxiv.org/pdf/1605.05404"]}
+ {"year":"2016","title":"CUNI-LMU Submissions in WMT2016: Chimera Constrained and Beaten","authors":["A Tamchyna, R Sudarikov, O Bojar, A Fraser - Proceedings of the First Conference on …, 2016"],"snippet":"... tag). Our input is factored and contains the form, lemma, morphological tag, 1http://commoncrawl.org/ 387 Page 4. lemma ... 2015. The second LM only uses 4-grams but additionally contains the full Common Crawl corpus. We ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2325.pdf"]}
+ {"year":"2016","title":"D6. 3: Improved Corpus-based Approaches","authors":["CP Escartin, LS Torres, CO UoW, AZ UMA, S Pal - 2016"],"snippet":"... Based on this system and the data retrieved from Common Crawl, several websites were identified as possible candidates for crawling. ... 8http://commoncrawl.org/ 9For a description of this tool, see Section 3.1.2 in this Deliverable. 6 Page 9. ...","url":["http://expert-itn.eu/sites/default/files/outputs/expert_d6.3_20160921_improved_corpus-based_approaches.pdf"]}
+ {"year":"2016","title":"Data Selection for IT Texts using Paragraph Vector","authors":["MS Duma, W Menzel - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... models/doc2vec.html 3http://commoncrawl.org/ 4https://github.com/melix/jlangdetect 5-gram LMs using the SRILM toolkit (Stolcke, 2002) with Kneser-Ney discounting (Kneser and Ney, 1995) on the target side of the Commoncrawl and IT corpora. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2331.pdf"]}
+ {"year":"2016","title":"David W. Embley, Mukkai S. Krishnamoorthy, George Nagy &","authors":["S Seth"],"snippet":"... tabulated data on the web even before Big Data became a byword [1]. Assuming “that an average table contains on average 50 facts it is possible to extract more than 600 billion facts taking into account only the 12 billion sample tables found in the Common Crawl” [2]. Tables ...","url":["https://www.ecse.rpi.edu/~nagy/PDF_chrono/2016_Converting%20Web%20Tables,IJDAR,%2010.1007_s10032-016-0259-1.pdf"]}
+ {"year":"2016","title":"Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation","authors":["J Zhou, Y Cao, X Wang, P Li, W Xu - arXiv preprint arXiv:1606.04199, 2016"],"snippet":"... 4.1 Data sets For both tasks, we use the full WMT'14 parallel corpus as our training data. The detailed data sets are listed below: • English-to-French: Europarl v7, Common Crawl, UN, News Commentary, Gigaword • English-to-German: Europarl v7, Common ...","url":["http://arxiv.org/pdf/1606.04199"]}
+ {"year":"2016","title":"Deeper Machine Translation and Evaluation for German","authors":["E Avramidis, V Macketanz, A Burchardt, J Helcl… - 2nd Deep Machine …, 2016"],"snippet":"... corpus entries words Chromium browser 6.3 K 55.1 K Drupal 4.7 K 57.4 K Libreoffice help 46.8 K 1.1 M Libreoffice UI 35.6 K 143.7 K Ubuntu Saucy 182.9 K 1.6 M Europarl (mono) 2.2 M 54.0 M News (mono) 89M 1.7 B Commoncrawl (parallel) 2.4 M 53.6 M Europarl (parallel ...","url":["http://www.aclweb.org/anthology/W/W16/W16-64.pdf#page=35"]}
+ {"year":"2016","title":"DeepNNNER: Applying BLSTM-CNNs and Extended Lexicons to Named Entity Recognition in Tweets","authors":["F Dugas, E Nichols - WNUT 2016, 2016"],"snippet":"... 850M words) 50 130K 51.43% 75.31% 84.63% 82.18% GloVe 6B Gigaword5+ Wikipedia (6B words) 50 400k 54.22% 82.71% 86.02% 84.17% GloVe 27B Twitter microposts (27B words) 50 1.2 M 57.47% 90.47% 83.67% 97.66% GloVe 42B Common Crawl (42B words) 300 1.9 ...","url":["http://www.aclweb.org/anthology/W/W16/W16-39.pdf#page=190"]}
+ {"year":"2016","title":"Detecting Opinion Polarities using Kernel Methods","authors":["R Kaljahi, J Foster - PEOPLES 2016, 2016"],"snippet":"... 0.7, 0.8, 0.9}. The pre-trained word vectors used are the publicly available ones trained using GloVe (Pennington et al., 2014) trained on 42B-token corpus of Common Crawl (1.9 M vocabulary) with 300 dimensions. 4 4.1 Word ...","url":["http://www.aclweb.org/anthology/W/W16/W16-43.pdf#page=74"]}
+ {"year":"2016","title":"DEVELOPMENT AND APPLICATION OF A STAGE-GATE PROCESS TO REDUCE THE UNERLYING RISKS OF IT SERVICE PROJECTS","authors":["E JEONG, SR JEONG, MS RAO, VV KUMAR… - Journal of Theoretical and …, 2016"],"snippet":"... RTE. To our knowledge, this paper is the first study of various DSM on RTE. We found that DSM improves entailment accuracy, with the best DSM is GloVe trained with 42 billion tokens taken from Common Crawl corpus. We ...","url":["http://jatit.org/volumes/ninetythree2.php"]}
+ {"year":"2016","title":"DFKI's system for WMT16 IT-domain task, including analysis of systematic errors","authors":["E Avramidis, A Burchardt, V Macketanz, A Srivastava - Proceedings of the First …, 2016"],"snippet":"... Europarl (mono) 2.2M 54.0M News (mono) 89M 1.7B Commoncrawl (parallel) 2.4M 53.6M Europarl (parallel) 1.9M 50.1M MultiUN (parallel) 167.6K 5.8M News Crawl (parallel) 201.3K 5.1M Table 1: Size of corpora used for SMT. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2329.pdf"]}
+ {"year":"2016","title":"Dialogue Act Classification in Domain-Independent Conversations Using a Deep Recurrent Neural Network","authors":["H Khanpour, N Guntakandla, R Nielsen"],"snippet":"... Word2vec embeddings were learned from Google News (Mikolov et al., 2013), and separately, from Wikipedia1. The Glove embeddings were pretrained on the 840 billion token Common Crawl corpus. ... Word2vec GoogleNews 300 71.32 Glove CommonCrawl 75 69.28 ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1189.pdf"]}
+ {"year":"2016","title":"Discontinuous Verb Phrases in Parsing and Machine Translation of English and German","authors":["S Loaiciga Sanchez, K Gulordava - 2016","S Loáiciga, K Gulordava"],"snippet":"... all 2https://translate.google.com/ 2841 Page 5. monolingual data (Heafield et al., 2013). Approximately 4.5 million sentences were used for training, combining Europarlv7, CommonCrawl and News data. Optimization weights ...","url":["http://archive-ouverte.unige.ch/unige:84454/ATTACHMENT01","http://www.lrec-conf.org/proceedings/lrec2016/pdf/628_Paper.pdf"]}
+ {"year":"2016","title":"Discovering Disease-associated Drugs Using Web Crawl Data","authors":["H Kim, S Park - 2016"],"snippet":"... System Online) database is used for the biomedical literature. 81.7GB sized text data of Common Crawl which Amazon Web Services hosts is used for the web crawl data. Gene symbol, Disease name and Drug name which ...","url":["http://delab.yonsei.ac.kr/files/paper/Kim_ACMSAC2016_CR.PDF"]}
+ {"year":"2016","title":"Discriminating between similar languages and arabic dialect identification: A report on the third dsl shared task","authors":["S Malmasi, M Zampieri, N Ljubešic, P Nakov, A Ali… - Proceedings of the 3rd …, 2016"],"snippet":"... PITEOG used their own custom web-based corpus, with no further details provided. • SUKI created an additional dataset using web pages in the Common Crawl corpus. 5 Results for Subtask 2: Arabic Dialect Identification ...","url":["https://pdfs.semanticscholar.org/7478/2d315d5cd472bef874ffbca589cc2285a99f.pdf"]}
+ {"year":"2016","title":"Distributed Graph Storage And Querying System","authors":["J Balaji - 2016"],"snippet":"... generation and consumption of such interrelated data. The number of Facebook users grew from 500 million in 2010 to 1.5 billion in 2014 [1]. The Common Crawl web corpora [2] covered for 2012 contains 3.5 billion web pages and 128 billion hyperlinks between these ...","url":["http://scholarworks.gsu.edu/cgi/viewcontent.cgi?article=1112&context=cs_diss"]}
+ {"year":"2016","title":"Distributed Platforms and Cloud Services: Enabling Machine Learning for Big Data","authors":["D Pop, G Iuhasz, D Petcu - Data Science and Big Data Computing, 2016"],"snippet":"... Data sets are growing faster, being common now to reach numbers of 100 TB or more. The Sloan Digital Sky Survey occupies 5 TB of storage, the Common Crawl Web corpus is 81 TB in size, and the 1000 Genomes Project requires 200 TB of space, just to name a few. ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-31861-5_7"]}
+ {"year":"2016","title":"DRAFT: Interoperation Among Web Archiving Technologies","authors":["DSH Rosenthal, N Taylor, J Bailey - 2016"],"snippet":"... WET (WARC Encapsulated Text, or Parsed Text), specified by Common Crawl and Internet Archive respectively, to represent only the text extracted from Web resources. 3 Page 4. ... [12] Common CrawlCommon Crawl. https:// commoncrawl.org/.","url":["http://www.lockss.org/tmp/Interoperation2016.pdf"]}
+ {"year":"2016","title":"Dual Learning for Machine Translation","authors":["Y Xia, D He, T Qin, L Wang, N Yu, TY Liu, WY Ma - arXiv preprint arXiv:1611.00179, 2016"],"snippet":"... In detail, we used the same bilingual corpora from WMT'14 as used in [1, 6], which contains 12M sentence pairs extracting from five datasets: Europarl v7, Common Crawl corpus, UN corpus, News Commentary, and 109French-English corpus. ...","url":["https://arxiv.org/pdf/1611.00179"]}
+ {"year":"2016","title":"Dynamic Coattention Networks For Question Answering","authors":["C Xiong, V Zhong, R Socher - arXiv preprint arXiv:1611.01604, 2016"],"snippet":"... We use as GloVe word vectors pretrained on the 840B Common Crawl corpus (Pennington et al., 2014). We limit the vocabulary to words that are present in the Common Crawl corpus and set embeddings for out-of-vocabulary words to zero. ...","url":["https://arxiv.org/pdf/1611.01604"]}
+ {"year":"2016","title":"Edinburgh's Statistical Machine Translation Systems for WMT16","authors":["P Williams, R Sennrich, M Nadejde, M Huck, B Haddow… - Proceedings of the First …, 2016"],"snippet":"... Our final system used two different countbased 5-gram language models (one trained on all data, including the WMT16 Romanian CommonCrawl corpus, without pruning, and one trained on news2015 monolingual only), a neural language model trained on news2015 ...","url":["http://www.statmt.org/wmt16/pdf/W16-2327.pdf"]}
+ {"year":"2016","title":"Efficient construction of metadata-enhanced web corpora","authors":["A Barbaresi - ACL 2016, 2016"],"snippet":"... 3.3 Extraction I designed a text extraction targeting specifically WordPress pages, which is transferable to a whole range of self-hosted websites using WordPress, al- lowing to reach various blogger profiles thanks 8http://commoncrawl. org 9https://github. ...","url":["http://iiegn.eu/assets/outputs/WAC-X:2016.pdf#page=17"]}
+ {"year":"2016","title":"Efficient Data Selection for Bilingual Terminology Extraction from Comparable Corpora","authors":["A Hazem, E Morin"],"snippet":"... We used the FrenchEnglish aligned version at OPUS provided by JRC (Tiedemann, 2012). Common crawl corpus (CC) is a petabytes of data collected over 7 years of web crawling set of raw web page data and text extracts5. ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1321.pdf"]}
+ {"year":"2016","title":"Electronic Commerce Meets the Semantic Web","authors":["J Jovanovic, E Bagheri - IT Professional, 2016"],"snippet":"... The latest Common Crawl corpus (Winter 2014; http://commoncrawl.org) consists of 2.01 billion HTML pages collected from more than 15.68 million pay-level domains (PLDs). An analysis of this corpus shows that 30 percent ...","url":["http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7535074"]}
+ {"year":"2016","title":"Embedded Sensors System Applied to Wearable Motion Analysis in Sports","authors":["A Valade, A Costes, A Bouillod, M Mangin, P Acco… - 2016"],"snippet":"... the general aspect is good: the mean error is 2.5 degrees and the standard deviation is of 6 degrees. In a second time, we tested the system behaviour on a common crawl swimming movement to ensure the functionality on complex actions. 5.2.2 Tests on Sportsmen ...","url":["https://www.researchgate.net/profile/Anthony_Bouillod/publication/301721501_Embedded_Sensors_System_Applied_to_Wearable_Motion_Analysis_in_Sports/links/572dc45708aee022975a5858.pdf"]}
+ {"year":"2016","title":"Enabling Network Security Through Active DNS Datasets","authors":["A Kountouras, P Kintis, C Lever, Y Chen, Y Nadji… - Research in Attacks, …, 2016","Y Nadji, D Dagon, M Antonakakis, R Joffe - … in Attacks, Intrusions, and Defenses: 19th …, 2016"],"snippet":"... These include but are not limited to Public Blacklists, the Alexa list, the Common Crawl project, and various Top Level Domain (TLD) zone files. More specifically, we are using the zone files that are published daily by the administrators of the zones for com, net, biz and org. ...","url":["http://www.cc.gatech.edu/~ynadji3/docs/pubs/activedns.pdf"]}
+ {"year":"2016","title":"Engineering a Distributed Full-Text Index","authors":["J Fischer, F Kurpicz, P Sanders - arXiv preprint arXiv:1610.03332, 2016"],"snippet":"... For our experiments we use the common crawl corpus as input.1 It provides world wide web crawl data and contains raw content, text only and metadata of the crawled websites from about 1.23 billion web pages. In total the corpus has a size of 541 TB (as of 27.07.2016). ...","url":["https://arxiv.org/pdf/1610.03332"]}
+ {"year":"2016","title":"Engineering top-k document retrieval systems based on succinct data structures","authors":["S Gog, P Sanders"],"snippet":"... relevant documents to a given query”. For large sets of documents, eg web crawls like gov2, clueweb, or CommonCrawl, time-efficient solutions rely on precomputed index data structures. For collections of natural language ...","url":["https://formal.iti.kit.edu/teaching/projektgruppe/themen/WiSe1617/sanders16.pdf"]}
+ {"year":"2016","title":"Enriching Product Ads with Metadata from HTML Annotations","authors":["P Ristoski, P Mika - The Semantic Web. Latest Advances and New …, 2016"],"snippet":"... Offers - WDC Microdata Dataset: The latest extraction of WebDataCommons includes over 5 billion entities marked up by one of the three main HTML markup languages (ie, Microdata, Microformats and RDFa) and has been retrieved from the CommonCrawl 2014 corpus 5 ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-34129-3_10"]}
+ {"year":"2016","title":"Examining the Relationship between Preordering and Word Order Freedom in Machine Translation","authors":["J Daiber, M Stanojevic, W Aziz, K Sima'an"],"snippet":"Page 1. Examining the Relationship between Preordering and Word Order Freedom in Machine Translation Joachim Daiber Miloš Stanojevic Wilker Aziz Khalil Sima'an Institute for Logic, Language and Computation (ILLC) University of Amsterdam {initial.last}@uva.nl Abstract ...","url":["http://jodaiber.github.io/doc/wmt2016.pdf"]}
+ {"year":"2016","title":"Explorations in Identifying and Summarizing Subjective Content in Text","authors":["P Kumar, V Venugopal"],"snippet":"... puted word embeddings using pre-trained word vectors from GloVe [14] (300-dimensional vectors trained on the 840B token CommonCrawl dataset) to encode our input tokens for both the opinion identification and the opinion summarization task. ...","url":["http://cs224d.stanford.edu/reports/poorna.pdf"]}
+ {"year":"2016","title":"Exploring Corpora","authors":["C Barrière - Natural Language Understanding in a Semantic Web …, 2016"],"snippet":"Page 1. Chapter 5 Exploring Corpora In previous chapters, we have worked with very small corpora. Some were even constructed manually to specifically illustrate a particular point, and they have generally contained ten to twenty sentences each. ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-41337-2_5"]}
+ {"year":"2016","title":"Exploring the Application Potential of Relational Web Tables","authors":["C Bizer"],"snippet":"... crawls. This situation has changed in 2012 with the University of Mannheim [7] and in 2014 with the Dresden University of Technology [3] starting to extract web table corpora from the CommonCrawl, a large public web corpus. ...","url":["http://ceur-ws.org/Vol-1670/paper-07.pdf"]}
+ {"year":"2016","title":"Fast Connected Components Computation in Large Graphs by Vertex Pruning","authors":["A Lulli, E Carlini, P Dazzi, C Lucchese, L Ricci"],"snippet":"... Consider, for instance, the Web graph (Common Crawl provides 3.5 billion pages with 128 billion hyperlinks[1]), the Linked Open Data datasets (the LOD2 project indexes for 5.7 billion triples/edges [2]) or the Facebook and Twitter social networks (respectively 1.35 billion and ...","url":["http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7515231"]}
+ {"year":"2016","title":"Fast Gated Neural Domain Adaptation: Language Model as a Case Study","authors":["J Zhang, X Wu, A Way, Q Liu"],"snippet":"... The glove 6b model is trained on Wikipedia data and the English Gigaword Fifth Edition corpus;7 the glove 42b model is trained on the Common Crawl data; and the glove 840b model is trained on the the Common Crawl and additional web data. ...","url":["http://www.computing.dcu.ie/~away/PUBS/2016/Fast_Gated_Neural_Domain_Adaptation.pdf"]}
+ {"year":"2016","title":"Fast, Small and Exact: Infinite-order Language Modelling with Compressed Suffix Trees","authors":["E Shareghi, M Petri, G Haffari, T Cohn - arXiv preprint arXiv:1608.04465, 2016"],"snippet":"Page 1. Fast, Small and Exact: Infinite-order Language Modelling with Compressed Suffix Trees Ehsan Shareghi,♭ Matthias Petri,♮ Gholamreza Haffari♭ and Trevor Cohn♮ ♭ Faculty of Information Technology, Monash University ...","url":["http://arxiv.org/pdf/1608.04465"]}
+ {"year":"2016","title":"FBK HLT-MT Participation in the 1st Translation Memory Cleaning Shared Task","authors":["D Ataman, MJ Sabet, M Turchi, M Negri"],"snippet":"... The English-German corpus was formed using KDE4, GNOME, OpenOffice, PHP, Ubuntu, Tatoeba (Tiedemann, 2012), Europarl v.7 (Koehn, 2005), CommonCrawl (WMT 2013) (Bojar et al., 2013), News Commentary v.11 (WMT 2015) (Bojar et al., 2015), MultiUN (Eisele and ...","url":["http://rgcl.wlv.ac.uk/wp-content/uploads/2016/05/fbkhltmt-workingnote.pdf"]}
+ {"year":"2016","title":"Findings of the 2016 Conference on Machine Translation (WMT16)","authors":["O Bojar, R Chatterjee, C Federmann, Y Graham…"],"snippet":"... Some training corpora were identical from last year (Europarl3, United Nations, French-English 109 corpus, Common Crawl, Russian-English parallel data provided by Yandex, Wikipedia Headlines provided by CMU) and ... Monolingual data sets from CommonCrawl (Buck et al ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2301.pdf","https://cris.fbk.eu/bitstream/11582/307240/1/W16-2301.pdf"]}
+ {"year":"2016","title":"Findings of the WMT 2016 Bilingual Document Alignment Shared Task","authors":["C Buck, P Koehn - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... Espl`a-Gomis, 2009). 8NLP4TM 2016: Shared task http://rgcl.wlv.ac.uk/nlp4tm2016/ shared-task/ 9http://commoncrawl.org/ 10https://sourceforge.net/p/bitextor/wiki/Home/ 555 Page 3. 3 Training and Test Data We made available ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2347.pdf"]}
+ {"year":"2016","title":"Finki at SemEval-2016 Task 4: Deep Learning Architecture for Twitter Sentiment Analysis","authors":["D Stojanovski, G Strezoski, G Madjarov, I Dimitrovski - Proceedings of SemEval, 2016"],"snippet":"... Our system finki, employs both convolutional and gated recurrent neural networks to obtain a more diverse tweet representation. The network is trained on top of GloVe word embeddings pre-trained on the Common Crawl dataset. ... 154 Page 2. Crawl dataset. ...","url":["http://m-mitchell.com/NAACL-2016/SemEval/pdf/SemEval23.pdf"]}
+ {"year":"2016","title":"Gathering Alternative Surface Forms for DBpedia Entities","authors":["V Bryl, C Bizer, H Paulheim"],"snippet":"... Surface forms have been extracted in a number of works from Wikipedia labels, redirects, disambiguations and anchor texts of internal Wikipedia links, which we complement with anchor texts of external Wikipedia links from the Common Crawl web corpus. ...","url":["http://ceur-ws.org/Vol-1581/paper2.pdf"]}
+ {"year":"2016","title":"Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation","authors":["Y Wu, M Schuster, Z Chen, QV Le, M Norouzi… - arXiv preprint arXiv: …, 2016"],"snippet":"Page 1. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi yonghui,schuster,zhifengc,qvl,[email protected] ...","url":["http://arxiv.org/pdf/1609.08144"]}
+ {"year":"2016","title":"Grammatical Error Correction: Machine Translation and Classifiers","authors":["A Rozovskaya, D Roth - Urbana"],"snippet":"... Many teams also used native English datasets. The most common ones are the Web1T corpus (Brants and Franz, 2006), the CommonCrawl dataset, which is similar to Web1T, and the English Wikipedia. Several teams used off-the-shelf spellcheckers. ...","url":["http://cogcomp.cs.illinois.edu/papers/RozovskayaRo16.pdf"]}
+ {"year":"2016","title":"Guided Alignment Training for Topic-Aware Neural Machine Translation","authors":["W Chen, E Matusov, S Khadivi, JT Peter - arXiv preprint arXiv:1607.01628, 2016"],"snippet":"... We first trained a baseline NMT model on English-French WMT data (common-crawl, Europarl v7, and news commentary corpora) for two epochs to get the best result on a development set, and then continued training the same model on the in-domain training set for a few ...","url":["http://arxiv.org/pdf/1607.01628"]}
+ {"year":"2016","title":"Guided Neural Machine Translation","authors":["F Stahlberg"],"snippet":"Page 1. Guided Neural Machine Translation Felix Stahlberg Department of Engineering University of Cambridge This first year report is submitted as part of the degree of Doctor of Philosophy Queens' College August 2016 Page 2. Page 3. Declaration ...","url":["http://xilef-software.e8u.de/sites/default/files/store/firstyear/firstyear-sgnmt.pdf"]}
+ {"year":"2016","title":"HeLI, a Word-Based Backoff Method for Language Identification","authors":["T Jauhiainen, K Lindén, H Jauhiainen - Proceedings of the VarDial Workshop, 2016"],"snippet":"... We collected from the Common Crawl 3 corpus all the web pages from the respective domains as in Table 2. When language models were created directly from the pages, the accuracy on the DSL development corpus was 49.86%, which was much ... 3http://commoncrawl.org/ ...","url":["http://web.science.mq.edu.au/~smalmasi/vardial3/pdf/VarDial320.pdf"]}
+ {"year":"2016","title":"Hybrid Morphological Segmentation for Phrase-Based Machine Translation","authors":["SA Grönroos, S Virpioja, M Kurimo - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... Due to TheanoLM limitations, only the Europarl and News data (but not Common Crawl) were used for training. ... As monolingual data, we used the Finnish side of Europarl-v8, news.2014.fi.shuffled.v2, news.2015.fi.shuffled and Common Crawl...","url":["http://www.aclweb.org/anthology/W/W16/W16-2312.pdf"]}
+ {"year":"2016","title":"IIT Bombay's English-Indonesian submission at WAT: Integrating Neural Language Models with SMT","authors":["SSAKP Bhattacharyya - WAT 2016, 2016"],"snippet":"... Since Commoncrawl provides raw data by web scraping, the Indonesian data obtained was cleaned for noisy sentences and then tokenized and truecased for training the language model. ... statmt. org/europarl/ 6 http://commoncrawl. org/ 70 Page 85. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-46.pdf#page=82"]}
+ {"year":"2016","title":"Impact of Data Placement on Resilience in Large-Scale Object Storage Systems","authors":["P Carns, K Harms, J Jenkins, M Mubarak, R Ross…"],"snippet":"... Figure 2 compares the distribution of file sizes across three different example data populations.1 The first two (the contents of the 1000 Genomes [19] catalog of gene sequencing data and the Common Crawl Corpus [20] catalog of web crawler data) are available as Amazon ...","url":["http://storageconference.us/2016/Papers/ImpactOfDataPlacement.pdf"]}
+ {"year":"2016","title":"Improving Translation Selection with Supersenses","authors":["H Tang, D Xiong, OL de Lacalle, E Agirre"],"snippet":"... We set the Gaussian prior to 1 to avoid overfitting. 4There are 12 subcorpora: commoncrawl, europarl, kde4, news2007, news2008, news2009, news2010, news2011, news2012, newscommentary, openoffice, un 5http://homepages.inf.ed.ac.uk/lzhang10/maxenttoolkit.html ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1293.pdf"]}
+ {"year":"2016","title":"INSIGHT-1 at SemEval-2016 Task 5: Deep Learning for Multilingual Aspect-based Sentiment Analysis","authors":["S Ruder12, P Ghaffari, JG Breslin - Proceedings of SemEval, 2016"],"snippet":"... respective task. English word embeddings are initialized with 300-dimensional GloVe vectors (Pennington et al., 2014) trained on 840B tokens of the Common Crawl corpus for the unconstrained submission. Word embeddings ...","url":["http://www.aclweb.org/anthology/S/S16/S16-1053.pdf"]}
+ {"year":"2016","title":"Is an Image Worth More than a Thousand Words? On the Fine-Grain Semantic Differences between Visual and Linguistic Representations","authors":["G Collell, MF Moens"],"snippet":"... 3.2 Word Embeddings We employ 300-dimensional GloVe vectors (Pennington et al., 2014) pre-trained in the largest available corpus (840B tokens and a 2.2M words vocabulary from Common Crawl corpus) from the author's website1. ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1264.pdf"]}
+ {"year":"2016","title":"JU-USAAR: A Domain Adaptive MT System","authors":["K Pahari, A Kuila, S Pal, SK Naskar, S Bandyopadhyay… - Proceedings of the First …, 2016"],"snippet":"... In this task, the information technology (IT) do- main English–German parallel corpus released in the WMT-2016 IT-domain shared task serves as the in-domain data and the Europarl, News and Common Crawl English–German parallel corpus released in the Translation Task ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2333.pdf"]}
+ {"year":"2016","title":"Language Models with GloVe Word Embeddings","authors":["V Makarenkov, B Shapira, L Rokach - arXiv preprint arXiv:1610.03759, 2016"],"snippet":"... Despite the huge size of the Common Crawl corpus, some words may not exist with the embeddings, so we set these words to random vectors, and use the same embeddings consistently if we encounter the same unseen word again in the text. ...","url":["https://arxiv.org/pdf/1610.03759"]}
+ {"year":"2016","title":"Language Semantic Embeddings in Deep Visual Representation","authors":["Y Chen - 2016"],"snippet":"... As for the final system (shown in Figure 1.1), word representation is learned based on pre-trained GloVe model with Common Crawl data [5]. 2. Examine how to learn visual representation from weakly supervised dataset (Pixabay is the image data source) with ConvNets. ...","url":["http://www.nada.kth.se/~ann/exjobb/yanbei_chen.pdf"]}
+ {"year":"2016","title":"Large-scale evaluation of splicing localization algorithms for web images","authors":["M Zampoglou, S Papadopoulos, Y Kompatsiaris - Multimedia Tools and Applications, 2016"],"snippet":"... In reality, especially for the Web-based forensics case, despite the recent proliferation of PNG files, JPEG remains the norm: it is indicative that among the contents of the Common Crawl corpus,2 87 % of identifiable image suffixes correspond to JPEG (.jpg, .jpeg). ...","url":["http://link.springer.com/article/10.1007/s11042-016-3795-2"]}
+ {"year":"2016","title":"Latent Space Inference of Internet-Scale Networks","authors":["Q Ho, J Yin, EP Xing - Journal of Machine Learning Research, 2016"],"snippet":"Page 1. Journal of Machine Learning Research 17 (2016) 1-41 Submitted 4/15; Published 4/16 Latent Space Inference of Internet-Scale Networks Qirong Ho∗ [email protected] Institute for Infocomm Research A*STAR Singapore 138632 Junming Yin∗ ...","url":["http://www.jmlr.org/papers/volume17/15-142/15-142.pdf"]}
+ {"year":"2016","title":"Learning to recognise named entities in tweets by exploiting weakly labelled data","authors":["KJ Espinosa, R Batista-Navarro, S Ananiadou - WNUT 2016, 2016"],"snippet":"... Pre-trained Word Embeddings Description Text Type Twitter 2B tweets, 27B tokens, 1.2 M vo- cabulary words, uncased, 100 di- mensions Tweets Common Crawl 840B tokens, 2.2 M vocabulary words, cased, 300 dimensions Web Pages Wikipedia 2014+ Gigaword 5 6B tokens ...","url":["http://www.aclweb.org/anthology/W/W16/W16-39.pdf#page=165"]}
+ {"year":"2016","title":"Learning to refine text based recommendations","authors":["Y Gu, T Lei, R Barzilay, T Jaakkola"],"snippet":"... Word Vectors: For the ingredient/product prediction task, we used the GloVe pre-trained vectors (Common Crawl, 42 billion tokens, 300dimensional) (Pennington et al., 2014). The word vectors for the AskUbuntu vectors are pre-trained using the AskUbuntu and Wikipedia ...","url":["https://people.csail.mit.edu/taolei/papers/emnlp16_recommendation.pdf"]}
+ {"year":"2016","title":"Learning to translate from graded and negative relevance information","authors":["L Jehl, S Riezler"],"snippet":"Page 1. Learning to translate from graded and negative relevance information Laura Jehl Computational Linguistics Heidelberg University 69120 Heidelberg, Germany [email protected] Stefan Riezler Computational ...","url":["https://pdfs.semanticscholar.org/79ee/9b20f0776affab912a3528d604e152cc1217.pdf"]}
+ {"year":"2016","title":"Lexical Coherence Graph Modeling Using Word Embeddings","authors":["M Mesgar, M Strube - Proceedings of NAACL-HLT, 2016"],"snippet":"... 1971). We use a pretrained model of GloVe for word embeddings. This model is trained on Common Crawl with 840B tokens, 2.2M vocabulary. We represent each word by a vector with length 300 (Pennington et al., 2014). For ...","url":["http://www.aclweb.org/anthology/N/N16/N16-1167.pdf"]}
+ {"year":"2016","title":"LIMSI@ WMT'16: Machine translation of news","authors":["A Allauzen, L Aufrant, F Burlot, E Knyazeva… - Proc. of the ACL 2016 First …, 2016"],"snippet":"... Having noticed many sentence alignment errors and out-of-domain parts in the Russian common-crawl parallel corpus, we have used a bilingual sentence aligner3 and proceeded to a domain adaptation filtering using the same procedure as for monolingual data (see ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2304.pdf"]}
+ {"year":"2016","title":"Log-linear Combinations of Monolingual and Bilingual Neural Machine Translation Models for Automatic Post-Editing","authors":["M Junczys-Dowmunt, R Grundkiewicz - arXiv preprint arXiv:1605.04800, 2016"],"snippet":"... 4. The German monolingual common crawl corpus — a very large resource of raw German text from the Common Crawl project — admissible for the WMT-16 news translation and IT translation tasks. 3.2 Preand post-processing ...","url":["http://arxiv.org/pdf/1605.04800"]}
+ {"year":"2016","title":"Lurking Malice in the Cloud: Understanding and Detecting Cloud Repository as a Malicious Service","authors":["X Liao, S Alrwais, K Yuan, L Xing, XF Wang, S Hao… - Proceedings of the 2016 …, 2016"],"snippet":"... Running the scanner over all the data collected by the Common Crawl [?], which indexed five billion web pages, for those associated with all major cloud storage providers (including Amazon S3, Cloudfront, Google Drive, etc.), we found around 1 million sites utilizing 6,885 ...","url":["http://dl.acm.org/citation.cfm?id=2978349"]}
+ {"year":"2016","title":"Machine Translation Quality and Post-Editor Productivity","authors":["M Sanchez-Torron, P Koehn - AMTA 2016, Vol., 2016"],"snippet":"... corresponding Spanish human reference translations. We trained nine MT systems with training data from the European Parliament proceedings, News Commentary, Common Crawl, and United Nations. The systems are phrase ...","url":["https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=22"]}
+ {"year":"2016","title":"Machine Translation Through Learning From a Communication Game","authors":["D He, Y Xia, T Qin, L Wang, N Yu, T Liu, WY Ma - Advances In Neural Information …, 2016"],"snippet":"... In detail, we used the same bilingual corpora from WMT'14 as used in [1, 5], which contains 12M sentence pairs extracting from five datasets: Europarl v7, Common Crawl corpus, UN corpus, News Commentary, and 109French-English corpus. ...","url":["http://papers.nips.cc/paper/6468-machine-translation-through-learning-from-a-communication-game.pdf"]}
+ {"year":"2016","title":"Measuring semantic similarity of words using concept networks","authors":["G Recski, E Iklódi, K Pajkossy, A Kornai"],"snippet":"... We extend this set of models with GloVe vectors4 (Pennington et al., 2014), trained on 840 billion tokens of Common Crawl data5, and the two word embeddings mentioned in Section 1 that have recently been evaluated on the SimLex dataset: the 500-dimension SP model6 ...","url":["http://www.kornai.com/Papers/wordsim.pdf"]}
+ {"year":"2016","title":"Models and Inference for Prefix-Constrained Machine Translation","authors":["J Wuebker, S Green, J DeNero, S Hasan, MT Luong"],"snippet":"... The English-French bilingual training data consists of 4.9M sentence pairs from the Common Crawl and Europarl corpora from WMT 2015 (Bo- jar et al., 2015). The LM was estimated from the target side of the bitext. For English-German we run large-scale experiments. ...","url":["http://nlp.stanford.edu/pubs/wuebker2016acl_prefix.pdf"]}
+ {"year":"2016","title":"Multi-cultural Wikipedia mining of geopolitics interactions leveraging reduced Google matrix analysis","authors":["KM Frahm, SE Zant, K Jaffrès-Runser… - arXiv preprint arXiv: …, 2016"],"snippet":"... At present directed networks of real systems can be very large (about 4.2 million articles for the English Wikipedia edition in 2013 [13] or 3.5 billion web pages for a publicly ac- cessible web crawl that was gathered by the Common Crawl Foundation in 2012 [18]). ...","url":["https://arxiv.org/pdf/1612.07920"]}
+ {"year":"2016","title":"Multi-Perspective Context Matching for Machine Comprehension","authors":["Z Wang, H Mi, W Hamza, R Florian - arXiv preprint arXiv:1612.04211, 2016"],"snippet":"... jpurkar et al., 2016). To initialize the word embeddings in the word representation layer, we use the 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus (Pennington et al., 2014). For the out ...","url":["https://arxiv.org/pdf/1612.04211"]}
+ {"year":"2016","title":"N-gram language models for massively parallel devices","authors":["N Bogoychev, A Lopez"],"snippet":"... benchmark task computes perplexity on data ex- tracted from the Common Crawl dataset used for the 2013 Workshop on Machine Translation, which ... statmt.org/moses/RELEASE-3.0/models/ fr- en/lm/europarl.lm.1 7http://www.statmt.org/wmt13/training-parallelcommoncrawl.tgz ...","url":["http://homepages.inf.ed.ac.uk/s1031254/publications/n-gram-language.pdf"]}
+ {"year":"2016","title":"Neural Architectures for Fine-grained Entity Type Classification","authors":["S Shimaoka, P Stenetorp, K Inui, S Riedel - arXiv preprint arXiv:1606.01341, 2016"],"snippet":"... Rocktäschel et al., 2015). For this purpose, we used the freely available 300-dimensional cased word embeddings trained on 840 billion tokens from the Common Crawl supplied by Pennington et al. (2014). For words not present ...","url":["http://arxiv.org/pdf/1606.01341"]}
+ {"year":"2016","title":"Neural Interactive Translation Prediction","authors":["R Knowles, P Koehn - AMTA 2016, Vol., 2016"],"snippet":"... The data consists of a 115 million word parallel corpus (Europarl, News Commentary, CommonCrawl), 3http://www. statmt. ... and about 75 billion words of additional English monolingual data (LDC Gigaword, monolingual news, monolingual CommonCrawl). ...","url":["https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=113"]}
+ {"year":"2016","title":"Neural Machine Translation with Pivot Languages","authors":["Y Cheng, Y Liu, Q Yang, M Sun, W Xu - arXiv preprint arXiv:1611.04928, 2016"],"snippet":"... We use the statistical significance test with paired bootstrap resampling [Koehn, 2004]. Table 1 shows the Spanish-English and English-French corpora from WMT which include Common Crawl, News Commentary, Europarl v7 and UN. ...","url":["https://arxiv.org/pdf/1611.04928"]}
+ {"year":"2016","title":"Neural Machine Translation with Recurrent Attention Modeling","authors":["Z Yang, Z Hu, Y Deng, C Dyer, A Smola - arXiv preprint arXiv:1607.05108, 2016"],"snippet":"... 3 Experiments & Results 3.1 Data sets We experiment with two data sets: WMT EnglishGerman and NIST Chinese-English. • English-German The German-English data set contains Europarl, Common Crawl and News Commentary corpus. ...","url":["http://arxiv.org/pdf/1607.05108"]}
+ {"year":"2016","title":"Neural Network-based Word Alignment through Score Aggregation","authors":["J Legrand, M Auli, R Collobert - arXiv preprint arXiv:1606.09560, 2016"],"snippet":"... For LSE, we set r = 1 in (4). We initialize the word embeddings with a simple PCA computed over the matrix of word co- occurrence counts (Lebret and Collobert, 2014). The co-occurrence counts were computed over the common crawl corpus provided by WMT16. ...","url":["http://arxiv.org/pdf/1606.09560"]}
+ {"year":"2016","title":"Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision","authors":["C Liang, J Berant, Q Le, KD Forbus, N Lao - arXiv preprint arXiv:1611.00020, 2016"],"snippet":"... All the weight matrices are initialized with a uniform distribution in [− √ 3 d , √ 3 d] where d is the input dimension. For pretrained word embeddings, we used the 300 dimension GloVe word embeddings trained on 840B common crawl corpus [? ]. ...","url":["https://arxiv.org/pdf/1611.00020"]}
+ {"year":"2016","title":"NewsQA: A Machine Comprehension Dataset","authors":["A Trischler, T Wang, X Yuan, J Harris, A Sordoni… - arXiv preprint arXiv: …, 2016"],"snippet":"... Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. ...","url":["https://arxiv.org/pdf/1611.09830"]}
+ {"year":"2016","title":"Normalized Log-Linear Interpolation of Backoff Language Models is Efficient","authors":["K Heafield, C Geigle, S Massung, L Schwartz - Urbana"],"snippet":"Page 1. Normalized Log-Linear Interpolation of Backoff Language Models is Efficient Kenneth Heafield University of Edinburgh 10 Crichton Street Edinburgh EH8 9AB United Kingdom [email protected] Chase Geigle Sean ...","url":["https://kheafield.com/professional/edinburgh/interpolate_paper.pdf"]}
+ {"year":"2016","title":"NRC Russian-English Machine Translation System for WMT 2016","authors":["C Lo, C Cherry, G Foster, D Stewart, R Islam… - Proceedings of the First …, 2016"],"snippet":"... They include the CommonCrawl corpus, the NewsCommentary v11 corpus, the Yandex corpus and the Wikipedia headlines corpus. ... Due to resource limits, we have not used the newly re- leased 3 billion sentence CommonCrawl monolingual English corpus. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2317.pdf"]}
+ {"year":"2016","title":"of Deliverable: Multimedia Linking and Mining","authors":["K Andreadou, S Papadopoulos, M Zampoglou… - 2016"],"snippet":"Page 1. D3.2 – Multimedia Linking and Mining Version: v1.4 – Final, Date: 02/03/2016 PROJECT TITLE: REVEAL CONTRACT NO. FP7-610928 PROJECT COORDINATOR: INTRASOFT INTERNATIONAL SA WWW.REVEALPROJECT.EU PAGE 1 OF 139 REVEAL FP7-610928 ...","url":["http://revealproject.eu/wp-content/uploads/D3.2Multimedia-Linking-and-Mining.pdf"]}
+ {"year":"2016","title":"On Approximately Searching for Similar Word Embeddings","authors":["K Sugawara, H Kobayashi, M Iwasaki"],"snippet":"... GV 300-dimensional embeddings (Pennington et al., 2014a) learned by the global vectors for word representation (GloVe) model (Pennington et al., 2014b) using Common Crawl corpora, which contain about 2 million words and 42 billion tokens. ...","url":["http://www.aclweb.org/anthology/P/P16/P16-1214.pdf"]}
+ {"year":"2016","title":"On Bias-free Crawling and Representative Web Corpora","authors":["R Schäfer, H Allee - ACL 2016, 2016"],"snippet":"... Language Resources and Evaluation. Online first: DOI 10.1007/s10579-016-9359-2. Roland Schäfer. 2016b. CommonCOW: Massively huge web corpora from CommonCrawl data and a method to distribute them freely under restrictive EU copyright laws. ...","url":["http://iiegn.eu/assets/outputs/WAC-X:2016.pdf#page=81"]}
+ {"year":"2016","title":"On the Ubiquity of Web Tracking: Insights from a Billion-Page Web Crawl","authors":["S Schelter, J Kunegis - arXiv preprint arXiv:1607.07403, 2016"],"snippet":"... We extract third-party embeddings from more than 3.5 billion web pages of the CommonCrawl 2012 corpus, and aggregate those to a dataset containing more than 140 million third-party embeddings in over 41 million domains. ...","url":["http://arxiv.org/pdf/1607.07403"]}
+ {"year":"2016","title":"Online tracking: A 1-million-site measurement and analysis","authors":["S Englehardt, A Narayanan - 2016"],"snippet":"... AdFisher builds on similar technologies as OpenWPM (Selenium, xvfb), but is not intended for tracking measurements. Common Crawl4 uses an Apache Nutch based crawler. The Common Crawl dataset is the largest publicly available web crawl5, with billions of page visits. ...","url":["http://senglehardt.com/papers/ccs16_online_tracking.pdf"]}
+ {"year":"2016","title":"Optimizing Interactive Development of Data-Intensive Applications","authors":["M Interlandi, SD Tetali, MA Gulzar, J Noor, T Condie… - Proceedings of the Seventh …, 2016"],"snippet":"... 1. 311 service requests dataset. https://data.cityofnewyork.us/Social-Services/311-ServiceRequests-from-2010-to-Present/erm2-nwe9. 2. Common crawl dataset. http://commoncrawl.org. 3. Hadoop. http://hadoop.apache.org. 4. Spark. http://spark.apache.org. 5. WikiReverse. ...","url":["http://dl.acm.org/citation.cfm?id=2987565"]}
+ {"year":"2016","title":"Paragraph Vector for Data Selection in Statistical Machine Translation","authors":["MS Duma, W Menzel"],"snippet":"... As general domain data we chose the Commoncrawl corpus1 as it is a relatively large corpus and contains crawled data from a variety of domains as well as texts having different discourse types 1http://commoncrawl.org/ (including spoken discourse). ...","url":["https://www.linguistics.rub.de/konvens16/pub/11_konvensproc.pdf"]}
+ {"year":"2016","title":"Parallel Graph Processing on Modern Multi-Core Servers: New Findings and Remaining Challenges","authors":["A Eisenman, L Cherkasova, G Magalhaes, Q Cai…"],"snippet":"Page 1. Parallel Graph Processing on Modern Multi-Core Servers: New Findings and Remaining Challenges Assaf Eisenman1,2, Ludmila Cherkasova2, Guilherme Magalhaes3, Qiong Cai2, Sachin Katti1 1Stanford University ...","url":["http://www.labs.hpe.com/people/lucy_cherkasova/papers/main-mascots16.pdf"]}
+ {"year":"2016","title":"ParFDA for Instance Selection for Statistical Machine Translation","authors":["E Biçici - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... Compared with last year, this year we do not use Common Crawl parallel corpus except for en-ru. We use Common Crawl monolingual corpus fi, ro, and tr datasets and we extended the LM corpora with previous years' corpora. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2306.pdf"]}
+ {"year":"2016","title":"Partitioning Trillion-edge Graphs in Minutes","authors":["GM Slota, S Rajamanickam, K Devine, K Madduri - arXiv preprint arXiv:1610.07220, 2016"],"snippet":"Page 1. Partitioning Trillion-edge Graphs in Minutes George M. Slota Computer Science Department Rensselaer Polytechnic Institute Troy, NY [email protected] Sivasankaran Rajamanickam & Karen Devine Scalable Algorithms ...","url":["https://arxiv.org/pdf/1610.07220"]}
+ {"year":"2016","title":"Performance Optimization Techniques and Tools for Distributed Graph Processing","authors":["V Kalavri - 2016"],"snippet":"Page 1. Performance Optimization Techniques and Tools for Distributed Graph Processing VASILIKI KALAVRI School of Information and Communication Technology KTH Royal Institute of Technology Stockholm, Sweden 2016 ...","url":["http://www.diva-portal.org/smash/get/diva2:968786/FULLTEXT02"]}
+ {"year":"2016","title":"Phishing Classification using Lexical and Statistical Frequencies of URLs","authors":["S Villegas, AC Bahnsen, J Vargas"],"snippet":"... We used a sample of 1.2 million phishing URLs extracted from Phishtank and 1.2 million ham URLs from the CommonCrawl corpus to train the model. Classification based on URLs facilitates a defense against all phishing attacks due to the feature they all share, a URL. ...","url":["http://albahnsen.com/files/Phishing%20Classification%20using%20Lexical%20and%20Statistical%20Frequencies%20of%20URLs.pdf"]}
+ {"year":"2016","title":"Phrase-based Machine Translation is State-of-the-Art for Automatic Grammatical Error Correction","authors":["M Junczys-Dowmunt, R Grundkiewicz - arXiv preprint arXiv:1605.06353, 2016"],"snippet":"... Their method relies on a character-level encoder-decoder recurrent neural network with an attention mechanism. They use data from the public Lang-8 corpus and combine their model with an n-gram language model trained on web-scale Common Crawl data. ...","url":["http://arxiv.org/pdf/1605.06353"]}
+ {"year":"2016","title":"Phrase-Based SMT for Finnish with More Data, Better Models and Alternative Alignment and Translation Tools","authors":["J Tiedemann, F Cap, J Kanerva, F Ginter, S Stymne… - Proceedings of the First …, 2016"],"snippet":"... The English language model based on the provided Common Crawl data is limited to trigrams. ... The data is obtained from a large-scale Internet crawl, seeded from all Finnish pages in CommonCrawl.3 However, actual CommonCrawl data is only a small fraction of the total ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2326.pdf"]}
+ {"year":"2016","title":"PJAIT Systems for the WMT 2016","authors":["K Wołk, K Marasek - Proceedings of the First Conference on Machine …, 2016"],"snippet":"... “BASE” in the tables represents the baseline SMT system. “EXT” indicates results for the baseline system, using the baseline settings but extended with additional permissible data (limited to parallel Europarl v7, Common Crawl, ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2328.pdf"]}
+ {"year":"2016","title":"Porting an Open Information Extraction System from English to German","authors":["T Falke, G Stanovsky, I Gurevych, I Dagan"],"snippet":"... For this purpose, we created a new dataset consisting of 300 German sentences, randomly sampled from three sources of different genres: news articles from TIGER (Brants et al., 2004), German web pages from CommonCrawl (Habernal et al., 2016) and featured Wikipedia ...","url":["https://www.ukp.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2016/EMNLP_2016_PropsDE_cr.pdf"]}
+ {"year":"2016","title":"Practical Variable Length Gap Pattern Matching","authors":["J Bader, S Gog, M Petri - Experimental Algorithms, 2016"],"snippet":"... implemented on top of SDSL [7] data structures. We use three datasets from different application domains: The CC data set is a \\(371 \\,\\mathrm{GiB}\\) prefix of a recent \\(145 \\,\\mathrm{TiB}\\) web crawl from commoncrawl.​org. ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-38851-9_1"]}
+ {"year":"2016","title":"Pre-Translation for Neural Machine Translation","authors":["J Niehues, E Cho, TL Ha, A Waibel - arXiv preprint arXiv:1610.05243, 2016"],"snippet":"... The systems were trained on all parallel data available for the WMT 20161. The news commentary corpus, the European parliament proceedings and the common crawl corpus sum up to 3.7M sentences and around 90M words. ...","url":["https://arxiv.org/pdf/1610.05243"]}
+ {"year":"2016","title":"Predicting Motivations of Actions by Leveraging Text","authors":["C Vondrick, D Oktay, H Pirsiavash, A Torralba - … of the IEEE Conference on Computer …, 2016"],"snippet":"... In ECCV. 2012. [5] C. Buck, K. Heafield, and B. van Ooyen. N-gram counts and language models from the common crawl. LREC, 2014. [6] X. Chen, A. Shrivastava, and A. Gupta. Neil: Extracting visual knowledge from web data. In ICCV, 2013. 3004 Page 9. ...","url":["http://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Vondrick_Predicting_Motivations_of_CVPR_2016_paper.html"]}
+ {"year":"2016","title":"Privacy issues in online machine translation services–European perspective","authors":["P Kamocki, J O'Regan - 2016"],"snippet":"... 1/11/2014. Retrieved from http://itre.cis.upenn.edu/~myl/languagelog/archives/005 492.html Smith, JR et al. (2013). Dirt cheap web-scale parallel text from the Common Crawl. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. ...","url":["https://ids-pub.bsz-bw.de/files/5043/Kamocki-ORegan_Privacy_issues_in_online_machine_translation_2016.pdf"]}
+ {"year":"2016","title":"Query Answering to IQ Test Questions Using Word Embedding","authors":["M Frąckowiak, J Dutkiewicz, C Jędrzejek, M Retinger… - Multimedia and Network …, 2017"],"snippet":"... The pre-trained model based on Google News [16]. Embedding vector size 300, the negative sampling count as 3. 8. Glove Small. Pre-trained model based on Glove approach [20] using common crawl data, accessible on [15]. Embedding vector size 300. 9. Glove Large. ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-43982-2_25"]}
+ {"year":"2016","title":"Query Expansion with Locally-Trained Word Embeddings","authors":["F Diaz, B Mitra, N Craswell - arXiv preprint arXiv:1605.07891, 2016"],"snippet":"... the entire corpus. Instead of training a global embedding on the large web collection, we use a GloVe embedding trained on Common Crawl data.4 We train local embeddings using one of three retrieval sources. First, we consider ...","url":["http://arxiv.org/pdf/1605.07891"]}
+ {"year":"2016","title":"Real-Time Presentation Tracking Using Semantic Keyword Spotting","authors":["R Asadi, HJ Fell, T Bickmore, H Trinh"],"snippet":"... gathered from a large corpus. We use a pre-trained vector representation with 1.9 million uncased words and vectors with 300 elements. It was trained using 42 billion tokens of web data from Common Crawl. We will use both the ...","url":["http://relationalagents.com/publications/Interspeech2016.pdf"]}
+ {"year":"2016","title":"Recurrent versus Recursive Approaches Towards Compositionality in Semantic Vector Spaces","authors":["A Nayebi, H Blundell"],"snippet":"... 0.01, and 1They were in fact trained on 840 billion tokens of Common Crawl data, as in http://nlp.stanford.edu/ projects/glove/. Adagrad (with the default learning rate of 0.01) as our optimizer, with a minibatch size of 300. Addi ...","url":["http://web.stanford.edu/~anayebi/projects/CS_224U_Final_Project_Writeup.pdf"]}
+ {"year":"2016","title":"Relatedness","authors":["C Barrière - Natural Language Understanding in a Semantic Web …, 2016"],"snippet":"... GloVe has some datasets trained on Wikipedia 2014 + Gigaword 5 (large news corpus) for a total of 6 billion tokens, covering a 400K vocabulary. It has other datasets based on an even larger corpus, the Common Crawl. To ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-41337-2_10"]}
+ {"year":"2016","title":"Reordering space design in statistical machine translation","authors":["N Pécheux, A Allauzen, J Niehues, F Yvon - Language Resources and Evaluation"],"snippet":"... in (Allauzen et al. 2013), and, for English-Czech, the Europarl and CommonCrawl parallel WMT'12 corpora. For each task, a 4-gram language model is estimated using the target side of the training data. We use Ncode with ...","url":["http://link.springer.com/article/10.1007/s10579-016-9353-8"]}
+ {"year":"2016","title":"Richer Interpolative Smoothing Based on Modified Kneser-Ney Language Modeling","authors":["E Shareghi, T Cohn, G Haffari"],"snippet":"... Interdependency of m, data size, and discounts To explore the correlation between these factors we selected the German and investigated this correlation on two different training data sizes: Europarl (61M words), and CommonCrawl 2014 (984M words). ...","url":["http://people.eng.unimelb.edu.au/tcohn/papers/shareghi16emnlp.pdf"]}
+ {"year":"2016","title":"Scaling Up Word Clustering","authors":["J Dehdari, L Tan, J van Genabith"],"snippet":"... The parallel data comes from the WMT-2015 Common Crawl Corpus, News Commentary, Yandex 1M Corpus, and the Wiki Headlines Corpus.7 The monolingual data consists of 2007– 2014 News Commentary and News Crawl articles. ...","url":["http://anthology.aclweb.org/N/N16/N16-3009.pdf"]}
+ {"year":"2016","title":"Selecting Domain-Specific Concepts for Question Generation With Lightly-Supervised Methods","authors":["Y Jin, PTV Le"],"snippet":"... 3 Datasets We make use of two datasets obtained from the In- ternet. One is 200k company profiles from CrunchBase. Another is 57k common crawl business news articles. We refer to these two corpora as “Company Profile Corpus” and “News Corpus”. ...","url":["https://www.researchgate.net/profile/Yiping_Jin2/publication/304751113_Selecting_Domain-Specific_Concepts_for_Question_Generation_With_Lightly-Supervised_Methods/links/57cfc28208ae057987ac127c.pdf"]}
+ {"year":"2016","title":"Semantic Snippets via Query-Biased Ranking of Linked Data Entities","authors":["M Alsarem - 2016"],"snippet":"Page 1. Semantic Snippets via Query-Biased Ranking of Linked Data Entities Mazen Alsarem To cite this version: Mazen Alsarem. Semantic Snippets via Query-Biased Ranking of Linked Data Entities. In- formation Retrieval [cs.IR]. ...","url":["https://hal.archives-ouvertes.fr/tel-01327769/document"]}
+ {"year":"2016","title":"Semantic word embedding neural network language models for automatic speech recognition","authors":["K Audhkhasi, A Sethy, B Ramabhadran - 2016 IEEE International Conference on …, 2016"],"snippet":"... The Gigaword corpus was a suitable choice because of its focus on news domain data in- stead of generic data sets such as Wikipedia or Common Crawl. We used a symmetric window size of 10 words for constructing the word co-occurrence matrix. ...","url":["http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=7472828"]}
+ {"year":"2016","title":"Semantics derived automatically from language corpora necessarily contain human biases","authors":["A Caliskan-Islam, JJ Bryson, A Narayanan - arXiv preprint arXiv:1608.07187, 2016"],"snippet":"... Page 9. GloVe authors provide trained embeddings, which is a “Common Crawl” corpus obtained from a large-scale crawl of the web, containing 840 billion tokens (roughly, words). Tokens in this corpus are case-sensitive and ...","url":["http://arxiv.org/pdf/1608.07187"]}
+ {"year":"2016","title":"Session: P44-Corpus Creation and Querying (1)","authors":["MK Bingel, P Banski, A Witt, C Data, F Lefevre, M Diab…"],"snippet":"... 960 Roland Schäfer CommonCOW: Massively Huge Web Corpora from CommonCrawl Data and a Method to Distribute them Freely under Restrictive EU Copyright Laws 990 Ioannis Manousos Katakis, Georgios Petasis and Vangelis Karkaletsis ...","url":["https://pdfs.semanticscholar.org/f62d/6d9b67532ccd66915481b7cb4047ba03a1f2.pdf"]}
+ {"year":"2016","title":"Shared Task on Quality Assessment for Text Simplification","authors":["S Štajner, M Popovic, H Saggion, L Specia, M Fishel - Training"],"snippet":"... The parameters for the ensemble were obtained using particle swarm optimisation under multiple cross-validation scenarios. 2. Treelstm – The metric uses GloVe word vectors8 trained on the Common Crawl corpus and dependency parse trees. ...","url":["https://www.researchgate.net/profile/Maja_Popovic7/publication/301229567_Shared_Task_on_Quality_Assessment_for_Text_Simplification/links/570e179e08ae3199889d4eb5.pdf"]}
+ {"year":"2016","title":"Sheffield Systems for the English-Romanian Translation Task","authors":["F Blain, X Song, L Specia"],"snippet":"... For the two last, we use subsets of both the News Commentary (93%) and the Common Crawl (13%), selected using XenC- v2.12 (Rousseau, 2013) in mode 23 with the parallel corpora (Europarl7, SETimes2) as in-domain data. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2307.pdf"]}
+ {"year":"2016","title":"SoK: Applying Machine Learning in Security-A Survey","authors":["H Jiang, J Nagra, P Ahammad - arXiv preprint arXiv:1611.03186, 2016"],"snippet":"Page 1. SoK: Applying Machine Learning in Security - A Survey Heju Jiang* , Jasvir Nagra, Parvez Ahammad ∗ Instart Logic, Inc. {hjiang, jnagra, pahammad }@instartlogic.com ABSTRACT The idea of applying machine learning ...","url":["https://arxiv.org/pdf/1611.03186"]}
+ {"year":"2016","title":"Source Sentence Simplification for Statistical Machine Translation","authors":["E Hasler, A de Gispert, F Stahlberg, A Waite, B Byrne - Computer Speech & Language, 2016"],"snippet":"... translation lattices. We trained an English-German system on the WMT 2015 training data (Bojar et al., 2015) comprising 4.2M parallel sentences from the Europarl, News Commentary v10 and Commoncrawl corpora. We word ...","url":["http://www.sciencedirect.com/science/article/pii/S0885230816301711"]}
+ {"year":"2016","title":"Syntactically Guided Neural Machine Translation","authors":["F Stahlberg, E Hasler, A Waite, B Byrne - arXiv preprint arXiv:1605.04569, 2016"],"snippet":"... The En-De training set includes Europarl v7, Common Crawl, and News Commentary v10. Sentence pairs with sentences longer than 80 words or length ratios exceeding 2.4:1 were deleted, as were Common Crawl sentences from other languages (Shuyo, 2010). ...","url":["http://arxiv.org/pdf/1605.04569"]}
+ {"year":"2016","title":"SYSTEMS AND METHODS FOR SPEECH TRANSCRIPTION","authors":["A Hannun, C Case, J Casper, B Catanzaro, G Diamos… - US Patent 20,160,171,974, 2016"],"snippet":"... the decoding. The language model was trained on 220 million phrases of the Common Crawl (available at commoncrawl.org), selected such that at least 95% of the characters of each phrase were in the alphabet. Only the most ...","url":["http://www.freepatentsonline.com/y2016/0171974.html"]}
+ {"year":"2016","title":"TAIPAN: Automatic Property Mapping for Tabular Data","authors":["I Ermilov, ACN Ngomo"],"snippet":"... RAM. Gold Standard We aimed to use T2D entity-level Gold Standard (T2D), a reference dataset which consists of 1 748 tables and reflects the actual distribution of the data in the Common Crawl,5 to evaluate our algorithms. ...","url":["http://svn.aksw.org/papers/2016/EKAW_Taipan/public.pdf"]}
+ {"year":"2016","title":"Target-Side Context for Discriminative Models in Statistical Machine Translation","authors":["A Tamchyna, A Fraser, O Bojar, M Junczys-Dowmunt - arXiv preprint arXiv: …, 2016"],"snippet":"... Our English-German system is trained on the data available for the WMT14 translation task: Europarl (Koehn, 2005) and the Common Crawl corpus,3 roughly 4.3 million sentence pairs altogether. We tune the system on the WMT13 test set and we test on the WMT14 set. ...","url":["http://arxiv.org/pdf/1607.01149"]}
+ {"year":"2016","title":"TAXI at SemEval-2016 Task 13: a Taxonomy Induction Method based on Lexico-Syntactic Patterns, Substrings and Focused Crawling","authors":["A Panchenko, S Faralli, E Ruppert, S Remus, H Naets…"],"snippet":"... 59G 59.2 – – – CommonCrawl 168000.0 ‡ – – – FocusedCrawl Food 22.8 7.9 3.4 3.6 ... WebISA. In addition to PattaMaika and PatternSim, we used a publicly available database of English hypernym relations extracted from the CommonCrawl corpus (Seitner et al., 2016). ...","url":["http://web.informatik.uni-mannheim.de/ponzetto/pubs/panchenko16.pdf"]}
+ {"year":"2016","title":"Temporal Attention-Gated Model for Robust Sequence Classification","authors":["W Pei, T Baltrušaitis, DMJ Tax, LP Morency - arXiv preprint arXiv:1612.00385, 2016"],"snippet":"... 4.2.2 Experimental Setup We utilize 300-d Glove word vectors pretrained over the Common Crawl [27] as the features for each word of the sentences. Our model is well suitable to perform sentiment analysis using sentence-level labels. ...","url":["https://arxiv.org/pdf/1612.00385"]}
+ {"year":"2016","title":"The 2016 KIT IWSLT Speech-to-Text Systems for English and German","authors":["TS Nguyen, M Müller, M Sperber, T Zenkel, K Kilgour…"],"snippet":"... Page 4. Text corpus # Words TED 3.6m Fisher 10.4m Switchboard 1.4m TEDLIUM dataselection 155m News + News-commentary + -crawl 4,478m Commoncrawl 185m GIGA 2323m Table 3: English language modeling data. Text corpus # Words ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_24.pdf"]}
+ {"year":"2016","title":"The AFRL-MITLL WMT16 News-Translation Task Systems","authors":["J Gwinnup, T Anderson, G Erdmann, K Young, M Kazi… - Proceedings of the First …, 2016"],"snippet":"... to build a monolithic language model from the following sources: Yandex4, Commoncrawl (Smith et al., 2013), LDC Gigaword English v5 (Parker et al., 2011) and News Commentary. Submission system 1 included the data selected from the large Commoncrawl corpus as ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2313.pdf"]}
+ {"year":"2016","title":"The CogALex-V Shared Task on the Corpus-Based Identification of Semantic Relations","authors":["E Santus, A Gladkova, S Evert, A Lenci - COLING 2016, 2016"],"snippet":"... Team Method (s) Corpus size Corpus GHHH Word analogies, linear regression and multi-task CNN 100B 6B 840B Google News (pre-trained word2vec embeddings, 300 dim.); Wikipedia+ Gigaword 5 (pre-trained GloVe embeddings, 300 dim.), Common Crawl (pre-trained ...","url":["https://sites.google.com/site/cogalex2016/home/accepted-papers/CogALex-V_Proceedings.pdf#page=83"]}
+ {"year":"2016","title":"The Edinburgh/LMU Hierarchical Machine Translation System for WMT 2016","authors":["M Huck, A Fraser, B Haddow - Proc. of the ACL 2016 First Conf. on Machine …, 2016"],"snippet":"... CommonCrawl LM training data in background LM ... Utilizing a larger amount of target-side monolingual resources by appending the CommonCrawl corpus to the background LM's training data is very beneficial and increases the BLEU scores by around one point. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2315.pdf"]}
+ {"year":"2016","title":"The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16","authors":["F Stahlberg, E Hasler, B Byrne - arXiv preprint arXiv:1606.04963, 2016","FSEHB Byrne"],"snippet":"This paper presents the University of Cambridge submission to WMT16. Motivated by the complementary nature of syntactical machine translation and neural machine translation (NMT), we exploit the synergies of Hiero and NMT in different …","url":["http://arxiv.org/pdf/1606.04963","https://ar5iv.labs.arxiv.org/html/1606.04963"]}
+ {"year":"2016","title":"The ILSP/ARC submission to the WMT 2016 Bilingual Document Alignment Shared Task","authors":["V Papavassiliou, P Prokopidis, S Piperidis - Proceedings of the First Conference on …, 2016"],"snippet":"... 1http://commoncrawl.org/ 2http://nlp.ilsp.gr/redmine/ilsp-fc/ 3Including modules for metadata extraction, language identification, boilerplate removal, document clean-up, text classification and sentence alignment 733 ... Dirt cheap web-scale parallel text from the common crawl...","url":["http://www.aclweb.org/anthology/W/W16/W16-2375.pdf"]}
+ {"year":"2016","title":"The JHU Machine Translation Systems for WMT 2016","authors":["S Ding, K Duh, H Khayrallah, P Koehn, M Post - … of the First Conference on Machine …, 2016"],"snippet":"... In addition, we included a large language model based on the CommonCrawl monolingual data ... of the language model trained on the monomlingual corpora extracted from Common Crawl... year, large corpora of monolingual data were extracted from Common Crawl (Buck et ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2310.pdf"]}
+ {"year":"2016","title":"The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2016","authors":["TL Ha, E Cho, J Niehues, M Mediani, M Sperber… - Proceedings of the First …, 2016"],"snippet":"... To im- prove the quality of the Common Crawl corpus be- ing used in training, we filtered out noisy sentence pairs using an SVM classifier as described in (Me- diani et al., 2011). All of our translation systems are basically phrase-based. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2314.pdf"]}
+ {"year":"2016","title":"The NTNU-YZU System in the AESW Shared Task: Automated Evalua-tion of Scientific Writing Using a Convolutional Neural Network","authors":["LH Lee, BL Lin, LC Yu, YH Tseng"],"snippet":"... For the GloVe representation, we adopted 4 different datasets for training the vectors including one from Wikipedia 2014 and Gigaword 5 (400K vo- cabulary), two common crawl datasets (uncased 1.9M vocabulary, and cased 2.2M vocabulary) and one Twitter dataset (1.2M ...","url":["http://anthology.aclweb.org/W/W16/W16-0513.pdf"]}
+ {"year":"2016","title":"The RWTH Aachen Machine Translation System for IWSLT 2016","authors":["JT Peter, A Guta, N Rossenbach, M Graça, H Ney"],"snippet":"... ich war fünf Mal dort oben . . Figure 1: An example of multiple phrasal segmentations taken from the common crawl corpus. The JTR sequence is indicated by blue arcs. The distinct phrasal segmentations are shown in red and shaded green colour. log-linear framework. ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_23.pdf"]}
+ {"year":"2016","title":"Topics of Controversy: An Empirical Analysis of Web Censorship Lists","authors":["Z Weinberg, M Sharif, J Szurdi, N Christin - Proceedings on Privacy Enhancing …, 2017"],"snippet":"... Common Crawl Finally, this is the closest available ap- proximation to an unbiased sample of the entire Web. The Common Crawl Foundation continuously operates a large-scale Web crawl and publishes the results [27]. Each crawl contains at least a billion pages. ...","url":["https://www.andrew.cmu.edu/user/nicolasc/publications/Weinberg-PETS17.pdf"]}
+ {"year":"2016","title":"Toward Multilingual Neural Machine Translation with Universal Encoder and Decoder","authors":["TL Ha, J Niehues, A Waibel - arXiv preprint arXiv:1611.04798, 2016"],"snippet":"... translation and the web-crawled parallel data (CommonCrawl). ... network. We mix the TED parallel corpus and the substantial monolingual corpus (EPPS+NC+ CommonCrawl) and train a mix-source NMT system from those data. ...","url":["https://arxiv.org/pdf/1611.04798"]}
+ {"year":"2016","title":"Towards a Complete View of the Certificate Ecosystem","authors":["B VanderSloot, J Amann, M Bernhard, Z Durumeric… - 2016"],"snippet":"... In 36th IEEE Symposium on Security and Privacy, May 2015. [5] Certificate Transparency: Extended validation in Chrome. https://www.certificate-transparency.org/ev-ct-plan. [6] Common Crawl. https://commoncrawl.org/. [7] The DROWN attack. https://drownattack.com/. ...","url":["https://jhalderm.com/pub/papers/https-perspectives-imc16.pdf"]}
+ {"year":"2016","title":"Towards More Accurate Statistical Profiling of Deployed schema. org Microdata","authors":["R Meusel, D Ritze, H Paulheim - Journal of Data and Information Quality (JDIQ), 2016"],"snippet":"... Springer International Publishing. 44. Alex Stolz and Martin Hepp. 2015. Towards crawling the web for structured data: Pitfalls of common crawl for e-commerce. In Proceedings of the 6th International Workshop on Consuming Linked Data (COLD ISWC'15). ...","url":["http://dl.acm.org/citation.cfm?id=2992788"]}
+ {"year":"2016","title":"Translation of Unknown Words in Low Resource Languages","authors":["B Gujral, H Khayrallah, P Koehn"],"snippet":"... worthy trade-off. Word Embedding: For this technique, we collect Hindi monolingual data from Wikipedia dump (Al-Rfou et al., 2013) and Commoncrawl (Buck et al., 2014),2 with a total of about 29 million tokens. For Uzbek, the ...","url":["https://pdfs.semanticscholar.org/f130/2e20b4dabb48b8442f857426c28b205287f1.pdf"]}
+ {"year":"2016","title":"TripleSent: a triple store of events associated with their prototypical sentiment","authors":["V Hoste, E Lefever, S van der Waart van Gulik… - eKNOW 2016: The Eighth …, 2016"],"snippet":"... These events will be obtained by extracting patterns for highly explicit sentiment expressions (eg, “I hate” or “I love”) or from large web data crawls (eg, commoncrawl.org), which will subsequently be syntactically and semantically parsed to extract events and sentiment triples. ...","url":["https://biblio.ugent.be/publication/8071695/file/8071708"]}
+ {"year":"2016","title":"Undercounting File Downloads from Institutional Repositories","authors":["P OBrien, K Arlitsch, L Sterman, J Mixter, J Wheeler… - Journal of Library …, 2016"],"snippet":"Page 1. © Patrick OBrien, Kenning Arlitsch, Leila Sterman, Jeff Mixter, Jonathan Wheeler, and Susan Borda Address correspondence to Patrick OBrien, Semantic Web Research Director, Montana State University, PO Box 173320, Bozeman, MT 59717-3320, USA. ...","url":["http://scholarworks.montana.edu/xmlui/bitstream/handle/1/9943/IR-Undercounting-preprint_2016-07.pdf?sequence=3&isAllowed=y"]}
+ {"year":"2016","title":"User Modeling in Language Learning with Macaronic Texts","authors":["A Renduchintala, R Knowles, P Koehn, J Eisner - Proceedings of ACL, 2016"],"snippet":"... We translated each German sentence using the Moses Statistical Machine Translation (SMT) toolkit (Koehn et al., 2007). The SMT system was trained on the German-English Commoncrawl parallel text used in WMT 2015 (Bojar et al., 2015). ...","url":["https://www.cs.jhu.edu/~jason/papers/renduchintala+al.acl16-macmodel.pdf"]}
+ {"year":"2016","title":"Using Feedforward and Recurrent Neural Networks to Predict a Blogger's Age","authors":["T Moon, E Liu"],"snippet":"... The embedding matrix L ∈ RV ×d is initialized for d = 300 with GloVe word vectors trained on the Common Crawl data set [9]. If a token does not correspond to any pre-trained word vector, a random word vector is generated with Xavier initialization [2]. The unembedded vector ...","url":["http://cs224d.stanford.edu/reports/tym1.pdf"]}
+ {"year":"2016","title":"Vive la petite différence! Exploiting small differences for gender attribution of short texts","authors":["F Gralinski, R Jaworski, Ł Borchmann, P Wierzchon"],"snippet":"... The procedure of preparing the HSSS corpus was to take Common Crawl-based Web corpus1 of Polish [4] and grep for lines ... Classification with Deep Learning (2015) 4. Buck, C., Heafield, K., van Ooyen, B.: N-gram counts and language models from the common crawl...","url":["http://www.staff.amu.edu.pl/~rjawor/tsd-article.pdf"]}
+ {"year":"2016","title":"Vive la Petite Différence!","authors":["F Graliński, R Jaworski, Ł Borchmann, P Wierzchoń - International Conference on …, 2016"],"snippet":"... The research was conducted on the publicly available corpus called “He Said She Said”, consisting of a large number of short texts from the Polish version of Common Crawl... Keywords. Gender attribution Text classification Corpus Common Crawl Research reproducibility. ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-45510-5_7"]}
+ {"year":"2016","title":"VoldemortKG: Mapping schema. org and Web Entities to Linked Open Data","authors":["A Tonon, V Felder, DE Difallah, P Cudré-Mauroux"],"snippet":"... that apply to the Common Crawl corpus.12 4 The VoldemortKG Knowledge Graph To demonstrate the potential of the dataset we release, we built a proof of concept knowledge graph called VoldemortKG. VoldemortKG integrates schema.org 12 http://commoncrawl.org/terms ...","url":["http://daplab.ch/wp-content/uploads/2016/08/voldemort.pdf"]}
+ {"year":"2016","title":"What does the Web remember of its deleted past? An archival reconstruction of the former Yugoslav top-level domain","authors":["A Ben-David - New Media & Society, 2016"],"snippet":"... The completeness of the reconstruction effort could have been aided by consulting other large repositories of temporal Web data, such as Common Crawl, or by simply contacting the Internet Archive and requesting for all domains in the .yu domain. ...","url":["http://nms.sagepub.com/content/early/2016/04/27/1461444816643790.abstract"]}
+ {"year":"2016","title":"What Makes Word-level Neural Machine Translation Hard: A Case Study on English-German Translation","authors":["F Hirschmann, J Nam, J Fürnkranz"],"snippet":"... 5.1 Dataset & Preprocessing Our models were trained on the data provided by the 2014 Workshop on Machine Translation (WMT). Specifically, we used the Europarl v7, Common Crawl, and News Commentary corpora. Our ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1301.pdf"]}
+ {"year":"2016","title":"WHAT: A Big Data Approach for Accounting of Modern Web Services","authors":["M Trevisan, I Drago, M Mellia, HH Song, M Baldi - 2016"],"snippet":"... [5] D. Plonka and P. Barford, “Flexible Traffic and Host Profiling via DNS Rendezvous,” in Proc. of the SATIN, 2011, pp. 1–8. [6] “Common Crawl,” http://commoncrawl.org/. [7] A. Finamore et al., “Experiences of Internet Traffic Monitoring with Tstat,” IEEE Netw., vol. 25, no. 3, pp. ...","url":["http://www.tlc-networks.polito.it/mellia/papers/BMLIT_web_meter.pdf"]}
+ {"year":"2016","title":"Which argument is more convincing? Analyzing and predicting convincingness of Web arguments using bidirectional LSTM","authors":["I Habernal, I Gurevych"],"snippet":"... Memory (BLSTM) neural network for end-to-end processing.9 The input layer relies on pre-trained word embeddings, in particular GloVe (Pennington et al., 2014) trained on 840B tokens from Common Crawl;10 the embedding weights are further updated during training. ...","url":["https://www.informatik.tu-darmstadt.de/fileadmin/user_upload/Group_UKP/publikationen/2016/acl2016-convincing-arguments-camera-ready.pdf"]}
+ {"year":"2016","title":"Wikipedia mining of hidden links between political leaders","authors":["KM Frahm, K Jaffrès-Runser, DL Shepelyansky - arXiv preprint arXiv:1609.01948, 2016"],"snippet":"... At present directed networks of real systems can be very large (about 4.2 million articles for the English Wikipedia edition in 2013 [10] or 3.5 billion web pages for a publicly accessible web crawl that was gathered by the Common Crawl Foundation in 2012 [28]). ...","url":["http://arxiv.org/pdf/1609.01948"]}
+ {"year":"2016","title":"WOLVESAAR at SemEval-2016 Task 1: Replicating the Success of Monolingual Word Alignment and Neural Embeddings for Semantic Textual Similarity","authors":["H Bechara, R Gupta, L Tan, C Orasan, R Mitkov… - Proceedings of SemEval, 2016"],"snippet":"... 2We use the 300 dimensions vectors from the GloVe model trained on the Commoncrawl Corpus with 840B tokens, 2.2M vocabulary. distributions p and pθ using regularised KullbackLeibler (KL) divergence. J(θ) = 1 n n ∑ i=1KL(p(i)∣ ∣ ∣ ∣ p (i) θ ) + λ2||θ||2 2 (8) ...","url":["http://www.anthology.aclweb.org/S/S16/S16-1096.pdf"]}
+ {"year":"2016","title":"Word Representation on Small Background Texts","authors":["L Li, Z Jiang, Y Liu, D Huang - Chinese National Conference on Social Media …, 2016"],"snippet":"... For example, Pennington et al. (2014) used Wikipedia, Gigaword 5 and Common Crawl to learn word representations, each of which contained billions of tokens. There was not always a monotonic increase in performance as the amount of background texts increased. ...","url":["http://link.springer.com/chapter/10.1007/978-981-10-2993-6_12"]}
+ {"year":"2016","title":"Word2Vec vs DBnary: Augmenting METEOR using Vector Representations or Lexical Resources?","authors":["C Servan, A Berard, Z Elloumi, H Blanchon, L Besacier - arXiv preprint arXiv: …, 2016","C Servan, A Bérard, Z Elloumi, H Blanchon, L Besacier"],"snippet":"... German–English Europarl V7 + news commentary V10 2.1 M 57.2 M 59.7 M Russian–English Common Crawl + news commentary V10 + Yandex 2.0 M 47.2 M 50.3 M Table 2: Bilingual corpora used to train the word embeddings for each language pair. ...","url":["http://www.aclweb.org/anthology/C/C16/C16-1110.pdf","https://arxiv.org/pdf/1610.01291"]}
+ {"year":"2016","title":"Yandex School of Data Analysis approach to English-Turkish translation at WMT16 News Translation Task","authors":["A Dvorkovich, S Gubanov, I Galinskaya - Proceedings of the First Conference on …, 2016"],"snippet":"... 2.7 Data For training translation model, language models, and NMT reranker, we used only the provided constrained data (SETIMES 2 parallel TurkishEnglish corpus, and monolingual Turkish and En- glish Common Crawl corpora). ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2311.pdf"]}
2017.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
2018.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
2019.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
2020.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
2021.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
2022.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
2023.jsonl ADDED
The diff for this file is too large to render. See raw diff
 
2024.jsonl ADDED
The diff for this file is too large to render. See raw diff
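Each yearly file (`2016.jsonl` … `2024.jsonl`) stores one JSON object per line with the fields `year`, `title`, `authors`, `snippet`, and `url`, where `authors` and `url` are lists because a citation may appear in several variants. A minimal sketch of reading one such record with the Python standard library (the record below is copied from `2016.jsonl`, with the `snippet` field abbreviated here for brevity):

```python
import json

# One record from 2016.jsonl; every line in these yearly files follows
# the same schema: "year", "title", "authors", "snippet", "url".
# "authors" and "url" are JSON arrays.
line = ('{"year":"2016","title":"Wikipedia mining of hidden links between '
        'political leaders","authors":["KM Frahm, K Jaffr\\u00e8s-Runser, '
        'DL Shepelyansky - arXiv preprint arXiv:1609.01948, 2016"],'
        '"snippet":"...","url":["http://arxiv.org/pdf/1609.01948"]}')

record = json.loads(line)
print(record["year"], "-", record["title"])
print(record["url"][0])
```

To load an entire file at once, `pandas.read_json("2016.jsonl", lines=True)` (or the `datasets` library, as listed in the dataset metadata) yields the same five columns, one row per citation.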