citations / 2017.jsonl

Modalities: Text
Formats: json (JSON Lines)
Libraries: Datasets, pandas

Uploaded by fheilz ("Upload 9 files", commit fb2c626, verified)
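The file is JSON Lines: each line is an independent JSON object with the fields `year`, `title`, `authors`, `snippet`, and `url`. A minimal sketch of reading it with the Python standard library (the listed libraries also handle this format directly, e.g. `pandas.read_json(..., lines=True)` or `datasets.load_dataset("json", data_files=...)`):

```python
import json

# One record copied verbatim from this file; every other line has the
# same shape: year, title, authors (list), snippet, url (list).
line = '{"year":"2017","title":"A Large Self-Annotated Corpus for Sarcasm","authors":["M Khodak, N Saunshi, K Vodrahalli - arXiv preprint arXiv:1704.05579, 2017"],"snippet":"...","url":["https://arxiv.org/pdf/1704.05579"]}'

record = json.loads(line)
print(record["year"], "-", record["title"])

# Loading the whole file is one json.loads call per non-empty line.
def read_jsonl(path):
    with open(path, encoding="utf-8") as f:
        return [json.loads(l) for l in f if l.strip()]
```

Note that `year` is stored as a string, not an integer, so filtering by year should compare against `"2017"` (or cast first).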
{"year":"2017","title":"$ k $-Nearest Neighbor Augmented Neural Networks for Text Classification","authors":["Z Wang, W Hamza, L Song - arXiv preprint arXiv:1708.07863, 2017"],"snippet":"... Table 1 shows the statistics of all the datasets. Experiment Settings We initialize word embeddings with the 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus (Pennington, Socher, and Manning, 2014). ...","url":["https://arxiv.org/pdf/1708.07863"]}
{"year":"2017","title":"A Context-Aware Recurrent Encoder for Neural Machine Translation","authors":["B Zhang, D Xiong, J Su, H Duan - IEEE/ACM Transactions on Audio, Speech, and …, 2017"],"snippet":"Page 1. 2329-9290 (c) 2017 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This ...","url":["http://ieeexplore.ieee.org/abstract/document/8031316/"]}
{"year":"2017","title":"A Continuously Growing Dataset of Sentential Paraphrases","authors":["W Lan, S Qiu, H He, W Xu"],"snippet":"... We used 300-dimensional word vectors trained on Common Crawl and Twitter, summed the vectors for each sentence, and computed the cosine similarity. WMF/OrMF Weighted Matrix Factorization (WMF) (Guo and Diab, 2012) is an unsupervised latent space model. ...","url":["https://pdfs.semanticscholar.org/dfd2/bc4a55bfe59554ec1d5086e5c0f5a503c8a8.pdf"]}
{"year":"2017","title":"A Framework for Enriching Lexical Semantic Resources with Distributional Semantics","authors":["C Biemann, S Faralli, A Panchenko, SP Ponzetto - arXiv preprint arXiv:1712.08819, 2017"],"snippet":"… This makes our approach highly scalable: in recent experiments we have been accordingly able to apply our method at web scale on the CommonCrawl6, the largest existing public repository of web content … 6 https://commoncrawl.org Page 15 …","url":["https://arxiv.org/pdf/1712.08819"]}
{"year":"2017","title":"A Hybrid Framework for Text Modeling with Convolutional RNN","authors":["C Wang, F Jiang, H Yang - Proceedings of the 23rd ACM SIGKDD International …, 2017"],"snippet":"Page 1. A Hybrid Framework for Text Modeling with Convolutional RNN Chenglong Wang, Feijun Jiang, Hongxia Yang Alibaba Group 969 West Wenyi Road Hangzhou, China 310000 {chenglong.cl,feijun.jiang ,yang.yhx}@alibaba-inc.com ...","url":["http://dl.acm.org/citation.cfm?id=3098140"]}
{"year":"2017","title":"A Large Self-Annotated Corpus for Sarcasm","authors":["M Khodak, N Saunshi, K Vodrahalli - arXiv preprint arXiv:1704.05579, 2017"],"snippet":"... simple low-dimensional document representation. For word vectors we use normalized 300-dimensional GloVe representations trained on the Common Crawl corpus (Pennington et al., 2014). Since we are establishing baselines ...","url":["https://arxiv.org/pdf/1704.05579"]}
{"year":"2017","title":"A Lightweight Front-end Tool for Interactive Entity Population","authors":["H Oiwa, Y Suhara, J Komiya, A Lopatenko - arXiv preprint arXiv:1708.00481, 2017"],"snippet":"... We prepared models trained based on the CommonCrawl corpus and the Twitter corpus1. Note that the specification of the expansion algorithm is not limited to the algorithm described in this paper, as LUWAK considers the Expansion API as an external function. ...","url":["https://arxiv.org/pdf/1708.00481"]}
{"year":"2017","title":"A Memory-Augmented Neural Model for Automated Grading","authors":["S Zhao, Y Zhang, X Xiong, A Botelho, N Heffernan - 2017"],"snippet":"... networks. We used the publicly available pre-trained Glove word embeddings [23], which was trained on 42 billion tokens of web data, from Common Crawl (http://commoncrawl.org/). The dimension of each word vector is 300. ...","url":["https://siyuanzhao.github.io/pdf/L_S_2017.pdf"]}
{"year":"2017","title":"A Nested Attention Neural Hybrid Model for Grammatical Error Correction","authors":["J Ji, Q Wang, K Toutanova, Y Gong, S Truong, J Gao - arXiv preprint arXiv: …, 2017"],"snippet":"Page 1. A Nested Attention Neural Hybrid Model for Grammatical Error Correction Jianshu Ji†, Qinlong Wang†, Kristina Toutanova‡∗, Yongen Gong†, Steven Truong†, Jianfeng Gao§ †Microsoft AI & Research ‡Google Research ...","url":["https://arxiv.org/pdf/1707.02026"]}
{"year":"2017","title":"A Neural Approach to Source Dependency-Based Context Model for Statistical Machine Translation","authors":["K Chen, T Zhao, M Yang, L Liu, A Tamura, R Wang… - IEEE/ACM Transactions on …, 2017"],"snippet":"Page 1. 2329-9290 (c) 2017 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["http://ieeexplore.ieee.org/abstract/document/8105847/"]}
{"year":"2017","title":"A Neural Chatbot with Personality","authors":["H Nguyen, D Morales, T Chin"],"snippet":"... We also experiment with initializing embeddings for en- coder vocabulary and decoder vocabulary using GLoVe 300d Common Crawl [8]. To ensure that our model was implemented correctly, we trained it on a subset of data (3,000 pairs) and saw that the loss converged to 0 ...","url":["http://web.stanford.edu/class/cs224n/reports/2761115.pdf"]}
{"year":"2017","title":"A Query Log Analysis of Dataset Search","authors":["E Kacprzak, LM Koesten, LD Ibánez, E Simperl… - … 2017, Rome, Italy, June 5-8, …, 2017"],"snippet":"... Cafarella estimates more than one billion sources of data on the web as of February 2011, counting structured data extracted from Web pages [3]; and The Web Data Commons project recently extracted 233 million data tables from the Common Crawl [12]. ...","url":["http://books.google.de/books?hl=en&lr=lang_en&id=kiYmDwAAQBAJ&oi=fnd&pg=PA429&dq=commoncrawl&ots=a4pucnNpg8&sig=rhbU3CaLPahnItWRuhUlAHj8J9E"]}
{"year":"2017","title":"A Semi-supervised Framework for Image Captioning","authors":["W Chen, A Lucchi, T Hofmann"],"snippet":"... we use both the 2008-2010 News-CommonCrawl and Eu- roparl corpus 3 as out-of-domain training data. Combined, these two datasets contain ∼ 3M sentences, from which we removed sentences shorter than 7 words or longer than 30 words. ...","url":["https://pdfs.semanticscholar.org/4fa6/a688f350831503d158f8f618c58d1e06bc5d.pdf"]}
{"year":"2017","title":"A Shared Task on Bandit Learning for Machine Translation","authors":["A Sokolov, J Kreutzer, K Sunderland, P Danchenko"],"snippet":"... Data. For training initial or seed MT systems (the input to Algorithm 1), out-of-domain parallel data was restricted to DE-EN parts of Europarl v7, NewsCommentary v12, CommonCrawl and Rapid data from the WMT 2017 News Translation (constrained) task4. Furthermore ...","url":["http://www.cl.uni-heidelberg.de/~riezler/publications/papers/WMT2017.pdf"]}
{"year":"2017","title":"A simple sequence attention model for machine comprehension","authors":["M Hasegawa"],"snippet":"... The data was tokenized with a basic tokenizer and the Stanford GloVe [1] was used as trained word embeddings. To ensure a larger coverage among the extracted tokens the larger ”Common Crawl” with 840B tokens, 2.2M vocab and 300d vectors embeddings were used. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761077.pdf"]}
{"year":"2017","title":"A Survey of Domain Adaptation for Statistical Machine Translation","authors":["H Cuong, K Sima'an"],"snippet":"... Surprisingly, degradation of translation quality is observed even when we train an MT system on large heterogeneous corpora (eg EuroParl, Common Crawl Corpus, UN Corpus, News Commentary) (Shah et al, 2012; Carpuat et al, 2014; Cuong et al, 2016). ...","url":["https://staff.fnwi.uva.nl/c.hoang/mt20172.pdf"]}
{"year":"2017","title":"A Teacher-Student Framework for Zero-Resource Neural Machine Translation","authors":["Y Chen, Y Liu, Y Cheng, VOK Li"],"snippet":"... For the WMT corpus, we evaluate our approach on a Spanish-French (Es-Fr) translation task with a zero-resource setting. We combine the following corpora to form the Es-En and En-Fr parallel corpora: Common Crawl, News Commentary, Europarl v7 and UN. ...","url":["http://nlp.csai.tsinghua.edu.cn/~ly/papers/acl2017_cy.pdf"]}
{"year":"2017","title":"A Unified Query-based Generative Model for Question Generation and Question Answering","authors":["L Song, Z Wang, W Hamza - arXiv preprint arXiv:1709.01058, 2017"],"snippet":"... The encoder and decoder share the same pre-trained word embeddings, which are the 300-dimensional GolVe (Pennington, Socher, and Manning, 2014) word vectors pre-trained from the 840B common crawl corpus, and the embeddings are not updated during training. ...","url":["https://arxiv.org/pdf/1709.01058"]}
{"year":"2017","title":"A Web Corpus for eCare: Collection, Annotation and Learning-Preliminary Results-DRAFT: 20 March 2017","authors":["M Santini, M Alirezai, M Nyström, A Jönsson"],"snippet":"... web, neither within the ”web as a corpus” experience, nor within the ”wacky” initiative, nor with Common Crawl corpus7. 5 See https://en.wikipedia.org/wiki/Fair_use 6 See https://www.jisc.ac. uk/guides/text-and-data-mining-copyright-exception 7 See http://commoncrawl.org/the ...","url":["https://www.researchgate.net/profile/Marina_Santini/publication/315390867_A_Web_Corpus_for_eCare_Collection_Annotation_and_Learning_-_Preliminary_Results_-/links/58cfb829458515b6ed8c1527/A-Web-Corpus-for-eCare-Collection-Annotation-and-Learning-Preliminary-Results.pdf"]}
{"year":"2017","title":"A Web Corpus for eCare: Collection, Lay Annotation and Learning-First Results","authors":["M Santini, A Jönsson, M Nyström, M Alirezai"],"snippet":"... from the web, neither within the \"web as a corpus\" experience, nor within the \"wacky\" initiative, nor with Common Crawl corpus9. ... for human language technology: introducing an LRE special section\" Lang Resources & Evaluation 2017 51 9See http://commoncrawl.org/the ...","url":["https://www.researchgate.net/profile/Marina_Santini/publication/318379265_A_Web_Corpus_for_eCare_Collection_Lay_Annotation_and_Learning_-First_Results-/links/596650de0f7e9b80917fea3e/A-Web-Corpus-for-eCare-Collection-Lay-Annotation-and-Learning-First-Results.pdf"]}
{"year":"2017","title":"A Web Page Distillation Strategy for Efficient Focused Crawling Based on Optimized Naïve Bayes (ONB) Classifier","authors":["AI Saleh, AE Abulwafa, MF Al Rahmawy - Applied Soft Computing, 2017"],"snippet":"The target of a focused crawler (FC) is to retrieve pages related to a specific domain of interest (DOI). However, FCs may be hasted if bad links were injected.","url":["http://www.sciencedirect.com/science/article/pii/S1568494616306536"]}
{"year":"2017","title":"Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks","authors":["M Hutchinson","W Foland, JH Martin - Proceedings of the 55th Annual Meeting of the …, 2017"],"snippet":"... The use of distributed word representations generated from large text corpora is pervasive in modern NLP. We start with 300 dimension GloVe representations (Pennington et al., 2014) trained on the 840 billion word common crawl (Smith et al., 2013). ...","url":["http://www.aclweb.org/anthology/P17-1043","https://zdoc.pub/abstract-meaning-representation-parsing-using-lstm-recurrent.html"]}
{"year":"2017","title":"Accelerating Innovation Through Analogy Mining","authors":["T Hope, J Chan, A Kittur, D Shahaf - arXiv preprint arXiv:1706.05585, 2017"],"snippet":"... In more formal terms, let wi = (w1 i ,w2 i ,...,wT i) be the sequence of GloVe [27] word vectors (pre-trained on Common Crawl web data), representing (x1 i ,x2 i ,...,xT i ). We select all xi word vectors for which ˜p j ik = 1(˜m j ik = 1) for some k, and concatenate them into one ...","url":["https://arxiv.org/pdf/1706.05585"]}
{"year":"2017","title":"Accurate Sentence Matching with Hybrid Siamese Networks","authors":["M Nicosia, A Moschitti - Proceedings of the 2017 ACM on Conference on …, 2017"],"snippet":"Their training split contains 384,348 pairs, and the balanced development and test sets contain 10,000 pairs each. The embeddings are a subset of the 300-dimensional GloVe word vectors pretrained on the Common Crawl corpus, 3 covering the Quora dataset vocabulary …","url":["http://dl.acm.org/citation.cfm?id=3133156"]}
{"year":"2017","title":"Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates","authors":["G Collell, L Van Gool, MF Moens - arXiv preprint arXiv:1711.06821, 2017"],"snippet":"4.5 Word embeddings We use 300-dimensional GloVe word embeddings (Pennington, Socher, and Manning 2014) pre-trained on the Common Crawl corpus (consisting of 840B-tokens), which we obtain from the authors' website.8 …","url":["https://arxiv.org/pdf/1711.06821"]}
{"year":"2017","title":"Adaptation and Combination of NMT Systems: The KIT Translation Systems for IWSLT 2016","authors":["E Cho, J Niehues, TL Ha, M Sperber, M Mediani… - Proceedings of the 13th …, 2016"],"snippet":"... to 1.0. We use a beam search for decoding, with the beam size of 12. The baseline systems were trained on the WMT parallel data. For both languages, this consists of the EPPS, NC, CommonCrawl corpus. In addition, we ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_17.pdf"]}
{"year":"2017","title":"Adapting Sequence Models for Sentence Correction","authors":["A Schmaltz, Y Kim, AM Rush, SM Shieber - arXiv preprint arXiv:1707.09067, 2017"],"snippet":"... provided access to SRILM (Stolcke, 2002) for running Junczys-Dowmunt and Grundkiewicz (2016) 7We found that including the features and data associated with the large language models of Junczys-Dowmunt and Grundkiewicz (2016), created from Common Crawl text ...","url":["https://arxiv.org/pdf/1707.09067"]}
{"year":"2017","title":"Adversarial Training for Cross-Domain Universal Dependency Parsing","authors":["M Sato, H Manabe, H Noji, Y Matsumoto"],"snippet":"... initialized POS tag embeddings. For the model 2The pre-trained word embeddings are provided by the CoNLL 2017 Shared Task organizers. These are trained with CommonCrawl and Wikipedia. with adversarial training, we ...","url":["http://universaldependencies.org/conll17/proceedings/pdf/K17-3007.pdf"]}
{"year":"2017","title":"Agree to Disagree: Improving Disagreement Detection with Dual GRUs","authors":["S Hiray, V Duppada"],"snippet":"... NLP tasks [24] [25]. For this task of (dis)agreement classification, we use GloVe embeddings of 300 dimensions trained on Common Crawl with 840 billion tokens, 2.2 million vocabulary. Page 3. 4.2. Lexicons We used affect, sentiment ...","url":["https://www.deepaffects.com/s/agree-to-disagree.pdf"]}
{"year":"2017","title":"All-but-the-Top: Simple and Effective Postprocessing for Word Representations","authors":["J Mu, S Bhat, P Viswanath - arXiv preprint arXiv:1702.01417, 2017"],"snippet":"... We test our observations on various word representations: four publicly available word representations (WORD2VEC1 (Mikolov et al., 2013) trained using Google News, GLOVE2 (Pennington et al., 2014) trained using Common Crawl, RAND-WALK (Arora et al., 2016 ...","url":["https://arxiv.org/pdf/1702.01417"]}
{"year":"2017","title":"An Empirical Analysis of NMT-Derived Interlingual Embeddings and their Use in Parallel Sentence Identification","authors":["C España-Bonet, ÁC Varga, A Barrón-Cedeño… - arXiv preprint arXiv: …, 2017","J van Genabith, A Barron-Cedeno, C España-Bonet…"],"snippet":"... context vectors. The parallel corpus includes data from United Na- tions (Rafalovitch and Dale, 2009), Common Crawl2, News Commentary3 and IWSLT4. We train system S1-w after cleaning and tokenising the texts. We ...","url":["https://arxiv.org/pdf/1704.05415","https://deepai.org/publication/an-empirical-analysis-of-nmt-derived-interlingual-embeddings-and-their-use-in-parallel-sentence-identification"]}
{"year":"2017","title":"An End-to-End Neural Architecture for Reading Comprehension","authors":["M Burkle, M Camacho, N Danyliw"],"snippet":"... referencing a fixed sized vocabulary using GloVe word embeddings from the Wikipedia 6B word dataset or the CommonCrawl 840B word ... Wikipedia corpus of ∼6 billion words, we moved to the 300-dimensional word embeddings trained on the Common Crawl vocabulary of ...","url":["http://web.stanford.edu/class/cs224n/reports/2761845.pdf"]}
{"year":"2017","title":"An In-Depth Experimental Comparison of RNTNs and CNNs for Sentence Modeling","authors":["Z Ahmadi, M Skowron, A Stier, S Kramer"],"snippet":"... On other datasets, we use the model trained on the web data from Common Crawl which contains a case-sensitive vocabulary of size 2.2 million. Experiments show that RNTNs work best when the word vector dimension is set between 25 and 35 [11]. ...","url":["http://www.ofai.at/~marcin.skowron/papers/DS2017.pdf"]}
{"year":"2017","title":"An overview of Lithuanian Internet media n-gram corpus","authors":["I Bumbuliene, L Boizou, J Mandravickaite, T Krilavicius - 2017"],"snippet":"... 19(1), pp. 61-93, 2013. [2] C. Buck, K. Heafield, B. Van Ooyen, “N-gram counts and language models from the common crawl,” in LREC, vol. 2. Citeseer, p. 4, 2014. [3] A. Pauls, D. Klein, “Faster and smaller n-gram language models,” in Proc. ...","url":["http://ceur-ws.org/Vol-1853/p05.pdf"]}
{"year":"2017","title":"Analogy Mining for Specific Design Needs","authors":["K Gilon, FY Ng, J Chan, HL Assaf, A Kittur, D Shahaf - arXiv preprint arXiv …, 2017"],"snippet":"… We use Glove pre-trained on the Common Crawl dataset (840B tokens, 300d vectors)1. We then normalize each document vector, and calculate cosine similarity (which is the same as Euclidean distance in this case) between the resulting vectors for each seed and all other …","url":["https://arxiv.org/pdf/1712.06880"]}
{"year":"2017","title":"Analysing and Improving embedded Markup of Learning Resources on the Web","authors":["S Dietze, D Taibi, R Yu, P Barker, M d'Aquin - 2017"],"snippet":"... 6 http://commoncrawl.org/ 7 http://grouper.ieee.org/groups/ltsc/wg12/20020612-Final-LOMDraft.html 8 https://www.imsglobal.org ... The Web Data Commons [1], a recent initiative investigating the Common Crawl, ie a Web crawl of approximately 2 billion HTML pages from over ...","url":["https://www.researchgate.net/profile/Stefan_Dietze/publication/313964715_Analysing_and_Improving_embedded_Markup_of_Learning_Resources_on_the_Web/links/58b05d1545851503be97ddfc/Analysing-and-Improving-embedded-Markup-of-Learning-Resources-on-the-Web.pdf"]}
{"year":"2017","title":"Analysis of semantic URLs to support automated linking of structured data on the web","authors":["S Lynden - Proceedings of the 7th International Conference on …, 2017"],"snippet":"... The Web Data Commons [13] effort to study the evolution of structured data on the web analyse the Common Crawl Web Corpus annually, most recently finding that about 38% of web pages contain some form of structured data. ...","url":["http://dl.acm.org/citation.cfm?id=3102265"]}
{"year":"2017","title":"Analyzing Movie Reviews Sentiment","authors":["D Sarkar, R Bali, T Sharma - Practical Machine Learning with Python, 2018"],"snippet":"In this chapter, we continue with our focus on case-study oriented chapters, where we will focus on specific real-world problems and scenarios and how we can use Machine Learning to solve them. We wil.","url":["https://link.springer.com/chapter/10.1007/978-1-4842-3207-1_7"]}
{"year":"2017","title":"Analyzing Neural MT Search and Model Performance","authors":["J Niehues, E Cho, TL Ha, A Waibel - ACL 2017, 2017"],"snippet":"... For the single models, we apply the early stopping based on the validation score. The baseline system is trained on the WMT parallel data, namely EPPS, NC, CommonCrawl and TED corpus. As validation data we used the newstest13 set from IWSLT evaluation campaign. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-32.pdf#page=23"]}
{"year":"2017","title":"Analyzing the compositional properties of word embeddings","authors":["T Scheepers, E Gavves, E Kanoulas"],"snippet":"... 3For GloVe we used the representations from the Common Crawl which has 840B tokens and a vocabulary of 2.2M. ... trained on used news data, where fastText and GloVe use more definitional data, Wikipedia and Common Crawl respectively. ...","url":["https://thijs.ai/papers/scheepers-gavves-kanoulas-analyzing-compositional-properties.pdf"]}
{"year":"2017","title":"Any-gram Kernels for Sentence Classification: A Sentiment Analysis Case Study","authors":["R Kaljahi, J Foster - arXiv preprint arXiv:1712.07004, 2017"],"snippet":"We use cosine similarity for word embedding similarities and the GloVe (Pennington et al., 2014) Common Crawl (1.9M vocabulary) word embeddings with a dimensionality of 300.6 … (2014). The GloVe Common Crawl vectors, however, performed better. Page 7","url":["https://arxiv.org/pdf/1712.07004"]}
{"year":"2017","title":"AraVec: A set of Arabic Word Embedding Models for use in Arabic NLP","authors":["AB Soliman, K Eissa, SR El-Beltagy - Linguistics, 2017"],"snippet":"... Here it is important to note that the Common Crawl project does not provide any technique for identifying or selecting the language of web pages to ... 5 http://www.internetworldstats.com/stats19. htm 6 http://www.internetworldstats.com/stats5.htm 7 http://commoncrawl.org 8 https ...","url":["https://www.researchgate.net/profile/Samhaa_El-Beltagy2/publication/319880027_AraVec_A_set_of_Arabic_Word_Embedding_Models_for_use_in_Arabic_NLP/links/59bfef730f7e9b48a29ba3a8/AraVec-A-set-of-Arabic-Word-Embedding-Models-for-use-in-Arabic-NLP.pdf"]}
{"year":"2017","title":"Architecting for Performance Clarity in Data Analytics Frameworks","authors":["K Ousterhout - 2017"],"snippet":"Page 1. Architecting for Performance Clarity in Data Analytics Frameworks Kay Ousterhout Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2017-158 http://www2 ...","url":["https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-158.pdf"]}
{"year":"2017","title":"Archival Crawlers and JavaScript: Discover More Stuff but Crawl More Slowly","authors":["JF Brunelle, MC Weigle, ML Nelson - Digital Libraries (JCDL), 2017 ACM/IEEE Joint …, 2017"],"snippet":"... If our method was applied to the July 2015 Common Crawl dataset, a web-scale archival crawler will discover an additional 7.17 PB (5.12 times more) of information per year. This illustrates the significant increase in resources necessary for more thorough archival crawls. ...","url":["http://ieeexplore.ieee.org/abstract/document/7991554/"]}
{"year":"2017","title":"Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning","authors":["L Lucy, J Gauthier - arXiv preprint arXiv:1705.11168, 2017"],"snippet":"... attributive features. Collell and Moens (2016) find that word representations fail to pre- # word tokens # word types GloVe (Common Crawl) 840B 2.2M GloVe (Wiki+Gigaword) 6B 400K word2vec 100B 3M Table 1: Statistics ...","url":["https://arxiv.org/pdf/1705.11168"]}
{"year":"2017","title":"Assessing Convincingness of Arguments in Online Debates with Limited Number of Features","authors":["LA Chalaguine, C Schulz"],"snippet":"... 6http://nlp.stanford.edu/projects/glove/ 7http://commoncrawl.org/ 8because including stems, lemmas or both had no impact on the results we included stems only in our “top feature set” because they are less expensive to compute 80 Page 7. ture resulted in 66% in our case. ...","url":["https://www.aclweb.org/anthology/E/E17/E17-4008.pdf"]}
{"year":"2017","title":"ATOL: A Framework for Automated Analysis and Categorization of the Darkweb Ecosystem","authors":["SGPPV Yegneswaran, KNA Das - 2017"],"snippet":"... 2016), and an open repository of (non-onion) Web crawling data, called Common Crawl (Common Crawl Foundation 2016). Using these data sources as starting points, we developed tools to acquire additional onion addresses both from the onion Web and the open Web. ...","url":["http://www.csl.sri.com/users/shalini/atol_aics17_cameraready.pdf"]}
{"year":"2017","title":"Attention-based Dialog Embedding for Dialog Breakdown Detection","authors":["C Park, K Kim, S Kim"],"snippet":"… sentence. We used GloVe vectors of dimension 100 trained by the Twitter data. We used one from Twitter data rather than Common Crawl data be- cause it is more closely related to the general chat domain of our task. After","url":["http://workshop.colips.org/dstc6/papers/track3_paper14_park.pdf"]}
{"year":"2017","title":"Attributes2Classname: A discriminative model for attribute-based unsupervised zero-shot learning","authors":["B Demirel, RG Cinbis, NI Cinbis - arXiv preprint arXiv:1705.01734, 2017"],"snippet":"... For each class and attribute name, we generate a 300-dimensional word embedding vector using GloVe [26] based on Common Crawl Data2 ... 2http://commoncrawl.org/the-data/ 3http://nlp.stanford. edu/projects/glove/ 4We will release our code and models upon publication. ...","url":["https://arxiv.org/pdf/1705.01734"]}
{"year":"2017","title":"Automated Categorization of Onion Sites for Analyzing the Darkweb Ecosystem","authors":["S Ghosh, A Das, P Porras, V Yegneswaran, A Gehani - 2017"],"snippet":"... Our sources of seed data include various published onion datasets( [32], [5], [25], [22]), .onion references from a large collection of recursive DNS resolvers [17], and an open repository of (non-onion) web crawling data, called Common Crawl [11]. ...","url":["http://www.csl.sri.com/users/gehani/papers/KDD-2017.Onions.pdf"]}
{"year":"2017","title":"Automatic Learning Content Sequence via Linked Open Data","authors":["R Manrique"],"snippet":"... in [14, 13]. We also plan to use a recent release dataset [5] that contains all embedded Learning Resource Metadata Initiative (LRMI)4 markup statements extracted from the Common Crawl releases 2013-2015. Each entity description ...","url":["https://iswc2017.ai.wu.ac.at/wp-content/uploads/papers/DC/paper_19.pdf"]}
{"year":"2017","title":"Automatic Threshold Detection for Data Selection in Machine Translation","authors":["MS Duma, W Menzel - WMT 2017, 2017"],"snippet":"... The in-domain corpora were made available by the competition and the general domain corpora we have chosen to select data from are the Wikipedia corpora (Wolk and Marasek, 2014) and the Commoncrawl corpora1. Experiments 1http://commoncrawl. org/ 483 Page 508. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=507"]}
{"year":"2017","title":"Better Text Understanding Through Image-To-Text Transfer","authors":["K Kurach, S Gelly, M Jastrzebski, P Haeusser… - arXiv preprint arXiv: …, 2017"],"snippet":"... We used word embeddings obtained from three methods: • Glove: embeddings proposed in [20], trained on a Common Crawl dataset with 840 billion tokens. • M-Skip-Gram: embeddings proposed in [12], trained on Wikipedia and a set of images from ImageNet. ...","url":["https://arxiv.org/pdf/1705.08386"]}
{"year":"2017","title":"Biasing Attention-Based Recurrent Neural Networks Using External Alignment Information","authors":["T Alkhouli, H Ney"],"snippet":"... (1). We use the full bilingual data of the EnglishRomanian task. For the GermanEnglish task, we choose the common crawl, news commentary and European parliament bilingual data. ... This is to remove noisy sentence pairs that are frequent in the common crawl corpus. ...","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1036/Alkhouli-WMT%202017-2017.pdf"]}
{"year":"2017","title":"Big Data","authors":["R Womack"],"snippet":"Page 1. Topics in Data Science / Өгөгдлийн шинжлэх ухаан Rutgers University has made this article freely available. Please share how this access benefits you. Your story matters. [https://rucore.libraries.rutgers.edu/rutgers-lib/52378/story/] …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/52378/PDF/1/play/"]}
{"year":"2017","title":"Big Data: A Very Short Introduction","authors":["DE Holmes - 2017"]}
{"year":"2017","title":"Bilateral Multi-Perspective Matching for Natural Language Sentences","authors":["Z Wang, W Hamza, R Florian - arXiv preprint arXiv:1702.03814, 2017"],"snippet":"... 4.5. 4.1 Experiment Settings We initialize the word embeddings in the word representation layer with the 300-dimensional GloVe word vectors pretrained from the 840B Common Crawl corpus [Pennington et al., 2014]. For ...","url":["https://arxiv.org/pdf/1702.03814"]}
{"year":"2017","title":"Bilingual Word Embeddings for Bilingual Terminology Extraction from Specialized Comparable Corpora","authors":["A Hazem, E Morin - Proceedings of the Eighth International Joint …, 2017"],"snippet":"… and economic commentary crawled from the web (NC), Europarl corpus is a parallel corpus extracted from the proceedings of the European Parliament (EP7), JRC acquis corpus is a collection of legislative European Union documents (JRC) and Common Crawl corpus (CC","url":["http://www.aclweb.org/anthology/I17-1069"]}
{"year":"2017","title":"BLEU2VEC: the Painfully Familiar Metric on Continuous Vector Space Steroids","authors":["A Tättar, M Fishel - WMT 2017, 2017"],"snippet":"... modifications. data from the WMT'2017 news translation shared task: we took a random 50 million sentences from the News Crawl corpora for each language (ex- cept Chinese, where we used a portion of Common Crawl). While ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=643"]}
{"year":"2017","title":"Bloom Filters for ReduceBy, GroupBy and Join in Thrill","authors":["A Noe, DIDMT Bingmann - 2017"],"snippet":"Page 1. Master thesis Bloom Filters for ReduceBy, GroupBy and Join in Thrill Alexander Noe Date: 12. January 2017 Supervisors: Prof. Dr. Peter Sanders Dipl. Inform. Dipl. Math. Timo Bingmann Institute of Theoretical Informatics, Algorithmics Department of Informatics ...","url":["https://pdfs.semanticscholar.org/bf34/8f2819a740ba3a473314b4eab616c421c9e1.pdf"]}
{"year":"2017","title":"Bootstrapping Chatbots for Novel Domains","authors":["P Babkin, MFM Chowdhury, A Gliozzo, M Hirzel… - Workshop at NIPS on …, 2017"],"snippet":"… between the corresponding dense vectors. We used 300-dimensional vectors pre-trained with the GloVe algorithm [19] on the Common Crawl corpus that come with the gensim Python library [20]. By applying this model, for …","url":["https://www.researchgate.net/profile/Avraham_Shinnar/publication/321664993_Bootstrapping_Chatbots_for_Novel_Domains/links/5a29ff24a6fdccfbbf81994a/Bootstrapping-Chatbots-for-Novel-Domains.pdf"]}
{"year":"2017","title":"Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl","authors":["A Panchenko, E Ruppert, S Faralli, SP Ponzetto… - arXiv preprint arXiv: …, 2017"],"snippet":"Abstract: We present DepCC, the largest to date linguistically analyzed corpus in English including 365 million documents, composed of 252 billion tokens and 7.5 billion of named entity occurrences in 14.3 billion sentences from a web-scale crawl of the CommonCrawl","url":["https://arxiv.org/pdf/1710.01779"]}
{"year":"2017","title":"Building Lexical Vector Representations from Concept Definitions","authors":["DS Carvalho, M Le Nguyen"],"snippet":"... This parameter was adjusted using the training set for MEN, or inside each CV fold for the rest. • Both Word2Vec and GloVe were used with pre-trained, 300-dimensional models: 100 billion words GoogleNews corpus and Common Crawl 42 billion token corpus respectively. ...","url":["http://www.aclweb.org/anthology/E/E17/E17-1085.pdf"]}
{"year":"2017","title":"Byte-based Neural Machine Translation","authors":["MR Costa-jussà, C Escolano, JAR Fonollosa - Proceedings of the First Workshop on …, 2017"],"snippet":"... English. For the three language pairs, we used all data parallel data provided in the evaluation. For German-English, we used: europarl v.7, news commentary v.12, common crawl and rapid corpus of EU press re- leases. For ...","url":["http://www.aclweb.org/anthology/W17-4123"]}
{"year":"2017","title":"Can word vectors help corpus linguists?","authors":["G Desagulier - 2017"],"snippet":"… If we follow the distributional hypothesis, this means that the words have similar meanings. The quality of the vector representation is a function of the number 2It is sampled from a matrix of vectors obtained with GloVe (see below) on the basis of the Common Crawl dataset. 6 …","url":["https://halshs.archives-ouvertes.fr/halshs-01657591/document"]}
{"year":"2017","title":"Characterisation of mental health conditions in social media using Informed Deep Learning","authors":["G Gkotsis, A Oellrich, S Velupillai, M Liakata… - Scientific Reports, 2017"],"snippet":"... We considered pre-trained word vectors as input to the classifiers (eg using Glove's Common Crawl containing 840 Billion tokens 17 ), but the results did not improve. We attribute this to the size of our dataset which is adequate for representing the language within the corpus. ...","url":["http://www.nature.com/srep/2017/170322/srep45141/full/srep45141.html"]}
{"year":"2017","title":"Classification of keywords","authors":["I Prémont-schwarz, A Thakur, M Tober - US Patent 9,798,820, 2017"],"snippet":"The resource contents module 320 may automatically acquire a plurality of resources. The resources may, for example, be Wikipedia articles and be acquired from Wikipedia.org, or an open repository of web crawl data such as CommonCrawl.org …","url":["http://www.freepatentsonline.com/9798820.html"]}
{"year":"2017","title":"Classification of search queries","authors":["A Thakur, M Tober - US Patent 9,767,182, 2017"],"snippet":"... The resource contents module 320 may automatically acquire a plurality of resources. The resources may, for example, be Wikipedia articles and be acquired from Wikipedia.org, or an open repository of web crawl data such as CommonCrawl.org. ...","url":["http://www.freepatentsonline.com/9767182.html"]}
{"year":"2017","title":"Classifier Stacking for Native Language Identification","authors":["W Li, L Zou - Bronze Sponsors, 2017"],"snippet":"... Word embeddings We use the Common Crawl (42B tokens, 1.9 M vocab, uncased, 300d vectors) in GloVe (global vectors for word representation)(Pennington et al., 2014) to produce feature vectors for each essay, with the help of GensimRehurek and Sojka, 2010). ...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=410"]}
{"year":"2017","title":"Classifying Phishing URLs Using Recurrent Neural Networks","authors":["AC Bahnsen, EC Bohorquez, S Villegas, J Vargas"],"snippet":"... Sections 1PhishTank (https://www.phishtank.com/) 2Common Crawl (http://commoncrawl.org/) 978-1-5386-2701-3/17/$31.00 c 2017 IEEE Page 2. ... Half of them legitimate and half of them phishing. The legitimate URLs came from Common Crawl, a corpus of web crawl data. ...","url":["http://albahnsen.com/files/Classifying%20Phishing%20URLs%20Using%20Recurrent%20Neural%20Networks_cameraready.pdf"]}
{"year":"2017","title":"Cloud Computing Infrastructure for Data Intensive Applications","authors":["Y Demchenko, F Turkmen, C de Laat, CH Hsu… - Big Data Analytics for …, 2017"],"snippet":"This chapter describes the general architecture and functional components of the cloud-based big data infrastructure (BDI). The chapter starts with the analysis.","url":["https://www.sciencedirect.com/science/article/pii/B9780128093931000027"]}
{"year":"2017","title":"Coattention-Based Neural Network for Question Answering","authors":["J Andress, C Zanoci"],"snippet":"... We then proceed by embedding each word using the GloVe word vectors pretrained on the 840B Common Crawl corpus [6]. We found that switching from the default 100dimensional GloVe vectors to the larger 300-dimensional representation improved the performance ...","url":["https://web.stanford.edu/class/cs224n/reports/2762015.pdf"]}
{"year":"2017","title":"Common Crawl Mining","authors":["T Dean, A Pasha, B Clarke, CJ Butenhoff - 2017"],"snippet":"The main goal behind the Common Crawl Mining system is to improve Eastman Chemical Company's ability to use timely knowledge of public concerns to inform key business decisions. It provides information to Eastman Chemical Company that is valuable for","url":["https://vtechworks.lib.vt.edu/bitstream/handle/10919/77629/ccm_source_code.zip?sequence=5&isAllowed=y"]}
{"year":"2017","title":"Common Crawled Web Corpora: Constructing corpora from large amounts of web data","authors":["KB Kristoffersen - 2017"],"snippet":"… Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens … 3 Exploring the Common Crawl 27 3.1 The data . . . . . 27 3.1.1 A note on scale …","url":["https://www.duo.uio.no/bitstream/handle/10852/57836/Kristoffersen_MSc2.pdf?sequence=5"]}
{"year":"2017","title":"Composition of Compound Nouns Using Distributional Semantics","authors":["K Yee, J Kalita"],"snippet":"... word2vec 300 3,000,000 100.00 bn Google News GloVe 300 400,000 42.00 bn Common Crawl HPCA 200 178,080 1.65 bn enWiki+Reuters +WSJ CW 50 130,000 0.85 bn enWiki+Reuters RCV1 word2vec 500 30,025 100 mn BNC word2vec 500 19,679 120 mn esWiki ...","url":["http://www.cs.uccs.edu/~jkalita/papers/2016/KyraYeeICON2016.pdf"]}
{"year":"2017","title":"Compressed Nonparametric Language Modelling","authors":["E Shareghi, G Haffari, T Cohn"],"snippet":"Page 1. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) 2701 Compressed Nonparametric Language Modelling Ehsan Shareghi,♣ Gholamreza Haffari,♣ Trevor Cohn♠ ♣ Faculty ...","url":["http://static.ijcai.org/proceedings-2017/0376.pdf"]}
{"year":"2017","title":"COMPRESSING WORD EMBEDDINGS VIA DEEP COMPOSITIONAL CODE LEARNING","authors":["R Shu, H Nakayama - arXiv preprint arXiv:1711.01068, 2017","RSH Nakayama"],"snippet":"… purpose. We lowercase and tokenize all texts with the nltk package. We choose the 300-dimensional uncased GloVe word vectors (trained on 42B tokens of Common Crawl data) as our baseline embeddings. The vocabulary …","url":["https://arxiv.org/pdf/1711.01068","https://pdfs.semanticscholar.org/1713/d05f9d5861cac4d5ec73151667cb03a42bfc.pdf"]}
{"year":"2017","title":"Compression with the tudocomp Framework","authors":["P Dinklage, J Fischer, D Köppl, M Löbel, K Sadakane - arXiv preprint arXiv: …, 2017"],"snippet":"Page 1. Compression with the tudocomp Framework Patrick Dinklage1, Johannes Fischer1, Dominik Köppl1, Marvin Löbel1, and Kunihiko Sadakane2 1 Department of Computer Science, TU Dortmund, Germany, pdinklag@gmail ...","url":["https://arxiv.org/pdf/1702.07577"]}
{"year":"2017","title":"Concept/Theme Roll-Up","authors":["T Sahay, R Tadishetti, A Mehta, S Jadon - 2017"],"snippet":"... representation. For word embeddings, we used GloVe trained on a common crawl corpus, containing 1900000 words in its vocabulary. ... phrases. For words, the weights were initialized with GloVe embeddings trained on the common-crawl corpus. ...","url":["https://people.cs.umass.edu/~tsahay/lexalytics_report.pdf"]}
{"year":"2017","title":"ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge","authors":["R Speer, J Lowry-Duda - arXiv preprint arXiv:1704.03560, 2017"],"snippet":"... The first source is the word2vec Google News embeddings2, and the second is the GloVe 1.2 embeddings that were trained on 840 billion tokens of the Common Crawl3. Because the input embeddings are only in En- glish, the vectors in other languages depended en- tirely on ...","url":["https://arxiv.org/pdf/1704.03560"]}
{"year":"2017","title":"CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies","authors":["D Zeman, M Popel, M Straka, J Hajic, J Nivre, F Ginter… - Proceedings of the CoNLL …, 2017"],"snippet":"... Page 3. Raw texts The supporting raw data was gathered from CommonCrawl, which is a publicly available web crawl created and maintained by the non-profit CommonCrawl foundation.2 The data is publicly available in the Amazon cloud both as raw HTML and as plain text. ...","url":["http://www.aclweb.org/anthology/K17-3001"]}
{"year":"2017","title":"Connecting the Dots: Towards Human-Level Grammatical Error Correction","authors":["S Chollampatt, HT Ng - Bronze Sponsors, 2017"],"snippet":"... Moreover, Junczys-Dowmunt and Grundkiewicz (2016) trained a web-scale language model (LM) using large corpora from the Common Crawl data (Buck et al., 2014). ... 2014. N-gram counts and language models from the Common Crawl...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=347"]}
{"year":"2017","title":"Constructing and Evaluating a Novel Crowdsourcing-based Paraphrased Opinion Spam Dataset","authors":["S Kim, S Lee, D Park, J Kang - Proceedings of the 26th International Conference on …, 2017"],"snippet":"... the past and future context of an input are captured (Figure 3). We initialized the input word representations of our LSTM model using publicly available 300dimensional GloVe10 vectors (Pennington et al., 2014), which are trained on 840 billion tokens of Common Crawl data. ...","url":["http://dl.acm.org/citation.cfm?id=3052607"]}
{"year":"2017","title":"Context Similarity for Retrieval-Based Imputation","authors":["A Ahmadov, M Thiele, W Lehner, R Wrembel"],"snippet":"... an accurate imputation. We use Dresden Web Table Corpus (DWTC) which is comprised of more than 125 million web tables extracted from the Common Crawl as our knowledge source. The comprehensive experimental ...","url":["http://asonamdata.com/ASONAM2017_Proceedings/papers/165_1017_135.pdf"]}
{"year":"2017","title":"CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS","authors":["P Juneja, H Jain, T Deshmukh, S Somani, BK Tripathy"],"snippet":"... Gigaword5 + Page 7. International Journal of Software Engineering & Applications (IJSEA), Vol.8, No.4, July 2017 27 Wikipedia2014 which has 6 billion tokens, and on 42 billion tokens of web data, from Common Crawl. For ...","url":["https://www.researchgate.net/profile/BK_Tripathy/publication/318815506_Context_Aware_Clustering_Using_Glove_and_K-Means/links/598037d2458515687b4f9dfd/Context-Aware-Clustering-Using-Glove-and-K-Means.pdf"]}
{"year":"2017","title":"Continuous Learning from Human Post-Edits for Neural Machine Translation","authors":["M Turchi, M Negri, MA Farajian, M Federico - The Prague Bulletin of Mathematical …, 2017"],"snippet":"... For training the En_De NMT system, we merged the Europarl v7 (Koehn, 2005) and Common Crawl datasets released for the translation task at the 2016 Workshop on Statistical Machine Translation (WMT'16 (Bojar, 2016)) and random sampled 3.5 million sentence pairs. ...","url":["https://www.degruyter.com/downloadpdf/j/pralin.2017.108.issue-1/pralin-2017-0023/pralin-2017-0023.xml"]}
{"year":"2017","title":"Convolutional Encoding in Bidirectional Attention Flow for Question Answering","authors":["DR Miller"],"snippet":"... Language Processing (EMNLP), pp. 15321543, 2014. [9] “Common Crawl.” https://commoncrawl.org/. [10] RK Srivastava, K. Greff, and J. Schmidhuber, “Highway networks,” arXiv preprint arXiv:1505.00387, 2015. [11] P. Rajpurkar, J ...","url":["http://web.stanford.edu/class/cs224n/reports/2762032.pdf"]}
{"year":"2017","title":"Cost Weighting for Neural Machine Translation Domain Adaptation","authors":["B Chen, C Cherry, G Foster, S Larkin - ACL 2017, 2017"],"snippet":"... which contains 3003 sentence pairs. The training data contain 12 million sentence pairs, composed of various sub-domains, such as news commentary, Europarl, UN, common crawl web data, etc. In the corpus weighting adaptation ...","url":["http://www.aclweb.org/anthology/W/W17/W17-32.pdf#page=52"]}
{"year":"2017","title":"Counterfactual Learning for Machine Translation: Degeneracies and Solutions","authors":["C Lawrence, P Gajane, S Riezler - arXiv preprint arXiv:1711.08621, 2017"],"snippet":"… signal. Experiments are conducted on two language pairs. The first is German-to-English and its baseline system is trained on the concatenation of the Europarl corpus, the Common Crawl corpus and the News corpus. The","url":["https://arxiv.org/pdf/1711.08621"]}
{"year":"2017","title":"Counterfactual learning from bandit feedback under deterministic logging: A case study in statistical machine translation","authors":["C Lawrence, A Sokolov, S Riezler - arXiv preprint arXiv:1707.09118, 2017"],"snippet":"... We conduct two SMT tasks with hypergraph re-decoding: The first is German-to-English and is trained using a concatenation of the Europarl corpus (Koehn, 2005), the Common Crawl corpus3 and the News Commentary corpus (Koehn and Schroeder, 2007). ...","url":["https://arxiv.org/pdf/1707.09118"]}
{"year":"2017","title":"Critical review of various near-duplicate detection methods in web crawl and their prospective application in drug discovery","authors":["L Pamulaparty, CVG Rao, MS Rao - International Journal of Biomedical Engineering …, 2017"],"snippet":"... Smith, JR, Saint-Amand, H., Plamada, M. and Lopez, A. (2013) 'Dirt cheap web-scale parallel text from the common crawl', Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), Association for Computational Linguistics, Sofia ...","url":["http://www.inderscienceonline.com/doi/abs/10.1504/IJBET.2017.087723"]}
{"year":"2017","title":"CS 224N Assignment 4: Question Answering on SQuAD","authors":["RA Ozturk, HA Inan, K Garbe"],"snippet":"... Fixing this problem partly by using all of the 400k words resulted in increased performance naturally, but there is still a performance gap between our dev performance and test performance. We should also note that 1 uses the Common Crawl dataset (2.2m words). ...","url":["https://web.stanford.edu/class/cs224n/reports/2761126.pdf"]}
{"year":"2017","title":"CS224N Project: Natural Language Inference for Quora Dataset","authors":["KHK Yoo, MM Almajid, ZY Wong"],"snippet":"... Most of the results were obtained using the smaller vocabulary of 6 billion tokens obtained from Wikipedia and Gigaword 5, while there exists a Common Crawl version which has 840B tokens with an embedding size of 300. ...","url":["https://web.stanford.edu/class/cs224n/reports/2755939.pdf"]}
{"year":"2017","title":"CUNI System for the WMT17 Multimodal Translation Task","authors":["J Helcl, J Libovický - arXiv preprint arXiv:1707.04550, 2017"],"snippet":"... By scoring the German part of several parallel corpora (EU Bookshop (Skadinš et al., 2014), News Commentary (Tiedemann, 2012) and CommonCrawl (Smith et al., 2013)), we were only able to retrieve a few hundreds of in-domain sentences. ...","url":["https://arxiv.org/pdf/1707.04550"]}
{"year":"2017","title":"D1. 1: Report on Building Translation Systems for Public Health Domain","authors":["O Bojar, B Haddow, D Marecek, R Sudarikov… - 2017"],"snippet":"Page 1. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644402. D1.1: Report on Building Translation Systems for Public Health Domain ...","url":["http://www.himl.eu/files/D1.1-report-on-building-translation-systems.pdf"]}
{"year":"2017","title":"Data Integration for Open Data on the Web","authors":["S Neumaier, A Polleres, S Steyskal, J Umbrich"],"snippet":"... However, some Web crawls have been made openly available, such as the Common Crawl corpus which contains “petabytes of data collected over the last 7 years”10. ... 10http://commoncrawl.org/, last accessed 30/03/2017 Page 5. Table 1: Top-10 portals, ordered by datasets. ...","url":["https://aic.ai.wu.ac.at/~polleres/publications/neum-etal-RW2017.pdf"]}
{"year":"2017","title":"Data Selection Strategies for Multi-Domain Sentiment Analysis","authors":["S Ruder, P Ghaffari, JG Breslin - arXiv preprint arXiv:1702.02426, 2017"],"snippet":"... a linear SVM classifier (Blitzer et al., 2006). We use GloVe vectors (Pennington et al., 2014) pre-trained on 42B tokens of the Common Crawl corpus7 for our word embeddings. For the auto-encoder representations, we use a ...","url":["https://arxiv.org/pdf/1702.02426"]}
{"year":"2017","title":"DCN+: Mixed Objective and Deep Residual Coattention for Question Answering","authors":["C Xiong, V Zhong, R Socher - arXiv preprint arXiv:1711.00106, 2017"],"snippet":"... Manning et al., 2014). For word embeddings, we use GloVe embeddings pretrained on the 840B Common Crawl corpus (Pennington et al., 2014) as well as character ngram embeddings by Hashimoto et al. (2017). In addition, we ...","url":["https://arxiv.org/pdf/1711.00106"]}
{"year":"2017","title":"Deep Almond: A Deep Learning-based Virtual Assistant","authors":["GCR Ramesh"],"snippet":"... Preprocessing was implemented in Java using CoreNLP [22]. We use the pretrained GloVe [23] vectors of size 300 trained on Common Crawl as our word vectors, and we do not train the word vectors. 5.1 Model Validation & Tuning ...","url":["https://web.stanford.edu/class/cs224n/reports/2748325.pdf"]}
{"year":"2017","title":"Deep Learning for User Comment Moderation","authors":["J Pavlopoulos, P Malakasiotis, I Androutsopoulos - arXiv preprint arXiv:1705.09993, 2017"],"snippet":"... 11See https://nlp.stanford.edu/projects/ glove/. We use 'Common Crawl' (840B tokens). 12For Gazzetta, words encountered only once in the training set (G-TRAIN-L or G-TRAIN-S) are also treated as OOV. ta : accept threshold tr : reject threshold 0.0 1.0 reject gray accept ...","url":["https://arxiv.org/pdf/1705.09993"]}
{"year":"2017","title":"Deep Neural Machine Translation with Linear Associative Unit","authors":["M Wang, Z Lu, J Zhou, Q Liu - arXiv preprint arXiv:1705.00861, 2017","MWZLJ Zhou, Q Liu"],"snippet":"... translation are presented in Table 2. We compare our NMT systems with various other systems including the winning system in WMT14 (Buck et al., 2014), a phrase-based system whose language models were trained on a huge monolingual text, the Common Crawl corpus. ...","url":["http://www.aclweb.org/anthology/P/P17/P17-1013.pdf","https://arxiv.org/pdf/1705.00861"]}
{"year":"2017","title":"Deeper Attention to Abusive User Content Moderation","authors":["J Pavlopoulos, P Malakasiotis, I Androutsopoulos - 2017"],"snippet":"... (2017a). 14We implemented the methods of this sub-section using Keras (keras.io) and TensorFlow (tensorflow.org). 15See https://nlp.stanford.edu/projects/ glove/. We use 'Common Crawl' (840B tokens). Page 6. ta : accept threshold tr : reject threshold 0.0 1.0 reject gray accept ...","url":["http://nlp.cs.aueb.gr/pubs/emnlp2017.pdf"]}
{"year":"2017","title":"DeepSpace: Mood-Based Image Texture Generation for Virtual Reality from Music","authors":["M Sra, P Vijayaraghavan, O Rudovic, P Maes, D Roy - Computer Vision and Pattern …, 2017"],"snippet":"... task. We use the GloVe model trained on a common crawl dataset7 for the representation for words in the descriptive labels and mood. ... This approach of tractably modeling a joint distribution of 7http://commoncrawl.org/the-data/ pixels ...","url":["http://ieeexplore.ieee.org/abstract/document/8015017/"]}
{"year":"2017","title":"Denoising Clinical Notes for Medical Literature Retrieval with Convolutional Neural Model","authors":["L Soldaini, A Yates, N Goharian - 2017"],"snippet":"... Two source of evidence were used to obtain, for each term qi , its word embedding xi : GloVe vectors [10] pre-trained on the common crawl corpus4 and SkipGram vectors pre-trained on PubMed5. We found that concatenating domain-speci c with domain-agnostic embeddings ...","url":["http://ir.cs.georgetown.edu/downloads/cikm17-cds-notes.pdf"]}
{"year":"2017","title":"Deriving Neural Architectures from Sequence and Graph Kernels","authors":["T Lei, W Jin, R Barzilay, T Jaakkola - arXiv preprint arXiv:1705.09037, 2017"],"snippet":"Page 1. Deriving Neural Architectures from Sequence and Graph Kernels Tao Lei* 1 Wengong Jin* 1 Regina Barzilay 1 Tommi Jaakkola 1 Abstract The design of neural architectures for structured objects is typically guided by experimental in- sights rather than a formal process. ...","url":["https://arxiv.org/pdf/1705.09037"]}
{"year":"2017","title":"Determining Entailment of Questions in the Quora Dataset","authors":["A Tung, E Xu"],"snippet":"... We used the 840B common crawl GloVe pretrained embeddings https://nlp.stanford.edu/projects/ glove/, the starter code from CS224N http://web.stanford.edu/class/cs224n/, and tuned the hyper-parameters on these various models to achieve the optimal accuracy. ...","url":["https://web.stanford.edu/class/cs224n/reports/2748301.pdf"]}
{"year":"2017","title":"Distance-Aware Selective Online Query Processing Over Large Distributed Graphs","authors":["X Zhang, L Chen - Data Science and Engineering"],"url":["http://link.springer.com/article/10.1007/s41019-016-0023-z"]}
{"year":"2017","title":"Distinguishing “good” from “bad” Arguments in Online Debates & Feature Analysis using Feed-Forward Neural Networks","authors":["LA Chalaguine"],"snippet":"Page 1. Imperial College London Department of Computing Distinguishing “good” from “bad” Arguments in Online Debates & Feature Analysis using Feed-Forward Neural Networks Lisa Andreevna Chalaguine Supervisor: Claudia Schulz ...","url":["https://pdfs.semanticscholar.org/526f/468ebe630e10221dc77f21ce65aba72e0021.pdf"]}
{"year":"2017","title":"Distributed Algorithms on Exact Personalized PageRank","authors":["T Guo, X Cao, G Cong, J Lu, X Lin"],"snippet":"Page 1. Distributed Algorithms on Exact Personalized PageRank Tao Guo1 Xin Cao2 Gao Cong1 Jiaheng Lu3 Xuemin Lin2 1 School of Computer Science and Engineering, Nanyang Technological University, Singapore 2 School ...","url":["https://www.cs.helsinki.fi/u/jilu/documents/SIGMOD2017.pdf"]}
{"year":"2017","title":"Distributed Computing in Social Media Analytics","authors":["M Riemer - Distributed Computing in Big Data Analytics, 2017"],"snippet":"... For example, [16] the current state of the art Twitter sentiment analysis technique leverages knowledge from a Common Crawl of the internet, Movie Reviews, Emoticons, and a human defined rule logic model to drastically improve the performance of its recurrent neural network ...","url":["https://link.springer.com/chapter/10.1007/978-3-319-59834-5_8"]}
{"year":"2017","title":"Document Context Neural Machine Translation with Memory Networks","authors":["S Maruf, G Haffari - arXiv preprint arXiv:1711.03688, 2017"],"snippet":"Page 1. Document Context Neural Machine Translation with Memory Networks Sameen Maruf and Gholamreza Haffari Faculty of Information Technology, Monash University, VIC, Australia {firstname.lastname}@monash.edu.au Abstract …","url":["https://arxiv.org/pdf/1711.03688"]}
{"year":"2017","title":"Domain Adaptation for Multilingual Neural Machine Translation","authors":["AC Varga - 2017"],"snippet":"Page 1. Universität des Saarlandes Universidad del Pa´ıs Vasco/Euskal Herriko Unibertsitatea Domain Adaptation for Multilingual Neural Machine Translation Master's Thesis submitted in fulfillment of the degree requirements of the ...","url":["https://www.clubs-project.eu/assets/publications/other/MSc_thesis_AdamVarga.pdf"]}
{"year":"2017","title":"Don't Let One Rotten Apple Spoil the Whole Barrel: Towards Automated Detection of Shadowed Domains","authors":["D Liu, Z Li, K Du, H Wang, B Liu, H Duan - 2017"],"snippet":"Page 1. Don't Let One Rotten Apple Spoil the Whole Barrel: Towards Automated Detection of Shadowed Domains Daiping Liu University of Delaware [email protected] Zhou Li ACM Member [email protected] Kun Du Tsinghua University [email protected] ...","url":["https://www.eecis.udel.edu/~dpliu/papers/ccs17.pdf"]}
{"year":"2017","title":"Doubly-Attentive Decoder for Multi-modal Neural Machine Translation","authors":["I Calixto, Q Liu, N Campbell - arXiv preprint arXiv:1702.01287, 2017"],"snippet":"... M sentence pairs (Bojar et al., 2015). These include the Eu- roparl v7 (Koehn, 2005), News Commentary and Common Crawl corpora, which are concatenated and used for pre-training. We use the scripts in the Moses SMT ...","url":["https://arxiv.org/pdf/1702.01287"]}
{"year":"2017","title":"Dynamic Coattention Networks for Reading Comprehension","authors":["H Tepanyan"],"snippet":"... using linear decoder. We use this final version with Common Crawl 840B glove vector embeddings from [2] to achieve the final scores of F1 = 58.2% and EM = 44.5% scores on the dev set. 1 Model 1. Simple Baseline Below we ...","url":["http://web.stanford.edu/class/cs224n/reports/2743745.pdf"]}
{"year":"2017","title":"Dynamic Coattention with Sentence Information","authors":["A Ruch"],"snippet":"... choice for document length. Embeddings: We used the GloVe word embeddings for the Common Crawl 840B dataset and explored initializing the embeddings of unseen words to zero or to a random vector. Intuitively using a ...","url":["https://pdfs.semanticscholar.org/eef0/e42394c625772f5b220797661aba893012f4.pdf"]}
{"year":"2017","title":"Dynamic Data Selection for Neural Machine Translation","authors":["M van der Wees, A Bisazza, C Monz - arXiv preprint arXiv:1708.00712, 2017"],"snippet":"... The WMT training corpus contains Commoncrawl, Europarl, and News Commentary but no in-domain news data. ... We train our systems on a mixture of domains, comprising Commoncrawl, Europarl, News Commentary, EMEA, Movies, and TED. ...","url":["https://arxiv.org/pdf/1708.00712"]}
{"year":"2017","title":"Dynamic Space Efficient Hashing","authors":["T Maier, P Sanders - arXiv preprint arXiv:1705.00997, 2017"],"snippet":"Page 1. Dynamic Space Efficient Hashing Tobias Maier and Peter Sanders Karlsruhe Institute of Technology, Karlsruhe, Germany {t.maier,sanders}@kit.edu Abstract We consider space efficient hash tables that can grow and ...","url":["https://arxiv.org/pdf/1705.00997"]}
{"year":"2017","title":"Effect of Data Imbalance on Unsupervised Domain Adaptation of Part-of-Speech Tagging and Pivot Selection Strategies","authors":["X Cui, F Coenen, D Bollegala - Journal of Machine Learning Research, 2017"],"snippet":"... (2016) to train the final adaptive classifier f only by projected features to reduce the dimensionality, where θx ∈ Rh. We use d = 300 dimensional GloVe (Pennington et al., 2014) embeddings (trained using 42B tokens from the Common Crawl) as word representations. ...","url":["https://cgi.csc.liv.ac.uk/~frans/PostScriptFiles/lidta2017.pdf"]}
{"year":"2017","title":"Energy-Efficient Data Transfer Algorithms for HTTP-Based Services","authors":["T Kosar, I Alan - arXiv preprint arXiv:1707.05730, 2017"],"snippet":"... Three different representative datasets were used during experiments in order to capture the throughput and power consumption differences based on the dataset type: (i) the HTML dataset is a set of raw HTML files from the Common Crawl project [3]; (ii) the image dataset is a ...","url":["https://arxiv.org/pdf/1707.05730"]}
{"year":"2017","title":"Entity linking across vision and language","authors":["AN Venkitasubramanian, T Tuytelaars, MF Moens - Multimedia Tools and …, 2017"],"snippet":"... 5.2 Using a hypernym database The second approach for detecting relevant mentions uses the WebIsADb database [40] containing more than 400 million hypernymy relations extracted from the CommonCrawl web corpus. ...","url":["http://link.springer.com/article/10.1007/s11042-017-4732-8"]}
{"year":"2017","title":"Estimating Missing Temporal Meta-Information using Knowledge-Based-Trust","authors":["Y Oulabi, C Bizer"],"snippet":"... We acquired data from the sources either by manually written crawlers and extractors, or through data dumps. 5.2 Web Table Corpus For our experiments we use the Web Data Commons Web Table Corpus from 20153, which was extracted from the July 2015 Common Crawl...","url":["https://pdfs.semanticscholar.org/9016/87f0f79efd175d6c3b9efba4e254e4bc410a.pdf"]}
{"year":"2017","title":"Evaluating Story Generation Systems Using Automated Linguistic Analyses","authors":["M Roemmele, AS Gordon, R Swanson"],"snippet":"... We specifically used the GloVe embedding vectors [45] trained on the Common Crawl corpus8. We computed the mean cosine similarity of the vectors for all pairs of content words between a generated sentence and its context (Metric 11). ...","url":["http://people.ict.usc.edu/~roemmele/publications/fiction_generation.pdf"]}
{"year":"2017","title":"Evaluating vector-space models of analogy","authors":["D Chen, JC Peterson, TL Griffiths - arXiv preprint arXiv:1705.04416, 2017"],"snippet":"... We used the 300-dimensional word2vec vectors trained on the Google News corpus that were provided by Google (Mikolov et al., 2013), and the 300-dimensional GloVe vectors trained on a Common Crawl web crawl corpus that were provided by Pennington et al. (2014). ...","url":["https://arxiv.org/pdf/1705.04416"]}
{"year":"2017","title":"Evaluation of a Feedback Algorithm inspired by Quantum Detection for Dynamic Search Tasks","authors":["E Di Buccio, M Melucci"],"snippet":"... Polar Domain and the Ebola Do- main. Each dataset is formatted using the Common Crawl Architecture schema from the DARPA MEMEX project, and stored as sequences of CBOR objects. The Ebola dataset refers to the outbreak ...","url":["http://trec.nist.gov/pubs/trec25/papers/UPD_IA-DD.pdf"]}
{"year":"2017","title":"Event Coreference Resolution by Iteratively Unfolding Inter-dependencies among Events","authors":["PK Choubey, R Huang - arXiv preprint arXiv:1707.07344, 2017"],"snippet":"Page 1. Event Coreference Resolution by Iteratively Unfolding Inter-dependencies among Events Prafulla Kumar Choubey and Ruihong Huang Department of Computer Science and Engineering Texas A&M University (prafulla.choubey, huangrh)@tamu.edu Abstract ...","url":["https://arxiv.org/pdf/1707.07344"]}
{"year":"2017","title":"Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis","authors":["R Cotterell, A Poliak, B Van Durme, J Eisner"],"snippet":"... distributional information. The embeddings, trained on extremely large text corpora, eg, Wikipedia and the Common Crawl, are claimed to encode semantic knowledge extracted from large text corpora. While numerous models ...","url":["https://ryancotterell.github.io/papers/cotterell+alb.eacl17.pdf"]}
{"year":"2017","title":"Exploiting Embedding in Content-Based Recommender systems","authors":["Y Huang - 2016"],"snippet":"Page 1. Multimedia Computing Group Exploiting Embedding in Content-Based Recommender Systems Yanbo Huang Master of Science Thesis Page 2. Page 3. Exploiting Embedding in Content-Based Recommender Systems Master of Science Thesis ...","url":["http://repository.tudelft.nl/islandora/object/uuid:cbec7bdd-4bab-4132-93cd-359587b9bf46/datastream/OBJ/view"]}
{"year":"2017","title":"Exploring Neural Transducers for End-to-End Speech Recognition","authors":["E Battenberg, J Chen, R Child, A Coates, Y Gaur, Y Li… - arXiv preprint arXiv: …, 2017"],"snippet":"... available for this benchmark from the Kaldi receipe [20]. The language model used by all models in Table 3 is built from a sample of the common crawl dataset [26]. Model specification. All models in Tables 1 and 3 are tuned ...","url":["https://arxiv.org/pdf/1707.07413"]}
{"year":"2017","title":"Extending the Scope of Co-occurrence Embedding","authors":["J Mi, Y Wang, J Zhu"],"snippet":"... Therefore, it is highly likely that our model ignores the n-grams with strong emotion, simply because they rarely occur in the training data. We expect a boost in the classification accuracy if we could train our model on a more comprehensive dataset, say, common crawl...","url":["https://web.stanford.edu/class/cs224n/reports/2758144.pdf"]}
{"year":"2017","title":"Extracting Conceptual Relationships and Inducing Concept Lattices from Unstructured Text","authors":["VS Anoop, S Asharaf - Journal of Intelligent Systems"],"snippet":"AbstractConcept and relationship extraction from unstructured text data plays a key role in meaning aware computing paradigms, which make computers intelligent by helping them learn, interpret, and synthesis information. These concepts and relationships leverage knowledge ...","url":["https://www.degruyter.com/view/j/jisys.ahead-of-print/jisys-2017-0225/jisys-2017-0225.xml"]}
{"year":"2017","title":"Extracting Parallel Paragraphs from Common Crawl","authors":["J Kúdela, I Holubová, O Bojar - The Prague Bulletin of Mathematical Linguistics, 2017"],"snippet":"Abstract Most of the current methods for mining parallel texts from the web assume that web pages of web sites share same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this","url":["https://www.degruyter.com/downloadpdf/j/pralin.2017.107.issue-1/pralin-2017-0003/pralin-2017-0003.xml"]}
{"year":"2017","title":"Extracting Visual Knowledge from the Web with Multimodal Learning","authors":["D Gong, DZ Wang"],"snippet":"... 5.1 Dataset We evaluate our approach based on a collection of web pages and images derived from the Common Crawl dataset [Smith et al., 2013] that is publicly available on Amazon S3. The entire Common Crawl dataset ...","url":["http://static.ijcai.org/proceedings-2017/0238.pdf"]}
{"year":"2017","title":"Fast Construction of Compressed Web Graphs","authors":["J Broß, S Gog, M Hauck, M Paradies - … on String Processing and Information Retrieval, 2017"],"snippet":"... Table 1). For experiments on a very large graph, we added a web graph originating from the CommonCrawl project. Table 1. ... Graphs are stored as set of adjacency lists. Each list entry occupies 4 bytes (8 bytes in case of CommonCrawl). ...","url":["https://link.springer.com/chapter/10.1007/978-3-319-67428-5_11"]}
{"year":"2017","title":"FBK's Participation to the English-to-German News Translation Task of WMT 2017","authors":["MA Di Gangi, N Bertoldi, M Federico - WMT 2017, 2017"],"snippet":"... Number of training sentences. original cleaned commoncrawl 2399123 2228833 europarl-v7 1920209 1719859 news-comm-v12 270769 255944 rapid2016 1329041 1277997 both English and German. We also filtered out ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=295"]}
{"year":"2017","title":"FilteredWeb: A Framework for the Automated Search-Based Discovery of Blocked URLs","authors":["A Darer, O Farnan, J Wright - arXiv preprint arXiv:1704.07185, 2017"],"snippet":"... Web search is a large and complicated business; most engines do not simply rank pages based on hyperlinks, but rather current trends and activity. One alternative to Bing is Common Crawl – an open data project that scrapes the web for pages. ...","url":["https://arxiv.org/pdf/1704.07185"]}
{"year":"2017","title":"Findings of the 2017 conference on machine translation (wmt17)","authors":["O Bojar, R Chatterjee, C Federmann, Y Graham… - WMT 2017, 2017"],"snippet":"... Some training corpora were identical from last year (Europarl4, Common Crawl, SETIMES2, Russian-English parallel data provided by Yandex, Wikipedia Headlines provided by CMU) and some were updated (United Nations, CzEng v1. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=193"]}
{"year":"2017","title":"Findings of the WMT 2017 Biomedical Translation Shared Task","authors":["A Jimeno Yepes, A Neveol, M Neves, K Verspoor…","AJ Yepes, A Névéol, M Neves, HPIU Potsdam… - WMT 2017, 2017"],"snippet":"... used. Tuning of the SMT systems was performed with MERT. Commoncrawl and Wikipedia were used as general domain data for all language pairs ex- cept for EN/PT, where no Commoncrawl data was provided by WMT. As ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=258","https://www.research.ed.ac.uk/portal/files/40797681/123_1.pdf"]}
{"year":"2017","title":"From Segmentation to Analyses: A Probabilistic Model for Unsupervised Morphology Induction","authors":["T Bergmanis, S Goldwater"],"snippet":"... by our system and the MorphoChains baseline, we used word2vec (Mikolov et al., 2013) to train a Continuous Bag of Words model on a sub-sample of the Common Crawl (CC) corpus6 for ... 6Common Crawl http://commoncrawl.org 7Morpho Challenge 2010: http://research.ics. ...","url":["http://homepages.inf.ed.ac.uk/sgwater/papers/eacl17-morphAnalyses.pdf"]}
{"year":"2017","title":"Game, Set, Match-LSTM: Question Answering on SQUaD","authors":["I Torres, E Ehizokhale"],"snippet":"... The official test dataset is not publically available. It's kept by the authors of SQuAD to make model evaluation fair. We use GloVE word vectors trained on the 840B Common Crawl corpus. We limit the max context paragraph length to 300 and the max question length to 30. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761956.pdf"]}
{"year":"2017","title":"Geographical Evaluation of Word Embeddings","authors":["M Konkol, T Brychcín, M Nykl, T Hercig - Proceedings of the Eighth International Joint …, 2017"],"snippet":"We use two models provided by the authors of the model trained on Wikipedia and News Crawl (LexVec - w + nc), and Common Crawl (LexVec - cc). MetaEmbeddings is an ensemble method that combines several embeddings (Yin and Schütze, 2016) …","url":["http://www.aclweb.org/anthology/I17-1023"]}
{"year":"2017","title":"Global-Context Neural Machine Translation through Target-Side Attentive Residual Connections","authors":["L Miculicich, N Pappas, D Ram, A Popescu-Belis - arXiv preprint arXiv:1709.04849, 2017"],"snippet":"... Finally, we use the complete English- to-German set from WMT 2016 (Bojar and others 2016)3 which includes Europarl v7, Common Crawl, and News Commentary v11 with a total of ca. 4.5 million sentence pairs. ... N-gram counts and language models from the common crawl...","url":["https://arxiv.org/pdf/1709.04849"]}
{"year":"2017","title":"Globally Normalized Reader","authors":["J Raiman, J Miller - Proceedings of the 2017 Conference on Empirical …, 2017"],"snippet":"... tion. The hidden dimension of all recurrent layers is 200. We use the 300 dimensional 8.4B token Common Crawl GloVe vectors (Pennington et al., 2014). Words missing from the Common Crawl vocabulary are set to zero. In ...","url":["http://www.aclweb.org/anthology/D17-1112"]}
{"year":"2017","title":"Googleology as smart lexicography: Big messy data for better regional labels","authors":["S Dollinger - Dictionaries: Journal of the Dictionary Society of North …, 2016"],"snippet":"... help. Other services have other problems. Commoncrawl.org is one of the longest-running such projects and offers big data for free. Accessing its data, however, requires serious programming expertise. Other ...","url":["https://muse.jhu.edu/article/645766/summary"]}
{"year":"2017","title":"Grammatical error correction in non-native English","authors":["Z Yuan - 2017"],"snippet":"Technical Report Number 904 Computer Laboratory UCAM-CL-TR-904 ISSN 1476-2986 Grammatical error correction in non-native English Zheng Yuan March 2017 15 JJ Thomson Avenue Cambridge CB3 0FD United Kingdom phone +44 1223 763500 ...","url":["http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-904.pdf"]}
{"year":"2017","title":"Handling Homographs in Neural Machine Translation","authors":["F Liu, H Lu, G Neubig - arXiv preprint arXiv:1708.06510, 2017"],"snippet":"... side. For German and French, we use a combination of Europarl v7, Common Crawl, and News Commentary as training set. For development set, newstest2013 is used for German and newstest2012 is used for French. For ...","url":["https://arxiv.org/pdf/1708.06510"]}
{"year":"2017","title":"HCTI at SemEval-2017 Task 1: Use convolutional neural network to evaluate Semantic Textual Similarity","authors":["S Yang"],"snippet":"... 1) All punctuations are removed. 2) All words are lower-cased. 3) All sentences are tokenized by Natural Language Toolkit (NLTK) (Bird et al., 2009). 4) All words are replaced by pre-trained GloVe word vectors (Common Crawl, 840B tokens) (Pennington et al., 2014). ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval016.pdf"]}
{"year":"2017","title":"Hungarian Layer: Logics Empowered Neural Architecture","authors":["H Xiao, L Meng - arXiv preprint arXiv:1712.02555, 2017"],"snippet":"… for illustration. 4.1. Experimental Setting We initialize the word embedding with 300-dimensional GloVe (Pennington et al., 2014) word vectors pre-trained in the 840B Common Crawl corpus (Pennington et al., 2014). For the …","url":["https://arxiv.org/pdf/1712.02555"]}
{"year":"2017","title":"Hunter MT: A Course for Young Researchers in WMT17","authors":["J Xu, YZ Kuang, S Baijoo, H Lee, U Shahzad, M Ahmed… - WMT 2017, 2017"],"snippet":"... ing and tuning, including Europarl v7, News Commentary v12, Rapid Corpus of EU press releases, and parts of the Common Crawl corpus ... Rapid News Test 2016 2 German-English News 33.61 News Test 2016 3 English-Czech News 13.59 Europarl, CommonCrawl, News' 12 ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=446"]}
{"year":"2017","title":"IIIT-H at IJCNLP-2017 Task 4: Customer Feedback Analysis using Machine Learning and Neural Network Approaches","authors":["P Danda, P Mishra, S Kanneganti, S Lanka - … of the IJCNLP 2017, Shared Tasks, 2017"],"snippet":"… We used glove pre-trained embeddings1 (Pennington et al., 2014) for English while for the rest 1we used Common Crawl corpus with 840B tokens, 2.2M vocab, case-sensitive, 300-dimensional vectors available on https://nlp.stanford.edu/projects/glove/ 155 Page 2 …","url":["http://www.aclweb.org/anthology/I17-4026"]}
{"year":"2017","title":"IITP at EmoInt-2017: Measuring Intensity of Emotions using Sentence Embeddings and Optimized Features","authors":["MS Akhtar, P Sawant, A Ekbal, J Pawar… - EMNLP 2017, 2017"],"snippet":"... For this task, we use GloVe (Pennington et al., 2014) pre-trained word embedding trained on common crawl corpus. ... The choice of common crawl word embeddings for Twitter datasets is because of the normalization steps (Section 2.1). ...","url":["http://www.aclweb.org/anthology/W/W17/W17-52.pdf#page=228"]}
{"year":"2017","title":"IITP at SemEval-2017 Task 5: An Ensemble of Deep Learning and Feature Based Models for Financial Sentiment Analysis","authors":["D Ghosal, S Bhatnagar, MS Akhtar, A Ekbal…"],"snippet":"... billion and 400 million tweets respectively. For news headline we used GloVe common crawl model trained on 802 billion words and Word2Vec Google News model (Mikolov et al., 2013). We experimented with 200, 300 and ...","url":["https://www.aclweb.org/anthology/S/S17/S17-2154.pdf"]}
{"year":"2017","title":"Implementation and Analysis of Match-LSTM for SQuAD","authors":["M Graczyk"],"snippet":"... We used the Common Crawl 840B Glove vectors with 300 dimensions as our input embedding, and like the paper we did not train these vectors. Finally, we used softsign instead of tanh for a substantial improvement in training performance with no obvious change in metrics. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761882.pdf"]}
{"year":"2017","title":"Implementation and Improvement of Match-LSTM in Question-Answering System","authors":["Y Zhang, H Peng"],"snippet":"... To initialize the word vector embeddings, we used the GloVe word embeddings of dimensionality o = 300 and vocabulary size of 2.2M that have been pre-trained on Common Crawl. We did not train the word embeddings, since our dataset is not very large. ...","url":["https://web.stanford.edu/class/cs224n/reports/2748656.pdf"]}
{"year":"2017","title":"Improving Machine Translation Quality Estimation with Neural Network Features","authors":["Z Chen, Y Tan, C Zhang, Q Xiang, L Zhang, M Li… - WMT 2017, 2017"],"snippet":"... To train the word embedding and the RNNLM, the source side and the target side of the bilingual parallel corpus for the translation task, publicly re- leased by the WMT evaluation campaign, are used; they include Europarl v7, Common Crawl corpus, News Commentary v8 and ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=575"]}
{"year":"2017","title":"Improving the Compositionality of Word Embeddings","authors":["MJ Scheepers - 2017"],"snippet":"… 2Real, natural and imaginary numbers are represented in computers as floating point numbers [49], which are not always exact but more often really close estimations of these numbers. 3The Common Crawl dataset can be found at: http://commoncrawl.org. Page 10. 4 …","url":["https://thijs.ai/papers/scheepers-msc-thesis-2017-improving-compositionality-word-embeddings.pdf"]}
{"year":"2017","title":"Induction of Latent Domains in Heterogeneous Corpora: A Case Study of Word Alignment","authors":["H Cuong, K Sima'an"],"snippet":"... resulting SMT systems. Going beyond the findings, we surmise that virtually any large corpus (eg Europarl, Hansards, Common Crawl) harbors an arbitrary diversity of hidden domains, unknown in advance. We address the ...","url":["https://staff.fnwi.uva.nl/c.hoang/mt20171.pdf"]}
{"year":"2017","title":"Inductive Representation Learning on Large Graphs","authors":["WL Hamilton, R Ying, J Leskovec - arXiv preprint arXiv:1706.02216, 2017"],"snippet":"... For features, we use off-the-shelf 300-dimensional GloVe CommonCrawl word vectors [25]; for each post, we concatenated (i) the average embedding of the post title, (ii) the average embedding of all the post's comments (iii) the post's score, and (iv) the number of comments ...","url":["https://arxiv.org/pdf/1706.02216"]}
{"year":"2017","title":"Information Extraction meets the Semantic Web: A Survey","authors":["JL Martinez-Rodriguez, A Hogan, I Lopez-Arevalo"],"snippet":"Web [234]. Mika referred to this as the semantic gap [193], whereby the demand for structured data on the Web outstrips its supply. For example, in analysis of the 2013 Common Crawl dataset, Meusel et al. [189] found that …","url":["http://www.semantic-web-journal.net/system/files/swj1744.pdf"]}
{"year":"2017","title":"Integrating Knowledge from Latent and Explicit Features for Triple Scoring","authors":["LW Chen, B Mangipudi, J Bandlamudi, R Sehgal"],"snippet":"... ford NLP Group. We use word embeddings of size 300 dimensions, which were pre-trained on the Common Crawl corpus 2. We integrate the learned vector representations of GloVe for nationality and profession. In a nutshell ...","url":["http://www.uni-weimar.de/medien/webis/events/wsdm-cup-17/wsdmcup17-papers-final/wsdmcup17-triple-scoring/chen17-notebook.pdf"]}
{"year":"2017","title":"Interstitial Content Detection","authors":["E Lucas - arXiv preprint arXiv:1708.04879, 2017"],"snippet":"... 'http://servo.org'. [7] I. Kreymer. Announcing the common crawl index, April 2015. 'http://commoncrawl. org/2015/04/announcing-the-common-crawl-index/'. [8] S. Rasheed, A. Naeem, and O. Ishaq. Automated number plate recognition using hough lines and template ...","url":["https://arxiv.org/pdf/1708.04879"]}
{"year":"2017","title":"ISSUES IN HUMAN AND AUTOMATIC TRANSLATION QUALITY ASSESSMENT","authors":["S Doherty"],"snippet":"... 2013; Bojar et al. 2014). These campaigns typically involve the provision of existing corpora (eg Europarl, News Commentary, Common Crawl, Gigaword, Wiki Headlines, and the UN Corpus) as well as WMT-commissioned translations with Page 6. 6 ...","url":["https://www.researchgate.net/profile/Stephen_Doherty3/publication/314261771_Issues_in_human_and_automatic_translation_quality_assessment/links/58bea025458515dcd28defdd/Issues-in-human-and-automatic-translation-quality-assessment.pdf"]}
{"year":"2017","title":"Iterative Attention Network for Question Answering","authors":["T Henighan"],"snippet":"... 3.6 future studies In this work only the 100-dimensional GloVe vectors trained on 6 billion tokens was used. It may be useful to try the 300-dimensional vectors trained over a 840 billion common-crawl corpus, which 6 Page 7. ...","url":["http://www.tomhenighan.com/pdfs/iterative-attention-network.pdf"]}
{"year":"2017","title":"Joint Learning of Structural and Textual Features for Web Scale Event Extraction","authors":["J Wiedmann - 2017"],"snippet":"... In a second expansion step, this seed data set is further extended automatically by identifying single event pages in the Common Crawl, a repository of crawled web data, based on Microdata annotations and the annotations derived from the seed data. ...","url":["http://www.cs.ox.ac.uk/files/8846/aaai17-wiedmann-eventextraction.pdf"]}
{"year":"2017","title":"Joint Training for Pivot-based Neural Machine Translation","authors":["Y Cheng, Q Yang, Y Liu, M Sun, W Xu"],"snippet":"... sets. The evaluation metric is case-insensitive BLEU [Papineni et al., 2002] as calculated by the multi-bleu.perl script. The WMT corpus is composed of the Common Crawl, News Commentary, Europarl v7 and UN corpora. The ...","url":["http://nlp.csai.tsinghua.edu.cn/~ly/papers/ijcai2017_cy.pdf"]}
{"year":"2017","title":"Killing Two Birds with One Stone: Malicious Domain Detection with High Accuracy and Coverage","authors":["I Khalil, B Guan, M Nabeel, T Yu - arXiv preprint arXiv:1711.00300, 2017"],"snippet":"Killing Two Birds with One Stone: Malicious Domain Detection with High Accuracy and Coverage Issa Khalil, Bei Guan, Mohamed Nabeel, Ting Yu Qatar Computing Research Institute {ikhalil,bguan,mnabeel,tyu}@hbku.edu.qa ...","url":["https://arxiv.org/pdf/1711.00300"]}
{"year":"2017","title":"LDOW2017: 10th Workshop on Linked Data on the Web","authors":["J Lehmann, S Auer, S Capadisli, K Janowicz, C Bizer… - Proceedings of the 26th …, 2017"],"snippet":"... Wikidata, a collaborative knowledge-base designed to complement Wikipedia; LinkedGeoData, providing a structureddata export from OpenStreetMap; and Web Data Commons, collecting embedded meta-data extracted from billions of webpages found in the Common Crawl ...","url":["http://aidanhogan.com/docs/ldow2017.pdf"]}
{"year":"2017","title":"Learned in Translation: Contextualized Word Vectors","authors":["B McCann, J Bradbury, C Xiong, R Socher - arXiv preprint arXiv:1708.00107, 2017"],"snippet":"... When training an MT-LSTM, we used fixed 300-dimensional word vectors. We used the CommonCrawl-840B GloVe model for English word vectors, which were completely fixed during training, so that the MT-LSTM had to learn how to use the pretrained vectors for translation. ...","url":["https://arxiv.org/pdf/1708.00107"]}
{"year":"2017","title":"Learning bilingual word embeddings with (almost) no bilingual data","authors":["MAGLE Agirre"],"snippet":"... and Italian. Given that Finnish is not in- cluded in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 20164 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014). In ...","url":["http://www.aclweb.org/anthology/P/P17/P17-1042.pdf"]}
{"year":"2017","title":"Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext","authors":["J Wieting, J Mallinson, K Gimpel - arXiv preprint arXiv:1706.01847, 2017"],"snippet":"... EN, FREN, and DEEN, respectively). The training data included: Eu- roparl v7 (Koehn, 2005), the Common Crawl corpus, the UN corpus (Eisele and Chen, 2010), News Commentary v10, the 109 French-English corpus, ...","url":["https://arxiv.org/pdf/1706.01847"]}
{"year":"2017","title":"Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings","authors":["G Collell, T Zhang, MF Moens - arXiv preprint arXiv:1703.08737, 2017"],"snippet":"... 4 Experimental setup 4.1 Word embeddings We use 300-dimensional GloVe1 vectors [19] pre-trained on the Common Crawl corpus consisting of 840B tokens and a 2.2M words vocabulary. 4.2 Visual data and features We use ImageNet [17] as our source of labeled images. ...","url":["https://arxiv.org/pdf/1703.08737"]}
{"year":"2017","title":"Learning to select data for transfer learning with Bayesian Optimization","authors":["S Ruder, B Plank - arXiv preprint arXiv:1707.05246, 2017"],"snippet":"... We train an LDA model (Blei et al., 2003) with 50 topics and 10 iterations for topic distribution-based representations and use GloVe embeddings (Pennington et al., 2014) trained on 42B tokens of Common Crawl data6 for word embedding-based representations. ...","url":["https://arxiv.org/pdf/1707.05246"]}
{"year":"2017","title":"Length, Interchangeability, and External Knowledge: Observations from Predicting Argument Convincingness","authors":["P Potash, R Bhattacharya, A Rumshisky"],"snippet":"... and create the appropriate representation. For the embedding representation, we use GloVe (Pennington et al., 2014) 300 dimensions learned from the Common Crawl corpus with 840 billion tokens. Our Wikipedia data is from ...","url":["https://pdfs.semanticscholar.org/9785/f21ac0b33689dc3ae711a94383eda01785e9.pdf"]}
{"year":"2017","title":"Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search","authors":["C Hokamp, Q Liu - arXiv preprint arXiv:1704.07138, 2017"],"snippet":"Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search Chris Hokamp ADAPT Centre Dublin City University [email protected] Qun Liu ADAPT Centre Dublin City University [email protected] Abstract ...","url":["https://arxiv.org/pdf/1704.07138"]}
{"year":"2017","title":"LIG-CRIStAL Submission for the WMT 2017 Automatic Post-Editing Task","authors":["A Berard, L Besacier, O Pietquin - Proceedings of the Second Conference on …, 2017"],"snippet":"... To mitigate this, we decided to limit our use of external data to monolingual English (commoncrawl). ... PE side Similarly to Junczys-Dowmunt and Grundkiewicz (2016) we first performed a coarse filtering of well-formed sentences of commoncrawl...","url":["http://www.aclweb.org/anthology/W17-4772"]}
{"year":"2017","title":"LIG-CRIStAL System for the WMT17 Automatic Post-Editing Task","authors":["A Berard, O Pietquin, L Besacier - arXiv preprint arXiv:1707.05118, 2017"],"snippet":"... To mitigate this, we decided to limit our use of external data to monolingual English (commoncrawl). ... PE side Similarly to Junczys-Dowmunt and Grundkiewicz (2016) we first performed a coarse filtering of well-formed sentences of commoncrawl...","url":["https://arxiv.org/pdf/1707.05118"]}
{"year":"2017","title":"LIMSI submission for WMT'17 shared task on bandit learning","authors":["G Wisniewski - WMT 2017, 2017"],"snippet":"... At the end, our monolingual corpus contain 193292548 sentences. The translation model is estimated from the CommonCrawl, NewsCo, Europarl and Rapid corpora, resulting in a parallel corpus made of 5919142 sentences. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=698"]}
{"year":"2017","title":"Linked Data is People: Building a Knowledge Graph to Reshape the Library Staff Directory","authors":["JA Clark, SWH Young"],"snippet":"... RDFa and JSON-LD are two other syntax encodings that enable structured data in HTML pages. In their analysis of structured data in the Common Crawl dataset, Bizer et al. (2013) note that the growth of machine-readable descriptions of web content continues to grow. ...","url":["http://journal.code4lib.org/articles/12320"]}
{"year":"2017","title":"LMU Munich's Neural Machine Translation Systems for News Articles and Health Information Texts","authors":["M Huck, F Braune, A Fraser"],"snippet":"... 1. Adding the News Commentary (NC) and Common Crawl (CC) parallel training data as provided for WMT17 by the organizers of the news translation shared task. We initialize the optimization on the larger corpus with the Europarl-trained baseline model. ...","url":["http://www.cis.uni-muenchen.de/~fraser/pubs/huck_wmt2017_system.pdf"]}
{"year":"2017","title":"Lump at SemEval-2017 Task 1: Towards an Interlingua Semantic Similarity","authors":["C España-Bonet, A Barrón-Cedeño - Proceedings of the 11th International Workshop …, 2017"],"snippet":"... 2http://commoncrawl.org/ 3http://www.casmacat.eu/corpus/ news-commentary.html 4https://sites.google.com/site/ iwsltevaluation2016/mt-track/ 5We built a version of the lemma translator with an extra language: Babel synsets (cf. ...","url":["http://www.aclweb.org/anthology/S17-2019"]}
{"year":"2017","title":"Machine Comprehension Using Multi-Perspective Context Matching and Co-Attention","authors":["A Bajenov, T Gupta"],"snippet":"... Embedding Layer Trainable or fixed embeddings Dropout post-embedding layer Gigaword (6B) or Common Crawl (840B) corpus Architecture Choices Number of layers Type of Layers Representation Sizes embedding size (100-300) lstm units (100-200) perspective units ...","url":["http://web.stanford.edu/class/cs224n/reports/2758309.pdf"]}
{"year":"2017","title":"Machine Comprehension with MMLSTM and Clustering","authors":["T Romero, Z Barnes, F Cipollone"],"snippet":"... 1.2 Data Our model is trained on the SQuAD dataset [1]. We split the SQuAD dataset up into 82k training questions, 5k validation questions, and 10k dev questions. The set of test questions is withheld. We use 300d Common Crawl GloVe [2] vectors for our word embeddings. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761209.pdf"]}
{"year":"2017","title":"Machine Question and Answering","authors":["J Chang, M Jiang, D Le"],"snippet":"... 5 Page 6. were also initialized with 300-dimensional GloVe word vectors from the 840B Common Crawl corpus (Pennington et al., 2014). The above plots illustrate the cross entropy loss (left) and F1 score (right) vs epoch and clearly show overfitting. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761996.pdf"]}
{"year":"2017","title":"Machine Translation Evaluation with Neural Networks","authors":["F Guzmán, S Joty, L Màrquez, P Nakov - Computer Speech & Language, 2016"],"snippet":"We present a framework for machine translation evaluation using neural networks in a pairwise setting, where the goal is to select the better translation from a.","url":["http://www.sciencedirect.com/science/article/pii/S0885230816301693"]}
{"year":"2017","title":"Machine Translation: Phrase-Based, Rule-Based and Neural Approaches with Linguistic Evaluation","authors":["V Macketanz, E Avramidis, A Burchardt, J Helcl… - Cybernetics and Information …, 2017"],"snippet":"... difference). The generic parallel training data (Europarl [18], News Commentary, MultiUN [19], Commoncrawl [20]) are augmented with domain-specific data from the IT domain (Libreoffice, Ubuntu, Chromium Browser [21]). ...","url":["https://www.degruyter.com/downloadpdf/j/cait.2017.17.issue-2/cait-2017-0014/cait-2017-0014.xml"]}
{"year":"2017","title":"Massive Exploration of Neural Machine Translation Architectures","authors":["D Britz, A Goldie, T Luong, Q Le - arXiv preprint arXiv:1703.03906, 2017"],"snippet":"... 3 Experimental Setup 3.1 Datasets and Preprocessing We run all experiments on the WMT'15 English→German task consisting of 4.5M sentence pairs, obtained by combining the Europarl v7, News Commentary v10, and Common Crawl corpora. ...","url":["https://arxiv.org/pdf/1703.03906"]}
{"year":"2017","title":"Matching Web Tables To DBpedia-A Feature Utility Study","authors":["D Ritze, C Bizer - context, 2017"],"snippet":"... has been extracted from the CommonCrawl web corpus3. ... 3http://commoncrawl.org/ Table 3 shows the results of the correlation analysis for the property and instance similarity matrices regarding precision, e.g. PPstdev, and recall, e.g. RPstdev. ...","url":["https://openproceedings.org/2017/conf/edbt/paper-148.pdf"]}
{"year":"2017","title":"Methods of sentence extraction, abstraction and ordering for automatic text summarization","authors":["MT Nayeem - 2017"],"snippet":"METHODS OF SENTENCE EXTRACTION, ABSTRACTION AND ORDERING FOR AUTOMATIC TEXT SUMMARIZATION MIR TAFSEER NAYEEM Bachelor of Science, Islamic University of Technology, 2011 A Thesis …","url":["https://www.uleth.ca/dspace/bitstream/handle/10133/4993/NAYEEM_MIR_TAFSEER_MSC_2017.pdf?sequence=1"]}
{"year":"2017","title":"Modeling Target-Side Inflection in Neural Machine Translation","authors":["A Tamchyna, MWD Marco, A Fraser - arXiv preprint arXiv:1707.06012, 2017"],"snippet":"Modeling Target-Side Inflection in Neural Machine Translation Aleš Tamchyna1,2 and Marion Weller-Di Marco1,3 and Alexander Fraser1 1LMU Munich, 2Memsource, 3University of Stuttgart ales.tamchyna@memsource ...","url":["https://arxiv.org/pdf/1707.06012"]}
{"year":"2017","title":"Modeling the Dynamic Framing of Controversial Topics in Online Communities","authors":["J Mendelsohn"],"snippet":"... After preprocessing, each post is a bag-of-words of variable length: p (n) ij = [w1,w2, ..., wl]. Each word in a post is represented by its 300-dimensional GloVe vector, trained on Common Crawl data (CITE GLOVE). Posts are then represented as the average of each word's vector. ...","url":["http://web.stanford.edu/class/cs224n/reports/2761128.pdf"]}
{"year":"2017","title":"Monotasks: Architecting for Performance Clarity in Data Analytics Frameworks","authors":["K Ousterhout, C Canel, S Ratnasamy, S Shenker - 2017"],"snippet":"Monotasks: Architecting for Performance Clarity in Data Analytics Frameworks Kay Ousterhout UC Berkeley Christopher Canel Carnegie Mellon University Sylvia Ratnasamy UC Berkeley Scott Shenker UC Berkeley, ICSI ...","url":["http://kayousterhout.org/publications/sosp17-final183.pdf"]}
{"year":"2017","title":"Multi-channel Encoder for Neural Machine Translation","authors":["H Xiong, Z He, X Hu, H Wu - arXiv preprint arXiv:1712.02109, 2017"],"snippet":"WMT'14 English-French. We use the full WMT' 14 parallel corpus as our training data. The detailed data sets are Europarl v7, Common Crawl, UN, News Commentary, Gi- gaword. In total, it includes 36 million sentence pairs …","url":["https://arxiv.org/pdf/1712.02109"]}
{"year":"2017","title":"Multi-Domain Neural Machine Translation through Unsupervised Adaptation","authors":["MA Farajian, M Turchi, M Negri, M Federico"],"snippet":"... PHP, Ubuntu, and translated UN documents (UN-TM).2 Since the size of these corpora is relatively small for training robust MT systems, in particular NMT solutions, we added the News Commentary data from WMT'133(WMT nc), as well as the CommonCrawl (CommonC.) and ...","url":["https://hermessvn.fbk.eu/svn/hermes/open/federico/papers/Amin_et.al-wmt2017.pdf"]}
{"year":"2017","title":"Multimodal Learning for Web Information Extraction","authors":["D Gong, DZ Wang, Y Peng - ACM International Conference​​ on​​ Multimedia, 2017"],"snippet":"… Collecting image corpus. The image corpus is not included in the Common Crawl data [25] where we derived text corpus … 5.1.2 Corpus. We derive our text and image corpus based on the Common Crawl dataset [25] that is publicly available on Amazon S3 …","url":["https://pdfs.semanticscholar.org/d2a5/815007832255a033759d25d771157ae9be16.pdf"]}
{"year":"2017","title":"Multimodal sentiment analysis with word-level fusion and reinforcement learning","authors":["M Chen, S Wang, PP Liang, T Baltrušaitis, A Zadeh… - Proceedings of the 19th …, 2017"],"snippet":"… For text inputs, we use pre-trained word embeddings (glove.840B.300d) [19] to convert the transcripts of videos in the CMU-MOSI dataset into word vectors. This is a 300 dimensional word embedding trained on 840 billion tokens from the common crawl dataset …","url":["http://dl.acm.org/citation.cfm?id=3136755.3136801"]}
{"year":"2017","title":"Multiple Turn Comprehension for the Bi-Directional Attention Flow Model","authors":["T Liu"],"snippet":"... The word embedding layer converts each word in the context and question into a dense vector word representation. We use the pre-trained GloVe (Pennington et al., 2014) vectors for this layer, in particular the Common Crawl 840B tokens, 300d vectors. ...","url":["http://web.stanford.edu/class/cs224n/reports/2761890.pdf"]}
{"year":"2017","title":"Named Entity Recognition in Twitter using Images and Text","authors":["D Esteves, R Peres, J Lehmann, G Napolitano"],"snippet":"... A disadvantage when using web search engines is that they are not open and free. This can be circumvented by indexing and searching on other large sources of information, such as Common Crawl and Flickr11. ... 11 http://commoncrawl.org/ and https://www.flickr.com/ Page 7. 7 ...","url":["https://www.researchgate.net/profile/Diego_Esteves/publication/317721565_Named_Entity_Recognition_in_Twitter_using_Images_and_Text/links/594a85dda6fdcc89090cb5f5/Named-Entity-Recognition-in-Twitter-using-Images-and-Text.pdf"]}
{"year":"2017","title":"Native Language Identification from i-vectors and Speech Transcriptions","authors":["B Ulmer, A Zhao, N Walsh"],"snippet":"... word (Pennington et al., 2014). The GloVe embeddings of words came from the Common Crawl 42B to- kens collection, and the 300 dimensional embeddings were used (Pennington et al., 2014). If no corresponding GloVe ...","url":["http://web.stanford.edu/class/cs224s/reports/Ben_Ulmer.pdf"]}
{"year":"2017","title":"Natural Language Question-Answering using Deep Learning","authors":["B Liu, F Lyu, R Roy"],"snippet":"... We experimented with both fixed 193 CommonCrawl.840B.300d pretrained word vectors and GLoVE.6B.100d pretrained word 194 vectors (Pennington, Socher, & Manning, 2015) 195 We enforce a fixed question length of 22 words, and fixed context length of 300 words. ...","url":["https://pdfs.semanticscholar.org/505a/ed7c751eb57bf5e59ab1cedc49448376b7d5.pdf"]}
{"year":"2017","title":"Neural Lie Detection with the CSC Deceptive Speech Dataset","authors":["S Desai, M Siegelman, Z Maurer"],"snippet":"... Each acoustic feature frame was 34 dimensional and each speaker-dependent frame was 68 dimensional. Lexical features were encoded using GloVe Wikipedia and CommonCrawl 100-dimensional embeddings[9] based on the transcripts provided with the dataset. ...","url":["http://web.stanford.edu/class/cs224s/reports/Shloka_Desai.pdf"]}
{"year":"2017","title":"Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search","authors":["L Dahlmann, E Matusov, P Petrushkov, S Khadivi - arXiv preprint arXiv:1708.03271, 2017"],"snippet":"... For development and test sets, two reference translations are used. The German→English system is trained on parallel corpora provided for the constrained WMT 2017 evaluation (Europarl, Common Crawl, and others). We ...","url":["https://arxiv.org/pdf/1708.03271"]}
{"year":"2017","title":"Neural Machine Translation Training in a Multi-Domain Scenario","authors":["H Sajjad, N Durrani, F Dalvi, Y Belinkov, S Vogel - arXiv preprint arXiv:1708.08712, 2017","HSNDF Dalvi, Y Belinkov, S Vogel"],"snippet":"... For German-English, we use the Europarl (EP), and the Common Crawl (CC) corpora made available for the 1st Conference on Statistical Machine Translation2 as out- of-domain corpus. ... EP = Europarl, CC = Common Crawl, UN = United Nations. ...","url":["https://arxiv.org/pdf/1708.08712","https://www.researchgate.net/profile/Nadir_Durrani/publication/319349687_Neural_Machine_Translation_Training_in_a_Multi-Domain_Scenario/links/59d0f2a3aca2721f43673f75/Neural-Machine-Translation-Training-in-a-Multi-Domain-Scenario.pdf"]}
{"year":"2017","title":"Neural Machine Translation with LSTM's","authors":["J Dhaliwal"],"snippet":"... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 - 109 parallel corpus (81M) ... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 pc30109 parallel corpus (81M) ...","url":["https://people.umass.edu/~jdhaliwal/files/s2s.pdf"]}
{"year":"2017","title":"Neural Networks and Spelling Features for Native Language Identification","authors":["J Bjerva, G Grigonyte, R Ostling, B Plank - Bronze Sponsors, 2017"],"snippet":"... PoS tags are represented by 64-dimensional embeddings, initialised randomly; word tokens by 300-dimensional embeddings, initialised with GloVe (Pennington et al., 2014) em- beddings trained on 840 billion words of English web data from the Common Crawl project. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=255"]}
{"year":"2017","title":"Neural vs. Phrase-Based Machine Translation in a Multi-Domain Scenario","authors":["MA Farajian, M Turchi, M Negri, N Bertoldi, M Federico - EACL 2017, 2017"],"snippet":"... K PHP 38.4 K 259.0 K 9.7 K Ubuntu 9.0 K 47.7 K 8.6 K UN-TM 40.3 K 913.8 K 12.5 K CommonCrawl 2.6 M ... in particular NMT solutions, we used CommonCrawl and Europarl corpora as out-domain data in addition to the above-mentioned domain-specific corpora, resulting in ...","url":["http://www.aclweb.org/anthology/E/E17/E17-2.pdf#page=312"]}
{"year":"2017","title":"New Word Pair Level Embeddings to Improve Word Pair Similarity","authors":["A Shaukat, N Khan"],"snippet":"... Many previous approaches present embeddings for individual words [14, 15, 16, 27] using their distributional semantics (Common Crawl corpus1) and structured knowledge from ConceptNet and PPDB [31]. ... Figure 1 shows 1 http://commoncrawl.org/ ...","url":["http://faculty.pucit.edu.pk/nazarkhan/work/wps/wpe_icdar_wml17.pdf"]}
{"year":"2017","title":"NEWSQA: A MACHINE COMPREHENSION DATASET","authors":["A Trischler, T Wang, X Yuan, J Harris, A Sordoni"],"snippet":"... Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. ...","url":["https://www.openreview.net/pdf?id=ry3iBFqgl"]}
{"year":"2017","title":"NoSQL Web Crawler Application","authors":["GC Deka - Advances in Computers, 2017"],"snippet":"With the advent of Web technology, the Web is full of unstructured data called Big Data. However, these data are not easy to collect, access, and process at lar.","url":["http://www.sciencedirect.com/science/article/pii/S0065245817300323"]}
{"year":"2017","title":"Novel Ranking-Based Lexical Similarity Measure for Word Embedding","authors":["J Dutkiewicz, C Jędrzejek - arXiv preprint arXiv:1712.08439, 2017"],"snippet":"4.1 Experimental setup We use the unmodified vector space model trained on 840 billion words from Common Crawl data with the GloVe algorithm introduced in Pennington et al. (2014). The model consists of 2.2 million unique vectors; Each vector consists of 300 components …","url":["https://arxiv.org/pdf/1712.08439"]}
{"year":"2017","title":"NRC Machine Translation System for WMT 2017","authors":["C Lo, S Larkin, B Chen, D Stewart, C Cherry, R Kuhn… - WMT 2017, 2017"],"snippet":"... 2 Russian-English news translation We used all the Russian-English parallel corpora available for the constrained news translation task. They include the CommonCrawl corpus, the NewsCommentary v12 corpus, the Yandex corpus and the Wikipedia headlines corpus. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=354"]}
{"year":"2017","title":"On the Effective Use of Pretraining for Natural Language Inference","authors":["I Cases, MT Luong, C Potts - arXiv preprint arXiv:1710.02076, 2017"],"snippet":"... a 1We used the publicly released embeddings, trained with Common Crawl 840B tokens for GloVe (http:// nlp.stanford.edu/projects/glove/) and Google News 42B for word2vec https://code.google.com/ archive/p/word2vec/. Although ...","url":["https://arxiv.org/pdf/1710.02076"]}
{"year":"2017","title":"Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks","authors":["N Reimers, I Gurevych - arXiv preprint arXiv:1707.06799, 2017","NRI Gurevych"],"snippet":"... (2014) trained either on Wikipedia 2014 + Gigaword 5 (about 6 billion tokens) or on Common Crawl (about 840 billion tokens), and the Komninos and Manandhar (2016) embeddings11 trained on the Wikipedia August 2015 dump (about 2 billion tokens). ...","url":["https://arxiv.org/pdf/1707.06799","https://www.arxiv-vanity.com/papers/1707.06799v2/"]}
{"year":"2017","title":"Parallel Training Data Selection for Conversational Machine Translation","authors":["X Niu, M Carpuat"],"snippet":"... Corpus # Sentences # Words (en/fr) OpenSubtitles 33.5 M 284.0 M / 268.3 M MultiUN 13.2 M 367.1 M / 432.3 M Common Crawl 3.2 M 81.1 M / 91.3 M Europarl v7 2.0 M 55.7 M / 61.9 M Wikipedia 396 k 9.7 M / 8.7 M TED corpus 207 k 4.5 M / 4.8 M News Commentary v10 199 k ...","url":["https://pdfs.semanticscholar.org/fdf6/ae86229f51893dd6e33579511489af4a5eb7.pdf"]}
{"year":"2017","title":"Passfault: an Open Source Tool for Measuring Password Complexity and Strength","authors":["BA Rodrigues, JRB Paiva, VM Gomes, C Morris"],"snippet":"... Wikipedia: The full text of Wikipedia in 2015.Reddit: The corpus of Reddit comments through May 2015.CCrawl: Text extracted from the Common Crawl and language-detected with cld2. Page 6. ACKNOWLEDGMENTS ...","url":["https://www.owasp.org/images/1/13/Artigo-Passfault.pdf"]}
{"year":"2017","title":"Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation","authors":["H Kim, JH Lee, SH Na - WMT 2017, 2017"],"snippet":"... allel corpora including the Europarl corpus, common crawl corpus, news commentary, rapid corpus of EU press releases for the WMT17 translation task3, and src-pe (source sentences-their target post-editions) pairs for the WMT17 QE task. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=586"]}
{"year":"2017","title":"Predictor-Estimator: Neural Quality Estimation Based on Target Word Prediction for Machine Translation","authors":["H Kim, HY Jung, H Kwon, JH Lee, SH Na - ACM Transactions on Asian and Low- …, 2017"],"snippet":"... For training the word predictor, two parallel datasets of different sizes were used: a small dataset consisting of only the Europarl corpus (Koehn 2005) and a large dataset consisting of the Europarl corpus, common crawl corpus, and news commentary, which were provided for ...","url":["http://dl.acm.org/citation.cfm?id=3109480"]}
{"year":"2017","title":"Probabilistic Relation Induction in Vector Space Embeddings","authors":["Z Bouraoui, S Jameel, S Schockaert - arXiv preprint arXiv:1708.06266, 2017"],"snippet":"... data set1 (SG-GN). We also use two embeddings that have been learned with GloVe, one from the same Wikipedia dump (GloVe-Wiki) and one from the 840B words Common Crawl data set2 (GloVe-CC). For relations with at ...","url":["https://arxiv.org/pdf/1708.06266"]}
{"year":"2017","title":"Proposal for Automatic Extraction of Taxonomic Relations in Domain Corpus","authors":["HRL Chavez, MT Vidal - Advances in Pattern Recognition"],"snippet":"His methodology is based on two sources of evidence, substring matches and Hearts patterns. They analyze all Wikipedia in search of the Hearts patterns and extract those relationships and make use of another corpus like GigaWord, ukWac and CommonCrawl. 30","url":["http://www.rcs.cic.ipn.mx/rcs/2017_133/Proposal%20for%20Automatic%20Extraction%20of%20Taxonomic%20Relations%20in%20Domain%20Corpus.pdf"]}
{"year":"2017","title":"ProvDS: Uncertain Provenance Management over Incomplete Linked Data Streams","authors":["Q Liu"],"snippet":"... These datasets will be used to evaluate our provenance computation over incomplete Linked Data Streams techniques. • The Web Data Commons project5 extracts structured data from the Common Crawl, the largest web corpus available to the public. ...","url":["https://iswc2017.semanticweb.org/wp-content/uploads/papers/DC/paper_2.pdf"]}
{"year":"2017","title":"Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations","authors":["J Wieting, K Gimpel - arXiv preprint arXiv:1711.05732, 2017"],"snippet":"The model was trained on this data along with data from some smaller Czech sources ( 160k from common-crawl, 650k from Europarl and 190k from News) … We compare with Eu- roparl, Common-crawl, and News for Czech","url":["https://arxiv.org/pdf/1711.05732"]}
{"year":"2017","title":"Quantext: Analysing student responses to short-answer questions","authors":["J McDonald, ACM Moskal"],"snippet":"1 Similarity is calculated from a word2vec model of word embeddings using the GloVe algorithm (Pennington, Socher & Manning, 2014) and is pre-trained on the Common Crawl Corpus (Spiegler, 2013) … 1532-1543 Spiegler, S (2013) Statistics of the Common Crawl Corpus","url":["https://www.researchgate.net/profile/Adon_Moskal/publication/321266093_Quantext_Analysing_student_responses_to_short-answer_questions/links/5a179890a6fdcc50ade61806/Quantext-Analysing-student-responses-to-short-answer-questions.pdf"]}
{"year":"2017","title":"Question Answering on SQuAD","authors":["C Yang, H Ishfaq"],"snippet":"... Then we use word embeddings from GloVe[6] to map words into embedding vectors. To decrease the out of vocabulary (OOV) error, we use the Common Crawl 840B 300d GloVe vectors. Words not found in GloVe are initialized randomly. ...","url":["https://web.stanford.edu/class/cs224n/reports/2749099.pdf"]}
{"year":"2017","title":"Question Answering on the SQuAD Dataset","authors":["DH Park, V Lakshman"],"snippet":"... Initially, we used 100-dimensional word embeddings pretrained on the Wikipedia corpus to train our model before fine-tuning our system by switching to 300-dimensional GloVe vectors trained on the Common Crawl corpus. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761899.pdf"]}
{"year":"2017","title":"Question Answering with Multi-Perspective Context Matching","authors":["J Asperger"],"snippet":"... The word-level embeddings were taken from GloVe vectors that were pre-trained on the 840-billion-word Common Crawl Corpus. ... For my word representations, I used 300-dimensional GloVe vectors trained on the 840 billion word Common Crawl Corpus. ...","url":["https://pdfs.semanticscholar.org/599f/376502c61550fdd37011e0cb7157d281b493.pdf"]}
{"year":"2017","title":"Reading Comprehension on the SQuAD Dataset","authors":["FNU Budianto"],"snippet":"... The Glove version used is the Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors). Figure 1: Histogram of context length, question length, and answer length in the training set. 2 Page 3. ... I use the Glove840B300d Common Crawl for the word embedding layer. ...","url":["https://web.stanford.edu/class/cs224n/reports/2762006.pdf"]}
{"year":"2017","title":"Recurrent neural networks with specialized word embeddings for health-domain named-entity recognition","authors":["IJ Unanue, EZ Borzeshi, M Piccardi - arXiv preprint arXiv:1706.09569, 2017"],"snippet":"... Therefore, the training of the word embeddings only requires large, general-purpose text corpora such as Wikipedia (400K unique words) or Common Crawl (2.2M unique words), without the need for any manual annotation. ...","url":["https://arxiv.org/pdf/1706.09569"]}
{"year":"2017","title":"Regularizing neural networks by penalizing confident output distributions","authors":["G Pereyra, G Tucker, J Chorowski, Ł Kaiser, G Hinton - arXiv preprint arXiv: …, 2017"],"snippet":"... 535541. ACM, 2006. Christian Buck, Kenneth Heafield, and Bas Van Ooyen. N-gram counts and language models from the common crawl. In LREC, volume 2, pp. 4. Citeseer, 2014. 8 Page 9. Under review as a conference paper at ICLR 2017 ...","url":["https://arxiv.org/pdf/1701.06548"]}
{"year":"2017","title":"Reinvestigating the Classification Approach to the Article and Preposition Error Correction","authors":["R Grundkiewicz, M Junczys-Dowmunt"],"snippet":"... Other than that, default options were used. We learnt word vectors from 75 millions of English sentences extracted from Common Crawl data4. ... 3 https://code.google.com/p/word2vec/ 4 https://commoncrawl.org/ 5 http://www.comp.nus.edu.sg/~nlp/conll14st.html Page 6. ...","url":["http://www.research.ed.ac.uk/portal/files/40342436/ltc_073_grundkiewicz_2.pdf"]}
{"year":"2017","title":"Report on the 2nd Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW 2016)","authors":["J Debattista, JD Fernández, J Umbrich"],"snippet":"... 1Slides of the talk: https://aic.ai.wu.ac.at/ polleres/presentations/20160530Keynote-MEPDaW2016. pdf 2http://commoncrawl.org/ 3http://internetmemory.org/ 4https://archive.org/index.php 5http://swse.deri.org/dyldo/ ACM SIGIR Forum 84 Vol. 50 No. 2 December 2016 Page 4. ...","url":["https://pdfs.semanticscholar.org/c1eb/93952ed5cc4bda08bdd75bed84332656d864.pdf"]}
{"year":"2017","title":"Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging","authors":["N Reimers, I Gurevych - arXiv preprint arXiv:1707.09861, 2017"],"snippet":"... (2014) trained either on Wikipedia 2014 + Gigaword 5 (GloVe1 with 100 dimensions and GloVe2 with 300 dimensions) or on Common Crawl (GloVe3), and the Komninos and Manandhar (2016) embeddings (Komn.)10. We also evaluate the approach of Bojanowski et al. ...","url":["https://arxiv.org/pdf/1707.09861"]}
{"year":"2017","title":"Representation Stability as a Regularizer for Improved Text Analytics Transfer Learning","authors":["M Riemer, E Khabiri, R Goodwin - arXiv preprint arXiv:1704.03617, 2017"],"snippet":"... Our GRU model was fed a sequence of fixed 300 dimensional Glove vectors (Pennington et al., 2014), representing words based on analysis of 840 billion words from a common crawl of the internet, as the input xt for all tasks. ...","url":["https://arxiv.org/pdf/1704.03617"]}
{"year":"2017","title":"Representing Sentences as Low-Rank Subspaces","authors":["J Mu, S Bhat, P Viswanath - arXiv preprint arXiv:1704.05358, 2017"],"snippet":"... Due to the widespread use of word2vec and GloVe, we use their publicly available word representations – word2vec(Mikolov et al., 2013) trained us- ing Google News1 and GloVe (Pennington et al., 2014) trained using Common Crawl2 – to test our observations. ...","url":["https://arxiv.org/pdf/1704.05358"]}
{"year":"2017","title":"Retrieval, Crawling and Fusion of Entity-centric Data on the Web","authors":["S Dietze"],"snippet":"... Page 8. linked data world. However, the question to what extent this is due to the se- lective content of the Common Crawl or representative for schema.org adoption on the Web in general requires additional investigations. (a ...","url":["https://www.researchgate.net/profile/Stefan_Dietze/publication/312490472_Retrieval_Crawling_and_Fusion_of_Entity-centric_Data_on_the_Web/links/587e683808aed3826af45f18.pdf"]}
{"year":"2017","title":"Rule-based spreadsheet data transformation from arbitrary to relational tables","authors":["AO Shigarov, AA Mikhailov - Information Systems, 2017"],"snippet":"... These include about 50% of tables presented in 0.4M spreadsheets of ClueWeb09 Crawl 1 [5] and 147M (61%) of 233M web tables extracted from Common Crawl 2 [3]. They lack explicit semantics required for computer programs to interpret their layout and content. ...","url":["http://www.sciencedirect.com/science/article/pii/S0306437917304301"]}
{"year":"2017","title":"S3C: An Architecture for Space-Efficient Semantic Search over Encrypted Data in the Cloud","authors":["J Woodworth, MA Salehi, V Raghavan"],"snippet":"... To evaluate our system under Big data scale datasets, we utilized a second dataset, the Common Crawl Corpus from AWS, a web crawl composed of over five billion web pages We evaluated our system against the RFC using three types of metrics: Performance, Overhead ...","url":["http://hpcclab.org/paperPdf/bigdata16/bigdata16.pdf"]}
{"year":"2017","title":"Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ","authors":["JS Kessler - arXiv preprint arXiv:1703.00565, 2017"],"snippet":"Page 1. Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ Jason S. Kessler CDK Global [email protected] Abstract Scattertext is an open source tool for visualizing linguistic variation between document categories in a language-independent way. ...","url":["https://arxiv.org/pdf/1703.00565"]}
{"year":"2017","title":"Scientific Literature Text Mining and the Case for Open Access","authors":["G Sarma"],"snippet":"… science and society of scientific literature text mining. We need a scientific analogue to CommonCrawl, an open respository of scientific articles for use in exploratory data analysis. Ironically, this argument is not new, and indeed …","url":["https://www.tjoe.org/pub/scientific-literature-text-mining-and-the-case-for-open-access"]}
{"year":"2017","title":"Secure Semantic Search Over Encrypted Big Data in the Cloud","authors":["JW Woodworth - 2017"],"snippet":"Page 1. Secure Semantic Search Over Encrypted Big Data in the Cloud A Dissertation Presented to the Graduate Faculty of the University of Louisiana at Lafayette In Partial Fulfillment of the Requirements for the Degree Master's of Science Jason W. Woodworth Spring 2017 ...","url":["http://hpcclab.org/theses/jasonwoodworth17.pdf"]}
{"year":"2017","title":"SEF@ UHH at SemEval-2017 Task 1: Unsupervised knowledge-free semantic textual similarity via paragraph vector","authors":["MS Duma, W Menzel - Proceedings of SemEval-2017. http://www. aclweb. org/ …, 2017"],"snippet":"... Track / Corpora AR-AR AR-EN ES-ES ES-EN EN-EN TR-EN Commoncrawl - - 1.84M - 2.39M - Wikipedia 151K 151K - 1.81M - 160K TED 152K 152K - 157K - 137K MultiUN 1M 1M - - - - EUBookshop - - - - - 23K SETIMES - - - - - 207K Tatoeba - - - - - 156K SNLI* - 150K - 150K ...","url":["https://www.aclweb.org/anthology/S/S17/S17-2024.pdf"]}
{"year":"2017","title":"Selective Decoding for Cross-lingual Open Information Extraction","authors":["S Zhang, K Duh, B Van Durme"],"snippet":"... The word embedding size is 300 for input tokens on both the encoder side and the decoder side. We use open-source GloVe vectors (Pennington et al., 2014) trained on Common Crawl 840B with 300 dimensions6 to initialize the word embeddings on the decoder side. ...","url":["https://www.cs.jhu.edu/~s.zhang/assets/pdf/selective-decoding.pdf"]}
{"year":"2017","title":"Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints","authors":["N Mrkšić, I Vulić, DÓ Séaghdha, I Leviant, R Reichart… - arXiv preprint arXiv: …, 2017"],"snippet":"... The first four languages are those of the Multilingual SimLex-999 dataset. For the four SimLex languages, we employ four well-known, high-quality word vector collections: a) The Common Crawl GloVe English vectors from Pennington et al. ...","url":["https://arxiv.org/pdf/1706.00374"]}
{"year":"2017","title":"Semantic vector evaluation and human performance on a new vocabulary MCQ test","authors":["JP Levy, JA Bullinaria, S McCormick"],"snippet":"... The 42B and 840B vectors were generated from 42 billion and 840 billion word corpora derived from Common Crawl archives (obtained by an automated process of systematically browsing the web). All the GloVe vectors used here have 300 dimensions. ...","url":["https://pdfs.semanticscholar.org/6506/d7783d2297f70c15a8caa07f022c36dfb168.pdf"]}
{"year":"2017","title":"Semantic-based Analysis of Javadoc Comments","authors":["A Blasi, K Kuznetsov, A Goffi, SD Castellanos, A Gorla…"],"snippet":"... In our preliminary tests we found that the publicly available pre-trained word vectors of the GloVe model based on Common Crawl dataset2 already produce good results, as they identify relations such as: “if vertex exists” ; graph.containsVertex(v) and “if the graph contains the ...","url":["http://sattose.wdfiles.com/local--files/2017:schedule/SATToSE_2017_paper_24.pdf"]}
{"year":"2017","title":"Semantics derived automatically from language corpora contain human-like biases","authors":["A Caliskan, JJ Bryson, A Narayanan - Science, 2017"],"snippet":"... We used the largest of the four corpora provided—the “Common Crawl” corpus obtained from a large-scale crawl of the Internet, containing 840 billion tokens (roughly, words). Tokens in this corpus are case sensitive, resulting in 2.2 million different ones. ...","url":["http://science.sciencemag.org/content/356/6334/183.abstract"]}
{"year":"2017","title":"Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation","authors":["D Cer, M Diab, E Agirre, I Lopez-Gazpio, L Specia - Proceedings of the 11th …, 2017"],"snippet":"Page 1. Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 1–14, Vancouver, Canada, August 3 - 4, 2017. ©2017 Association for Computational Linguistics SemEval-2017 Task ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval001.pdf"]}
{"year":"2017","title":"Sentence Embedding for Neural Machine Translation Domain Adaptation","authors":["R Wang, A Finch, M Utiyama, E Sumita"],"snippet":"... Out- of-domain corpora contained Common Crawl, Europarl v7, News Commentary v10 and United Nation (UN) EN-FR parallel corpora.4 • NIST 2006 Chinese (ZH) to English corpus 5 was used as the in-domain training corpus, following the settings of (Wang et al., 2014). ...","url":["https://www.aclweb.org/anthology/P/P17/P17-2089.pdf"]}
{"year":"2017","title":"SentiHeros at SemEval-2017 Task 5: An application of Sentiment Analysis on Financial Tweets","authors":["N Tabari, A Seyeditabari, W Zadrozny"],"snippet":"... In two separate experiments, we used vectors based on the Common Crawl (840B tokens, 2.2M vo- cab, cased, 300 dimensions), and the pre-trained word vectors for Twitter (2B tweets, 27B tokens, 1.2M vocab, 200 dimensions). ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval146.pdf"]}
{"year":"2017","title":"Shallow reading with Deep Learning: Predicting popularity of online content using only its title","authors":["K Marasek, P Rokita","W Stokowiec, T Trzcinski, K Wolk, K Marasek, P Rokita - arXiv preprint arXiv: …, 2017"],"snippet":"... As a text embedding in our experiments, we use publicly available GloVe word vectors [12] pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC)7. Since their output dimensionality can be modified, we show the results for varying ...","url":["http://ii.pw.edu.pl/~ttrzcins/papers/ISMIS_2017_paper_57.pdf","https://arxiv.org/pdf/1707.06806"]}
{"year":"2017","title":"Simple Dynamic Coattention Networks","authors":["W Wu"],"snippet":"... unk〉. This affected the accuracy of predicted answers, as seen from Table 3. To reduced the number of unknown words, the Common Crawl GloVe vectors, which has a larger vocabulary, should be used instead. Document ...","url":["https://pdfs.semanticscholar.org/6a79/6c1c9c30913cb24d64939f90dcb06fa82be7.pdf"]}
{"year":"2017","title":"Six Challenges for Neural Machine Translation","authors":["P Koehn, R Knowles - arXiv preprint arXiv:1706.03872, 2017"],"snippet":"... BLEU scores of 34.5 on the WMT 2016 news test set (for the NMT model, this reflects the BLEU score re- sulting from translation with a beam size of 1). We use a single corpus for computing our lexical frequency counts (a concatenation of Common Crawl, Europarl, and News ...","url":["https://arxiv.org/pdf/1706.03872"]}
{"year":"2017","title":"Sockeye: A Toolkit for Neural Machine Translation","authors":["F Hieber, T Domhan, M Denkowski, D Vilar, A Sokolov… - arXiv preprint arXiv …, 2017"],"snippet":"… 9 Page 10. EN→DE LV→EN Dataset Sentences Tokens Types Sentences Tokens Types Europarl v7/v8 1,905,421 91,658,252 862,710 637,687 27,256,803 437,914 Common Crawl 2,394,616 97,473,856 3,655,645 - - - News Comm. v12 270,088 11,990,594 460,220 …","url":["https://arxiv.org/pdf/1712.05690"]}
{"year":"2017","title":"Specialising Word Vectors for Lexical Entailment","authors":["I Vulić, N Mrkšić - arXiv preprint arXiv:1710.06371, 2017"],"snippet":"... experiment with a variety of well-known, publicly available English word vectors: 1) Skip-Gram with Negative Sampling (SGNS) (Mikolov et al., 2013) trained on the Polyglot Wikipedia (Al-Rfou et al., 2013) by Levy and Goldberg (2014); 2) GLOVE Common Crawl (Pennington et ...","url":["https://arxiv.org/pdf/1710.06371"]}
{"year":"2017","title":"SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering","authors":["L Xu, W Dou, C Gao, J Wang, J Wei, H Zhong, T Huang"],"snippet":"Page 1. SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering Liang Xu1,2, Wensheng Dou1*, Chushu Gao1, Jie Wang1,2, Jun Wei1,2, Hua Zhong1, Tao Huang1 1State Key Laboratory of ...","url":["http://www.tcse.cn/~wsdou/papers/2017-msr-spreadcluster.pdf"]}
{"year":"2017","title":"SQuAD Question Answering using Multi-Perspective Matching","authors":["Z Maurer, S Desai, S Usmani"],"snippet":"... in some cases. In terms of future work to improve on our models, we can use 840B Common Crawl GloVe word vectors rather than the Glove word vectors pretrained on Wikipedia 2014 and Gigaword5. Given additional computational ...","url":["https://pdfs.semanticscholar.org/3b1a/a646bdc6daab268f6763b829686b00263333.pdf"]}
{"year":"2017","title":"Story Cloze Ending Selection Baselines and Data Examination","authors":["M Armstrong","T Mihaylov, A Frank - arXiv preprint arXiv:1703.04330, 2017"],"snippet":"Our contribution is that we set a new baseline for the task, showing that a simple linear model based on distributed representations and semantic similarity features achieves state-of-the-art results. We also evaluate the ability of different embedding …","url":["https://arxiv.org/pdf/1703.04330","https://zdoc.pub/story-cloze-ending-selection-baselines-and-data-examination.html"]}
{"year":"2017","title":"Stronger Baselines for Trustable Results in Neural Machine Translation","authors":["M Denkowski, G Neubig - arXiv preprint arXiv:1706.09733, 2017"],"snippet":"... Scenario Size (sent) Sources WMT German-English 4,562,102 Europarl, Common Crawl, news commentary WMT English-Finnish 2,079,842 Europarl, Wikipedia titles WMT Romanian-English 612,422 Europarl, SETimes IWSLT English-French 220,400 TED talks IWSLT Czech ...","url":["https://arxiv.org/pdf/1706.09733"]}
{"year":"2017","title":"Structured Attention Networks","authors":["Y Kim, C Denton, L Hoang, AM Rush - arXiv preprint arXiv:1702.00887, 2017"],"snippet":"Page 1. Under review as a conference paper at ICLR 2017 STRUCTURED ATTENTION NETWORKS Yoon Kim∗ Carl Denton∗ Luong Hoang Alexander M. Rush {yoonkim@seas,carldenton@college,lhoang@g,srush@seas ...","url":["https://arxiv.org/pdf/1702.00887"]}
{"year":"2017","title":"Supervised Learning of Universal Sentence Representations from Natural Language Inference Data","authors":["A Conneau, D Kiela, H Schwenk, L Barrault, A Bordes - arXiv preprint arXiv: …, 2017"],"snippet":"... 512 hidden units. We use opensource GloVe vectors trained on Common Crawl 840B2 with 300 dimensions as fixed word embeddings and initialize other word vectors to random values sampled from U(-0.1,0.1). Input sen ...","url":["https://arxiv.org/pdf/1705.02364"]}
{"year":"2017","title":"SVD-Softmax: Fast Softmax Approximation on Large Vocabulary Neural Networks","authors":["K Shim, M Lee, I Choi, Y Boo, W Sung - Advances in Neural Information Processing …, 2017"],"snippet":"… 5, pp. 79–86. [29] Common Crawl Foundation, “Common crawl,” http://commoncrawl.org, 2016, Accessed: 2017-04-11. [30] Jorg Tiedemann, “Parallel data, tools and interfaces in OPUS,” in LREC, 2012, vol. 2012, pp. 2214–2218. 10 Page 11 …","url":["http://papers.nips.cc/paper/7130-svd-softmax-fast-softmax-approximation-on-large-vocabulary-neural-networks.pdf"]}
{"year":"2017","title":"SwissLink: High-Precision, Context-Free Entity Linking Exploiting Unambiguous Labels","authors":["R Prokofyev, M Luggen, DE Difallah, P Cudré-Mauroux - 2017"],"snippet":"… In order to understand how annotations are used on the Web, we crawled all entity links found on two large datasets, by processing the CommonCrawl 3 and the Wikipedia dumps 4. The output of our processing is a list of all words and phrases that were used as anchors in …","url":["https://exascale.info/assets/pdf/swisslink-semantics2017.pdf"]}
{"year":"2017","title":"Syntax-Directed Attention for Neural Machine Translation","authors":["K Chen, R Wang, M Utiyama, E Sumita, T Zhao - arXiv preprint arXiv:1711.04231, 2017"],"snippet":"… 4.1 Data sets The proposed methods were evaluated on two data sets. • For English (EN) to German (DE) translation task, 4.43 million bilingual sentence pairs of the WMT'14 data set was used as the training data, including Common Crawl, News Commentary and Europarl v7 …","url":["https://arxiv.org/pdf/1711.04231"]}
{"year":"2017","title":"SYSTRAN Purely Neural MT Engines for WMT2017","authors":["Y Deng, J Kim, G Klein, C Kobus, N Segal, C Servan… - WMT 2017, 2017"],"snippet":"... 3.1 Corpora We used the parallel corpora made available for the shared task: Europarl v7, Common Crawl corpus, News Commentary v12 and Rapid corpus of EU press releases. Both English and German texts were preprocessed with standard tokenisation tools. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=289"]}
{"year":"2017","title":"Table Identification and Reconstruction in Spreadsheets","authors":["E Koci, M Thiele, O Romero, W Lehner - International Conference on Advanced …, 2017"],"snippet":"... This corpus is of a particular interest, since it provides access to real-world business spreadsheets used in industry. The third corpus is FUSE [3] that contains 249, 376 unique spreadsheets, extracted from Common Crawl 6 . ...","url":["http://link.springer.com/chapter/10.1007/978-3-319-59536-8_33"]}
{"year":"2017","title":"Tagging Patient Notes With ICD-9 Codes","authors":["S Ayyar"],"snippet":"... For every word we obtained pretrained word vectors from Glove (Common Crawl 840 billion tokens, 2.2 million vocab of dimension size 300)[7]. Since our text consists of translated text from clinical notes, there are several misrepresentations or errors in spellings of words ...","url":["https://web.stanford.edu/class/cs224n/reports/2744196.pdf"]}
{"year":"2017","title":"Taking into account Inter-sentence Similarity for Update Summarization","authors":["G de Chalendar, O Ferret - Proceedings of the Eighth International Joint …, 2017"],"snippet":"MCL-GLOVE-ICSISumm. In this run, we used 2.2 million word vectors (300 dimensions) trained with GloVe (Pennington et al., 2014) on the 840 billion tokens from the Common Crawl repository. • MCL-ConceptNet-ICSISumm","url":["http://www.aclweb.org/anthology/I17-2035"]}
{"year":"2017","title":"Taxonomy Induction using Hypernym Subsequences","authors":["A Gupta, R Lebret, H Harkous, K Aberer - arXiv preprint arXiv:1704.07626, 2017"],"snippet":"... A prominent ex- ample of such a resource is WebIsA [Seitner et al., 2016], a collection of more than 400 million hypernymy relations for English, extracted from the CommonCrawl web corpus using lexico-syntactic patterns. However ...","url":["https://arxiv.org/pdf/1704.07626"]}
{"year":"2017","title":"TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering","authors":["Y Jang, Y Song, Y Yu, Y Kim, G Kim - arXiv preprint arXiv:1704.04497, 2017"],"snippet":"... We then generate multiple choice options for each QA pair, selecting four phrases from our dataset. Specifically, we represent all verbs in our dictionary as a 300D vector using the GloVe word embedding [26] pretrained on the Common Crawl dataset. ...","url":["https://arxiv.org/pdf/1704.04497"]}
{"year":"2017","title":"The AFRL-MITLL WMT17 Systems: Old, New, Borrowed, BLEU","authors":["J Gwinnup, T Anderson, G Erdmann, K Young, M Kazi… - WMT 2017, 2017"],"snippet":"... 2.1 Data Used We utilized all available data sources provided for the language pairs we participated in, including the Commoncrawl (Smith et ... For Russian we conducted monolingual selection from provided Common Crawl, to match test sets from 2012-2016 (15K lines total). ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=327"]}
{"year":"2017","title":"The Effect of Translationese on Tuning for Statistical Machine Translation","authors":["S Stymne"],"snippet":"... For training we used Europarl and News commentary, provided by WMT, with a total of over 2M segments for German and French and .77M for Czech. For English-German we used additional data: bilingual Common Crawl (1.5M) and monolingual News (83M). ...","url":["http://www.ep.liu.se/ecp/131/030/ecp17131030.pdf"]}
{"year":"2017","title":"The Helsinki Neural Machine Translation System","authors":["R Östling, Y Scherrer, J Tiedemann, G Tang… - arXiv preprint arXiv: …, 2017"],"snippet":"... Another common outcome in SMT is the strong impact of language models. We can confirm this once again. Adding a second language model trained on common-crawl data (CC) has a strong influence on translation quality as we can see by the BLEU scores in Table 5. ...","url":["https://arxiv.org/pdf/1708.05942"]}
{"year":"2017","title":"The HIT-SCIR System for End-to-End Parsing of Universal Dependencies","authors":["W Che, J Guo, Y Wang, B Zheng, H Zhao, Y Liu… - CoNLL 2017, 2017"],"snippet":"... 4.1. 2 Data and Tools We use the provided 100-dimensional multilingual word embeddings5 in our tokenization, POS tagging and parsing models, and use the Wikipedia and CommonCrawl data for training Brown clusters. The number of clusters is set to 256. ...","url":["https://www.aclweb.org/anthology/K/K17/K17-3.pdf#page=64"]}
{"year":"2017","title":"The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2017","authors":["NQ Pham, J Niehues, TL Ha, E Cho, M Sperber… - WMT 2017, 2017"],"snippet":"... 2.1 German-English As parallel data for our German-English systems, we used Europarl v7 (EPPS), News Commentary v12 (NC), Rapid corpus of EU press releases, Common Crawl corpus, and simulated data. Except ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=390"]}
{"year":"2017","title":"The RWTH Aachen University English-German and German-English Machine Translation System for WMT 2017","authors":["JT Peter, A Guta, T Alkhouli, P Bahar, J Rosendahl"],"snippet":"... Both models are trained on all monolingual corpora, except the commoncrawl corpus, and the target side of the bilingual data (Section 4.2), which sums up to 365.44M sentences and 7230.15M running words, respectively. ...","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1048/PeterJan-ThorstenGutaAndreasAlkhouliTamerBaharParniaRosendahlJanRossenbachNickGra%E7aMiguelNeyHermann--TheRWTHAachenUniversityEnglish-GermanGerman-EnglishMachineTranslationSystemforWMT2017--2017.pdf"]}
{"year":"2017","title":"The TALP-UPC Neural Machine Translation System for German/Finnish-English Using the Inverse Direction Model in Rescoring","authors":["C Escolano, MR Costa-jussà, JAR Fonollosa - … of the Second Conference on Machine …, 2017"],"snippet":"... 4.1 Data and Preprocess For the three language pairs that we experimented with, we used all parallel data available in the evaluation1. For German-English, we used: europarl v.7, news commentary v.12, common crawl and rapid corpus of EU press releases. ...","url":["http://www.aclweb.org/anthology/W17-4725"]}
{"year":"2017","title":"The UMD Machine Translation Systems at IWSLT 2016: English-to-French Translation of Speech Transcripts","authors":["X Niu, M Carpuat - Proceedings of the ninth International Workshop on …, 2016"],"snippet":"... Corpus # Sentences # Words (en/fr) OpenSubtitles 33.5 M 284.0 M / 268.3 M MultiUN 13.2 M 367.1 M / 432.3 M Common Crawl 3.2 M 81.1 M / 91.3 M Europarl v7 2.0 M 55.7 M / 61.9 M Wikipedia 396 k 9.7 M / 8.7 M TED corpus 207 k 4.5 M / 4.8 M News Commentary v10 199 k ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_26.pdf"]}
{"year":"2017","title":"The UMD Neural Machine Translation Systems at WMT17 Bandit Learning Task","authors":["A Sharaf, S Feng, K Nguyen, K Brantley, H Daumé III - arXiv preprint arXiv: …, 2017"],"snippet":"... sider 40k sentences). Using this monolingual data, we use data selection on a large corpus of parallel out-of-domain data (Europarl, NewsCommentary, CommonCrawl, Rapid) to seed an initial translation model. Overall, the ...","url":["https://arxiv.org/pdf/1708.01318"]}
{"year":"2017","title":"The University of Edinburgh's Neural MT Systems for WMT17","authors":["R Sennrich, A Birch, A Currey, U Germann, B Haddow… - arXiv preprint arXiv: …, 2017"],"snippet":"... the whole of CzEng 1.6pre (Bojar et al., 2016), plus the latest WMT releases of Europarl, News-commentary and CommonCrawl... We use the following resources from the WMT parallel data: News Commentary v12, Common Crawl, Yandex Corpus and UN Parallel Corpus V1.0 ...","url":["https://arxiv.org/pdf/1708.00726"]}
{"year":"2017","title":"The University of Edinburgh's systems submission to the MT task at IWSLT","authors":["M Junczys-Dowmunt, A Birch - Proceedings of the ninth International Workshop on …, 2016"],"snippet":"... Commoncrawl [3] 2.3M 3.2M Europarl v7 [4] 1.9M 2.0M Giga Fr-En [3] – 22.5M News Commentary v11 [3] 0.2M 0.2M Opensubtitles 2016 [5] 13.4M 33.5M ... [7] C. Buck, K. Heafield, and B. van Ooyen, “N-gram counts and language models from the common crawl,” in Proceedings ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_27.pdf"]}
{"year":"2017","title":"The Web Data Commons Structured Data Extraction","authors":["A Primpeli, R Meusel, C Bizer, H Stuckenschmidt - 2017"],"snippet":"... for the year 2016. The Web Data Commons project extracts structured data from the web corpus provided by Common Crawl, the largest public web corpus, and offers the extracted data for public download. In order to process ...","url":["http://archiv.ub.uni-heidelberg.de/volltextserver/22891/"]}
{"year":"2017","title":"To Parse or Not to Parse: An Experimental Comparison of RNTNs and CNNs for Sentiment Analysis","authors":["Z Ahmadi, A Stier, M Skowron, S Kramer"],"snippet":"... On other datasets, we use the model trained on the web data from Common Crawl which contains a case-sensitive vocabulary of size 2.2 million. In all the experiments, the size of the word vector, the minibatch and the epochs were set to 25, 20 and 100, respectively. ...","url":["http://ceur-ws.org/Vol-1874/paper_1.pdf"]}
{"year":"2017","title":"TO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING: “TAIGA” SYNTAX TREE CORPUS AND PARSER","authors":["TO Shavrina, O Shapovalova - КОРПУСНАЯ ЛИНГВИСТИКА–2017"],"snippet":"For Russian language, large collections of web corpora are assembled—projects like RuTenTen and Aranea Russicum, which are not available for downloading, unlike resources based on Common Crawl, but all these corpora are crawled from an unbalanced set of links and …","url":["https://dspace.spbu.ru/bitstream/11701/8786/1/%D0%9A%D0%BE%D1%80%D0%BF%D1%83%D1%81%D0%BD%D0%B0%D1%8F%20%D0%BB%D0%B8%D0%BD%D0%B3%D0%B2%D0%B8%D1%81%D1%82%D0%B8%D0%BA%D0%B0-2017%20%28%D1%82%D1%80%D1%83%D0%B4%D1%8B%20%D0%BC%D0%B5%D0%B6%D0%B4.%20%D0%BA%D0%BE%D0%BD%D1%84%D0%B5%D1%80.%29.pdf#page=78"]}
{"year":"2017","title":"Topics in Data Science/Өгөгдлийн шинжлэх ухаан","authors":["R Womack - 2017"],"snippet":"Page 1. Topics in Data Science / Өгөгдлийн шинжлэх ухаан Rutgers University has made this article freely available. Please share how this access benefits you. Your story matters. [https://rucore.libraries.rutgers.edu/rutgers-lib/52378/story/] …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/52378/PDF/1/"]}
{"year":"2017","title":"Toward intrusion detection using belief decision trees for big data","authors":["I Boukhris, Z Elouedi, M Ajabi - Knowledge and Information Systems, 2017"],"snippet":"Page 1. Knowl Inf Syst DOI 10.1007/s10115-017-1034-4 REGULAR PAPER Toward intrusion detection using belief decision trees for big data Imen Boukhris1 · Zied Elouedi1 · Mariem Ajabi1 Received: 3 December 2015 / Accepted ...","url":["http://link.springer.com/article/10.1007/s10115-017-1034-4"]}
{"year":"2017","title":"Towards Accurate Duplicate Bug Retrieval Using Deep Learning Techniques","authors":["J Deshmukh, S Podder, S Sengupta, N Dubash - Software Maintenance and …, 2017"],"snippet":"Each word in the dictionary was then mapped to its corresponding embedding. We experimented with both GloVe vectors trained3 on Common Crawl dataset as well as Word2Vec vectors trained4 on Google news dataset. We","url":["http://ieeexplore.ieee.org/abstract/document/8094414/"]}
{"year":"2017","title":"Towards Automatic Identification of Fake News: Headline-Article Stance Detection with LSTM Attention Models","authors":["S Chopra, S Jain, JM Sholar - 2017"],"snippet":"... 3 Page 4. of Wikipedia and Common Crawl. We further created a randomly initialized UNK vector of zeros, for words that were not found in the GloVe set. 5.4 LSTM Attention Architectures 5.4.1 Conditionally Encoded (CE) LSTMs ...","url":["https://pdfs.semanticscholar.org/eecc/5781c826a0af8229b8a24a6fca3d3e48b0fa.pdf"]}
{"year":"2017","title":"Towards Automatically Evaluating Security Risks and Providing Cyber Intelligence","authors":["X Liao - 2017"],"snippet":"Page 1. TOWARDS AUTOMATICALLY EVALUATING SECURITY RISKS AND PROVIDING CYBER INTELLIGENCE A Thesis Presented to The Academic Faculty by Xiaojing Liao In Partial Fulfillment of the Requirements for ...","url":["https://smartech.gatech.edu/bitstream/handle/1853/58679/LIAO-DISSERTATION-2017.pdf?sequence=1&isAllowed=y"]}
{"year":"2017","title":"Towards Document-Level Neural Machine Translation","authors":["L Miculicich Werlen - 2017"],"snippet":"Page 1. TROPE R HCRAESE R PAID I TOWARDS DOCUMENT-LEVEL NEURAL MACHINE TRANSLATION Lesly Miculicich Werlen Idiap-RR-25-2017 SEPTEMBER 2017 Centre du Parc, Rue Marconi 19, PO Box 592, CH ...","url":["https://infoscience.epfl.ch/record/231129/files/MiculicichWerlen_Idiap-RR-25-2017.pdf"]}
{"year":"2017","title":"Towards Semantic Query Segmentation","authors":["A Kale, T Taula, S Hewavitharana, A Srivastava - arXiv preprint arXiv:1707.07835, 2017"],"snippet":"... estimators. This process was repeated with pretrained GloVe vectors on common crawl [14] and facebook fasttext [2] pretrained model over Wikipedia corpus with 2.5M word vocabulary. 2 shows the experiment results. We ...","url":["https://arxiv.org/pdf/1707.07835"]}
{"year":"2017","title":"Towards the ImageNet-CNN of NLP: Pretraining Sentence Encoders with Machine Translation","authors":["B McCann, J Bradbury, C Xiong, R Socher - Advances in Neural Information …, 2017"],"snippet":"When training an MT-LSTM, we used fixed 300-dimensional word vectors. We used the CommonCrawl-840B GloVe model for English word vectors, which were completely fixed during training, so that the MT-LSTM had to learn how to use the pretrained vectors for translation …","url":["http://papers.nips.cc/paper/7209-towards-the-imagenet-cnn-of-nlp-pretraining-sentence-encoders-with-machine-translation.pdf"]}
{"year":"2017","title":"TraininG towards a society of data-saVvy inforMation prOfessionals to enable open leadership INnovation","authors":["T Blume, F Böschen, L Galke, A Saleh, A Scherp - 2017"],"snippet":"Page 1. Deliverable 3.1: Technologies for MOVING data processing and visualisation v1.0 Till Blume, Falk Böschen, Lukas Galke, Ahmed Saleh, Ansgar Scherp, Matthias Schulte-Althoff/ZBW Chrysa Collyda, Vasileios Mezaris, Alexandros Pournaras, Christos Tzelepis/CERTH ...","url":["http://moving-project.eu/wp-content/uploads/2017/04/moving_d3.1_v1.0.pdf"]}
{"year":"2017","title":"Translation Quality and Productivity: A Study on Rich Morphology Languages","authors":["L Specia, K Harris, F Blain, A Burchardt, V Macketanz"],"snippet":"... This process resulted in: • EN-DE: Over 20 million generic and in-domain sentence pairs obtained by merging the datasets available in the OPUS (Tiedemann, 2012), TAUS, WMT and JRC 3 repositories (eg Europarl, CDEP, CommonCrawl, etc.); ...","url":["https://fredblain.org/papers/pdf/specia_et_al_2017_translation_quality_and_productivity.pdf"]}
{"year":"2017","title":"Translation Quality Estimation Using only bilingual Corpora","authors":["L Liu, A Fujita, M Utiyama, A Finch, E Sumita - IEEE/ACM Transactions on Audio, …, 2017"],"snippet":"... languages. As the bilingual corpora for conducting M2LE training, we employed Europarl and Common Crawl provided by WMT13 for the WMT15 and WMT14 tasks and a Japanese-Chinese bilingual corpus [9] for the JA2ZH task. ...","url":["http://ieeexplore.ieee.org/abstract/document/7949019/"]}
{"year":"2017","title":"TSP: Learning Task-Specific Pivots for Unsupervised Domain Adaptation","authors":["X Cui, F Coenen, D Bollegala"],"snippet":"... We use the publicly available D = 300 dimensional GloVe4 (trained using 42B tokens from the Common Crawl) and CBOW5 (trained using 100B tokens from Google News) embeddings as the word representations required by TSP. ...","url":["https://cgi.csc.liv.ac.uk/~danushka/papers/Xia_ECML_2017.pdf"]}
{"year":"2017","title":"Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension","authors":["D Golub, PS Huang, X He, L Deng - arXiv preprint arXiv:1706.09789, 2017"],"snippet":"... We initialize word-embeddings for the BIDAF model, answer synthesis module, and question synthesis module with 300-dimensional-GloVe vectors (Pennington et al., 2014) trained on the 840B Common Crawl corpus. We set all embeddings of unknown word tokens to zero. ...","url":["https://arxiv.org/pdf/1706.09789"]}
{"year":"2017","title":"Two-Step MT: Predicting Target Morphology","authors":["F Burlot, E Knyazeva, T Lavergne, F Yvon - 2016"],"snippet":"... from TED training set Full TED set (117k) + QED (242k) + europarl (885k) + news-commentary (1M) Monolingual data (various subsets ranging from 5M to 200M): Target side of the biggest parallel corpus Czeng-1.6-pre subtitles news corpora (WMT'16) common-crawl (WMT'16 ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT16_Burlot.pdf"]}
{"year":"2017","title":"Unbounded cache model for online language modeling with open vocabulary","authors":["E Grave, M Cisse, A Joulin - arXiv preprint arXiv:1711.02604, 2017"],"snippet":"... In the following, we refer to this dataset as commentary. • Common Crawl is a text dataset collected from diverse web sources. The dataset is shuffled at the sentence level. ... [9] C. Buck, K. Heafield, and B. van Ooyen. N-gram counts and language models from the common crawl...","url":["https://arxiv.org/pdf/1711.02604"]}
{"year":"2017","title":"Understanding and Predicting the Usefulness of Yelp Reviews","authors":["DZ Liu"],"snippet":"... I concatenate output from both RNNs to make the final prediction. (figure 1) [1] https://www.yelp.com/dataset_challenge [2] Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors): glove.42B.300d.zip from http://nlp.stanford.edu/projects/glove/ Page 4. ...","url":["https://web.stanford.edu/class/cs224n/reports/2760995.pdf"]}
{"year":"2017","title":"Understanding Regional Context of World Wide Web using Common Crawl Corpus","authors":["MA Mehmood, HM Shafiq, A Waheed"],"snippet":"Abstract: The World Wide Web has emerged as the most important and essential tool for the society. Today, people heavily rely on rich resources available in the web for communication, business, maps, and social networking etc. In addition, people seek web","url":["https://www.researchgate.net/profile/Amir_Mehmood/publication/321489200_Understanding_Regional_Context_of_World_Wide_Web_using_Common_Crawl_Corpus/links/5a251abaaca2727dd87e780a/Understanding-Regional-Context-of-World-Wide-Web-using-Common-Crawl-Corpus.pdf"]}
{"year":"2017","title":"Understanding Spreadsheet Evolution in Practice","authors":["L Xu - Software Maintenance and Evolution (ICSME), 2017 …, 2017"],"snippet":"IEEE International Conference on Software Engineering (ICSE), 2015, pp. 716. [28] “Common crawl data on AWS.” [Online]. Available: http://aws.amazon.com/datasets/ 41740. [29] C. Chambers, M. Erwig, and M. Luckey, “SheetDiff","url":["http://ieeexplore.ieee.org/abstract/document/8094479/"]}
{"year":"2017","title":"Unsupervised Neural Machine Translation","authors":["M Artetxe, G Labaka, E Agirre, K Cho - arXiv preprint arXiv:1710.11041, 2017"],"snippet":"... For that purpose, we used the combination of all parallel corpora provided at WMT 2014, which comprise Europarl, Common Crawl and News Commentary for both language pairs plus the UN and the Gigaword corpus for French-English. ...","url":["https://arxiv.org/pdf/1710.11041"]}
{"year":"2017","title":"Using Distributional Semantics for Automatic Taxonomy Induction","authors":["B Zafar, M Cochez, U Qamar"],"snippet":"... system. They used general and domain specific corpora such as GigaWord, ukWac etc. and the common crawl to extract lexico-syntactic patterns. Additionally, they applied pruning methods to refine the generated taxonomy. ...","url":["http://users.jyu.fi/~miselico/papers/distributional-semantics-taxonomy.pdf"]}
{"year":"2017","title":"Using images to improve machine-translating e-commerce product listings","authors":["I Calixto, D Stein, E Matusov, P Lohar, S Castilho… - EACL 2017, 2017"],"snippet":"... Table 2 we show the number of running words as well as the perplexity scores obtained with LMs trained on three sets of different German corpora: the Multi30k, eBay's in-domain data and a concatenation of the WMT 20152 Europarl (Koehn, 2005), Common Crawl and News ...","url":["https://www.aclweb.org/anthology/E/E17/E17-2.pdf#page=669"]}
{"year":"2017","title":"Using Recurrent Neural Network to Predict The Usefulness of Yelp Reviews","authors":["DZ Liu, G Singh"],"snippet":"... The frequency of alternation is a hyper-parameter Figure 2: MTL RNN structure with detailed input and output description [2] Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors): glove.42B.300d.zip from http://nlp.stanford.edu/projects/glove/ Page 4. ...","url":["https://web.stanford.edu/class/cs221/2017/restricted/p-final/dzliu/final.pdf"]}
{"year":"2017","title":"UWat-Emote at EmoInt-2017: Emotion Intensity Detection using Affect Clues, Sentiment Polarity and Word Embeddings","authors":["V John, O Vechtomova - EMNLP 2017, 2017"],"snippet":"... GloVe Model-Tweets (GV-T), Wikipedia + Gigaword (GV-WG), Common Crawl 42B tokens (GV-CC1), Common Crawl 840B tokens (GV-CC2): GloVe is similar to Word2Vec, in that it obtains dense vector representations of words. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-52.pdf#page=265"]}
{"year":"2017","title":"Variable length word encodings for neural translation models","authors":["J Gao"],"snippet":"Page 1. Variable length word encodings for neural translation models Jiameng Gao Department of Engineering University of Cambridge This dissertation is submitted for the degree of Master of Philosophy Peterhouse August 11, 2016 Page 2. Page 3. Page 4. Page 5. ...","url":["http://www.mlsalt.eng.cam.ac.uk/foswiki/pub/Main/CurrentMPhils/Jiameng_Gao_8224881_assignsubmission_file_J_Gao_MPhil_dissertation.pdf"]}
{"year":"2017","title":"VecShare: A Framework for Sharing Word Representation Vectors","authors":["J Fernandez, Z Yu, D Downey"],"snippet":"... we utilize three sets of GloVe embeddings (Pennington et al., 2014): wik+, 100-dimensional embeddings trained on six billion tokens of Wikipedia and the Gigaword corpus; web, 300-dimensional embeddings trained on 42 billion tokens of the Common Crawl Web dataset ...","url":["http://www.cs.northwestern.edu/~ddowney/publications/vecshare_fernandez_2017.pdf"]}
{"year":"2017","title":"Vector Space Representations in Information Retrieval","authors":["V Novotný"],"snippet":"Page 1. Masaryk University Faculty of Informatics Vector Space Representations in Information Retrieval Master's Thesis Vít Novotný Brno, Fall 2017 Page 2. Page 3. Masaryk University Faculty of Informatics Vector Space Representations in Information Retrieval Master's Thesis …","url":["https://is.muni.cz/th/409729/fi_m/main.pdf"]}
{"year":"2017","title":"Visual Exploration of High-Dimensional Spaces Through Identification, Summarization, and Interpretation of Two-Dimensional Projections","authors":["S Liu - 2017"],"snippet":"Visual Exploration of High-Dimensional Spaces Through Identification, Summarization, and Interpretation of Two-Dimensional Projections. Abstract. With the ever-increasing amount of available computing resources and sensing ...","url":["http://search.proquest.com/openview/521292ce267e4e2b78aa24b8452c5a8d/1?pq-origsite=gscholar&cbl=18750&diss=y"]}
{"year":"2017","title":"Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation","authors":["JB Delbrouck, S Dupont, O Seddati - arXiv preprint arXiv:1707.01009, 2017"],"snippet":"... to. As previously mentioned, the textual representation lwi is obtained with the word embeddings algorithm Glove. We use the pre-trained model on the Common Crawl corpus consisting of 840B tokens and a 2.2M words. The ...","url":["https://arxiv.org/pdf/1707.01009"]}
{"year":"2017","title":"Web-scale profiling of semantic annotations in HTML pages","authors":["R Meusel - 2017"],"snippet":"... In 2012, the Common Crawl Foundation (CC)8 started to continuously release crawled web corpora of a decent size and made them publicly available. Each of the corpora contains several tera-bytes of compressed HTML pages. ...","url":["https://ub-madoc.bib.uni-mannheim.de/41884/1/thesis_final_rm_20170322-1.pdf"]}
{"year":"2017","title":"Web-Scale Web Table to Knowledge Base Matching","authors":["D Ritze - 2017"],"snippet":"… 43 4.2.1 Common Crawl … Page 20. 12 CHAPTER 1. INTRODUCTION 1.4 Published Work Parts of the work presented in this thesis have been published previously: • The extraction of the WDC Web Table Corpus from the Common Crawl …","url":["https://ub-madoc.bib.uni-mannheim.de/43123/1/thesis.pdf"]}
{"year":"2017","title":"What's good for the goose is good for the GANder","authors":["C Hung, B Corcoran"],"snippet":"... To reduce the percentage of un- known words, we additionally brought down the size of our vocabulary to contain only the 10k most commonly used words in the training set; and used GloVe vectors (Pennington et al., 2014), pretrained on Common Crawl (having around 42B ...","url":["https://web.stanford.edu/class/cs224n/reports/2761035.pdf"]}
{"year":"2017","title":"Word Embeddings for Practical Information Retrieval","authors":["L Galke, A Saleh, A Scherp - INFORMATIK 2017, 2017"],"snippet":"... 2 zbw.eu/stw 3 A dataset of crawled web data from https://commoncrawl.org/ Word Embeddings for Similarity Scoring in Practical Information Retrieval 2161 Page 8. i “proceedings” — 2017/8/2412:20 — page 2162 — #2162 i i i ...","url":["https://dl.gi.de/bitstream/handle/20.500.12116/3987/B29-2.pdf?sequence=1"]}
{"year":"2017","title":"Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes","authors":["N Garg, L Schiebinger, D Jurafsky, J Zou - arXiv preprint arXiv:1711.08412, 2017"],"snippet":"… nearly identical correlation. We further validate this association using different embeddings trained on Wikipedia and Common Crawl texts instead of Google News; see Appendix Section B.1 for details. Google News embedding …","url":["https://arxiv.org/pdf/1711.08412"]}
{"year":"2017","title":"Word Re-Embedding via Manifold Dimensionality Retention","authors":["S Hasan, E Curry - Proceedings of the 2017 Conference on Empirical …, 2017"],"snippet":"... Original Embedding Spaces. The original word embeddings used are pre-trained GloVe models: Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, 50d, 100d, 200d, & 300d vectors), and Common Crawl (42B tokens, 1.9M vocab, 300d vectors) (Pennington et al., 2014b). ...","url":["http://www.aclweb.org/anthology/D17-1033"]}
{"year":"2017","title":"Word vectors, reuse, and replicability: Towards a community repository of large-text resources","authors":["M Fares, A Kutuzov, S Oepen, E Velldal"],"snippet":"... Moreover, with an ac- curacy of 83.08 for the semantic analogies, the GloVe model trained on the lemmatized version of Wikipedia outperforms the GloVe model trained on 42 billion tokens of web data from the Common Crawl reported in (Pennington et al., 2014), which at an ...","url":["http://www.ep.liu.se/ecp/131/037/ecp17131037.pdf"]}