{"year":"2017","title":"A Web Corpus for eCare: Collection, Lay Annotation and Learning - First Results","authors":["M Santini"],"snippet":"... web as a corpus 'wacky' Lang Resources & Evaluation 2017 51 9 See http://commoncrawl.org/the ...","url":["https://www.researchgate.net/profile/Marina_Santini/publication/318379265_A_Web_Corpus_for_eCare_Collection_Lay_Annotation_and_Learning_-First_Results-/links/596650de0f7e9b80917fea3e/A-Web-Corpus-for-eCare-Collection-Lay-Annotation-and-Learning-First-Results.pdf"]}
{"year":"2017","title":"A Web Page Distillation Strategy for Efficient Focused Crawling Based on Optimized Naïve Bayes (ONB) Classifier","authors":["AI Saleh, AE Abulwafa, MF Al Rahmawy - Applied Soft Computing, 2017"],"snippet":"The target of a focused crawler (FC) is to retrieve pages related to a specific domain of interest (DOI). However, FCs may be hasted if bad links were injected.","url":["http://www.sciencedirect.com/science/article/pii/S1568494616306536"]}
{"year":"2017","title":"Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks","authors":["M Hutchinson","W Foland, JH Martin - Proceedings of the 55th Annual Meeting of the …, 2017"],"snippet":"... The use of distributed word representations generated from large text corpora is pervasive in modern NLP. We start with 300 dimension GloVe representations (Pennington et al., 2014) trained on the 840 billion word common crawl (Smith et al., 2013). ...","url":["http://www.aclweb.org/anthology/P17-1043","https://zdoc.pub/abstract-meaning-representation-parsing-using-lstm-recurrent.html"]}
{"year":"2017","title":"Accelerating Innovation Through Analogy Mining","authors":["T Hope, J Chan, A Kittur, D Shahaf - arXiv preprint arXiv:1706.05585, 2017"],"snippet":"... In more formal terms, let wi = (w1 i ,w2 i ,...,wT i) be the sequence of GloVe [27] word vectors (pre-trained on Common Crawl web data), representing (x1 i ,x2 i ,...,xT i ). We select all xi word vectors for which ˜p j ik = 1(˜m j ik = 1) for some k, and concatenate them into one ...","url":["https://arxiv.org/pdf/1706.05585"]}
{"year":"2017","title":"Accurate Sentence Matching with Hybrid Siamese Networks","authors":["M Nicosia, A Moschitti - Proceedings of the 2017 ACM on Conference on …, 2017"],"snippet":"… Their training split contains 384,348 pairs, and the balanced development and test sets contain 10,000 pairs each. The embeddings are a subset of the 300-dimensional GloVe word vectors pretrained on the Common Crawl corpus, 3 covering the Quora dataset vocabulary …","url":["http://dl.acm.org/citation.cfm?id=3133156"]}
{"year":"2017","title":"Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates","authors":["G Collell, L Van Gool, MF Moens - arXiv preprint arXiv:1711.06821, 2017"],"snippet":"… 4.5 Word embeddings We use 300-dimensional GloVe word embeddings (Pennington, Socher, and Manning 2014) pre-trained on the Common Crawl corpus (consisting of 840B-tokens), which we obtain from the authors' website.8 …","url":["https://arxiv.org/pdf/1711.06821"]}
{"year":"2017","title":"Adaptation and Combination of NMT Systems: The KIT Translation Systems for IWSLT 2016","authors":["E Cho, J Niehues, TL Ha, M Sperber, M Mediani… - Proceedings of the 13th …, 2016"],"snippet":"... to 1.0. We use a beam search for decoding, with the beam size of 12. The baseline systems were trained on the WMT parallel data. For both languages, this consists of the EPPS, NC, CommonCrawl corpus. In addition, we ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_17.pdf"]} |
{"year":"2017","title":"Adapting Sequence Models for Sentence Correction","authors":["A Schmaltz, Y Kim, AM Rush, SM Shieber - arXiv preprint arXiv:1707.09067, 2017"],"snippet":"... provided access to SRILM (Stolcke, 2002) for running Junczys-Dowmunt and Grundkiewicz (2016) 7We found that including the features and data associated with the large language models of Junczys-Dowmunt and Grundkiewicz (2016), created from Common Crawl text ...","url":["https://arxiv.org/pdf/1707.09067"]} |
{"year":"2017","title":"Adversarial Training for Cross-Domain Universal Dependency Parsing","authors":["M Sato, H Manabe, H Noji, Y Matsumoto"],"snippet":"... initialized POS tag embeddings. For the model 2The pre-trained word embeddings are provided by the CoNLL 2017 Shared Task organizers. These are trained with CommonCrawl and Wikipedia. with adversarial training, we ...","url":["http://universaldependencies.org/conll17/proceedings/pdf/K17-3007.pdf"]} |
{"year":"2017","title":"Agree to Disagree: Improving Disagreement Detection with Dual GRUs","authors":["S Hiray, V Duppada"],"snippet":"... NLP tasks [24] [25]. For this task of (dis)agreement classification, we use GloVe embeddings of 300 dimensions trained on Common Crawl with 840 billion tokens, 2.2 million vocabulary. Page 3. 4.2. Lexicons We used affect, sentiment ...","url":["https://www.deepaffects.com/s/agree-to-disagree.pdf"]} |
{"year":"2017","title":"All-but-the-Top: Simple and Effective Postprocessing for Word Representations","authors":["J Mu, S Bhat, P Viswanath - arXiv preprint arXiv:1702.01417, 2017"],"snippet":"... We test our observations on various word representations: four publicly available word representations (WORD2VEC1 (Mikolov et al., 2013) trained using Google News, GLOVE2 (Pennington et al., 2014) trained using Common Crawl, RAND-WALK (Arora et al., 2016 ...","url":["https://arxiv.org/pdf/1702.01417"]} |
{"year":"2017","title":"An Empirical Analysis of NMT-Derived Interlingual Embeddings and their Use in Parallel Sentence Identification","authors":["C España-Bonet, ÁC Varga, A Barrón-Cedeño… - arXiv preprint arXiv: …, 2017","J van Genabith, A Barron-Cedeno, C España-Bonet…"],"snippet":"... context vectors. The parallel corpus includes data from United Na- tions (Rafalovitch and Dale, 2009), Common Crawl2, News Commentary3 and IWSLT4. We train system S1-w after cleaning and tokenising the texts. We ...","url":["https://arxiv.org/pdf/1704.05415","https://deepai.org/publication/an-empirical-analysis-of-nmt-derived-interlingual-embeddings-and-their-use-in-parallel-sentence-identification"]} |
{"year":"2017","title":"An End-to-End Neural Architecture for Reading Comprehension","authors":["M Burkle, M Camacho, N Danyliw"],"snippet":"... referencing a fixed sized vocabulary using GloVe word embeddings from the Wikipedia 6B word dataset or the CommonCrawl 840B word ... Wikipedia corpus of ∼6 billion words, we moved to the 300-dimensional word embeddings trained on the Common Crawl vocabulary of ...","url":["http://web.stanford.edu/class/cs224n/reports/2761845.pdf"]} |
{"year":"2017","title":"An In-Depth Experimental Comparison of RNTNs and CNNs for Sentence Modeling","authors":["Z Ahmadi, M Skowron, A Stier, S Kramer"],"snippet":"... On other datasets, we use the model trained on the web data from Common Crawl which contains a case-sensitive vocabulary of size 2.2 million. Experiments show that RNTNs work best when the word vector dimension is set between 25 and 35 [11]. ...","url":["http://www.ofai.at/~marcin.skowron/papers/DS2017.pdf"]} |
{"year":"2017","title":"An overview of Lithuanian Internet media n-gram corpus","authors":["I Bumbuliene, L Boizou, J Mandravickaite, T Krilavicius - 2017"],"snippet":"... 19(1), pp. 61-93, 2013. [2] C. Buck, K. Heafield, B. Van Ooyen, “N-gram counts and language models from the common crawl,” in LREC, vol. 2. Citeseer, p. 4, 2014. [3] A. Pauls, D. Klein, “Faster and smaller n-gram language models,” in Proc. ...","url":["http://ceur-ws.org/Vol-1853/p05.pdf"]} |
{"year":"2017","title":"Analogy Mining for Specific Design Needs","authors":["K Gilon, FY Ng, J Chan, HL Assaf, A Kittur, D Shahaf - arXiv preprint arXiv …, 2017"],"snippet":"… We use Glove pre-trained on the Common Crawl dataset (840B tokens, 300d vectors)1. We then normalize each document vector, and calculate cosine similarity (which is the same as Euclidean distance in this case) between the resulting vectors for each seed and all other …","url":["https://arxiv.org/pdf/1712.06880"]} |
{"year":"2017","title":"Analysing and Improving embedded Markup of Learning Resources on the Web","authors":["S Dietze, D Taibi, R Yu, P Barker, M d'Aquin - 2017"],"snippet":"... 6 http://commoncrawl.org/ 7 http://grouper.ieee.org/groups/ltsc/wg12/20020612-Final-LOMDraft.html 8 https://www.imsglobal.org ... The Web Data Commons [1], a recent initiative investigating the Common Crawl, ie a Web crawl of approximately 2 billion HTML pages from over ...","url":["https://www.researchgate.net/profile/Stefan_Dietze/publication/313964715_Analysing_and_Improving_embedded_Markup_of_Learning_Resources_on_the_Web/links/58b05d1545851503be97ddfc/Analysing-and-Improving-embedded-Markup-of-Learning-Resources-on-the-Web.pdf"]}
{"year":"2017","title":"Analysis of semantic URLs to support automated linking of structured data on the web","authors":["S Lynden - Proceedings of the 7th International Conference on …, 2017"],"snippet":"... The Web Data Commons [13] effort to study the evolution of structured data on the web analyse the Common Crawl Web Corpus annually, most recently finding that about 38% of web pages contain some form of structured data. ...","url":["http://dl.acm.org/citation.cfm?id=3102265"]}
{"year":"2017","title":"Analyzing Movie Reviews Sentiment","authors":["D Sarkar, R Bali, T Sharma - Practical Machine Learning with Python, 2018"],"snippet":"In this chapter, we continue with our focus on case-study oriented chapters, where we will focus on specific real-world problems and scenarios and how we can use Machine Learning to solve them. We wil.","url":["https://link.springer.com/chapter/10.1007/978-1-4842-3207-1_7"]}
{"year":"2017","title":"Analyzing Neural MT Search and Model Performance","authors":["J Niehues, E Cho, TL Ha, A Waibel - ACL 2017, 2017"],"snippet":"... For the single models, we apply the early stopping based on the validation score. The baseline system is trained on the WMT parallel data, namely EPPS, NC, CommonCrawl and TED corpus. As validation data we used the newstest13 set from IWSLT evaluation campaign. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-32.pdf#page=23"]}
{"year":"2017","title":"Analyzing the compositional properties of word embeddings","authors":["T Scheepers, E Gavves, E Kanoulas"],"snippet":"... 3For GloVe we used the representations from the Common Crawl which has 840B tokens and a vocabulary of 2.2M. ... trained on used news data, where fastText and GloVe use more definitional data, Wikipedia and Common Crawl respectively. ...","url":["https://thijs.ai/papers/scheepers-gavves-kanoulas-analyzing-compositional-properties.pdf"]}
{"year":"2017","title":"Any-gram Kernels for Sentence Classification: A Sentiment Analysis Case Study","authors":["R Kaljahi, J Foster - arXiv preprint arXiv:1712.07004, 2017"],"snippet":"… We use cosine similarity for word embedding similarities and the GloVe (Pennington et al., 2014) Common Crawl (1.9M vocabulary) word embeddings with a dimensionality of 300.6 … (2014). The GloVe Common Crawl vectors, however, performed better. Page 7 …","url":["https://arxiv.org/pdf/1712.07004"]}
{"year":"2017","title":"AraVec: A set of Arabic Word Embedding Models for use in Arabic NLP","authors":["AB Soliman, K Eissa, SR El-Beltagy - Linguistics, 2017"],"snippet":"... Here it is important to note that the Common Crawl project does not provide any technique for identifying or selecting the language of web pages to ... 5 http://www.internetworldstats.com/stats19. htm 6 http://www.internetworldstats.com/stats5.htm 7 http://commoncrawl.org 8 https ...","url":["https://www.researchgate.net/profile/Samhaa_El-Beltagy2/publication/319880027_AraVec_A_set_of_Arabic_Word_Embedding_Models_for_use_in_Arabic_NLP/links/59bfef730f7e9b48a29ba3a8/AraVec-A-set-of-Arabic-Word-Embedding-Models-for-use-in-Arabic-NLP.pdf"]}
{"year":"2017","title":"Architecting for Performance Clarity in Data Analytics Frameworks","authors":["K Ousterhout - 2017"],"snippet":"Page 1. Architecting for Performance Clarity in Data Analytics Frameworks Kay Ousterhout Electrical Engineering and Computer Sciences University of California at Berkeley Technical Report No. UCB/EECS-2017-158 http://www2 ...","url":["https://www2.eecs.berkeley.edu/Pubs/TechRpts/2017/EECS-2017-158.pdf"]}
{"year":"2017","title":"Archival Crawlers and JavaScript: Discover More Stuff but Crawl More Slowly","authors":["JF Brunelle, MC Weigle, ML Nelson - Digital Libraries (JCDL), 2017 ACM/IEEE Joint …, 2017"],"snippet":"... If our method was applied to the July 2015 Common Crawl dataset, a web-scale archival crawler will discover an additional 7.17 PB (5.12 times more) of information per year. This illustrates the significant increase in resources necessary for more thorough archival crawls. ...","url":["http://ieeexplore.ieee.org/abstract/document/7991554/"]}
{"year":"2017","title":"Are distributional representations ready for the real world? Evaluating word vectors for grounded perceptual meaning","authors":["L Lucy, J Gauthier - arXiv preprint arXiv:1705.11168, 2017"],"snippet":"... attributive features. Collell and Moens (2016) find that word representations fail to pre- # word tokens # word types GloVe (Common Crawl) 840B 2.2M GloVe (Wiki+Gigaword) 6B 400K word2vec 100B 3M Table 1: Statistics ...","url":["https://arxiv.org/pdf/1705.11168"]}
{"year":"2017","title":"Assessing Convincingness of Arguments in Online Debates with Limited Number of Features","authors":["LA Chalaguine, C Schulz"],"snippet":"... 6http://nlp.stanford.edu/projects/glove/ 7http://commoncrawl.org/ 8because including stems, lemmas or both had no impact on the results we included stems only in our “top feature set” because they are less expensive to compute 80 Page 7. ture resulted in 66% in our case. ...","url":["https://www.aclweb.org/anthology/E/E17/E17-4008.pdf"]}
{"year":"2017","title":"ATOL: A Framework for Automated Analysis and Categorization of the Darkweb Ecosystem","authors":["SGPPV Yegneswaran, KNA Das - 2017"],"snippet":"... 2016), and an open repository of (non-onion) Web crawling data, called Common Crawl (Common Crawl Foundation 2016). Using these data sources as starting points, we developed tools to acquire additional onion addresses both from the onion Web and the open Web. ...","url":["http://www.csl.sri.com/users/shalini/atol_aics17_cameraready.pdf"]}
{"year":"2017","title":"Attention-based Dialog Embedding for Dialog Breakdown Detection","authors":["C Park, K Kim, S Kim"],"snippet":"… sentence. We used GloVe vectors of dimension 100 trained by the Twitter data. We used one from Twitter data rather than Common Crawl data be- cause it is more closely related to the general chat domain of our task. After …","url":["http://workshop.colips.org/dstc6/papers/track3_paper14_park.pdf"]}
{"year":"2017","title":"Attributes2Classname: A discriminative model for attribute-based unsupervised zero-shot learning","authors":["B Demirel, RG Cinbis, NI Cinbis - arXiv preprint arXiv:1705.01734, 2017"],"snippet":"... For each class and attribute name, we generate a 300-dimensional word embedding vector using GloVe [26] based on Common Crawl Data2 ... 2http://commoncrawl.org/the-data/ 3http://nlp.stanford. edu/projects/glove/ 4We will release our code and models upon publication. ...","url":["https://arxiv.org/pdf/1705.01734"]}
{"year":"2017","title":"Automated Categorization of Onion Sites for Analyzing the Darkweb Ecosystem","authors":["S Ghosh, A Das, P Porras, V Yegneswaran, A Gehani - 2017"],"snippet":"... Our sources of seed data include various published onion datasets( [32], [5], [25], [22]), .onion references from a large collection of recursive DNS resolvers [17], and an open repository of (non-onion) web crawling data, called Common Crawl [11]. ...","url":["http://www.csl.sri.com/users/gehani/papers/KDD-2017.Onions.pdf"]}
{"year":"2017","title":"Automatic Learning Content Sequence via Linked Open Data","authors":["R Manrique"],"snippet":"... in [14, 13]. We also plan to use a recent release dataset [5] that contains all embedded Learning Resource Metadata Initiative (LRMI)4 markup statements extracted from the Common Crawl releases 2013-2015. Each entity description ...","url":["https://iswc2017.ai.wu.ac.at/wp-content/uploads/papers/DC/paper_19.pdf"]}
{"year":"2017","title":"Automatic Threshold Detection for Data Selection in Machine Translation","authors":["MS Duma, W Menzel - WMT 2017, 2017"],"snippet":"... The in-domain corpora were made available by the competition and the general domain corpora we have chosen to select data from are the Wikipedia corpora (Wolk and Marasek, 2014) and the Commoncrawl corpora1. Experiments 1http://commoncrawl. org/ 483 Page 508. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=507"]}
{"year":"2017","title":"Better Text Understanding Through Image-To-Text Transfer","authors":["K Kurach, S Gelly, M Jastrzebski, P Haeusser… - arXiv preprint arXiv: …, 2017"],"snippet":"... We used word embeddings obtained from three methods: • Glove: embeddings proposed in [20], trained on a Common Crawl dataset with 840 billion tokens. • M-Skip-Gram: embeddings proposed in [12], trained on Wikipedia and a set of images from ImageNet. ...","url":["https://arxiv.org/pdf/1705.08386"]}
{"year":"2017","title":"Biasing Attention-Based Recurrent Neural Networks Using External Alignment Information","authors":["T Alkhouli, H Ney"],"snippet":"... (1). We use the full bilingual data of the English→Romanian task. For the German→English task, we choose the common crawl, news commentary and European parliament bilingual data. ... This is to remove noisy sentence pairs that are frequent in the common crawl corpus. ...","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1036/Alkhouli-WMT%202017-2017.pdf"]}
{"year":"2017","title":"Big Data","authors":["R Womack"],"snippet":"Page 1. Topics in Data Science / Өгөгдлийн шинжлэх ухаан Rutgers University has made this article freely available. Please share how this access benefits you. Your story matters. [https://rucore.libraries.rutgers.edu/rutgers-lib/52378/story/] …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/52378/PDF/1/play/"]}
{"year":"2017","title":"Big Data: A Very Short Introduction","authors":["DE Holmes - 2017"]}
{"year":"2017","title":"Bilateral Multi-Perspective Matching for Natural Language Sentences","authors":["Z Wang, W Hamza, R Florian - arXiv preprint arXiv:1702.03814, 2017"],"snippet":"... 4.5. 4.1 Experiment Settings We initialize the word embeddings in the word representation layer with the 300-dimensional GloVe word vectors pretrained from the 840B Common Crawl corpus [Pennington et al., 2014]. For ...","url":["https://arxiv.org/pdf/1702.03814"]}
{"year":"2017","title":"Bilingual Word Embeddings for Bilingual Terminology Extraction from Specialized Comparable Corpora","authors":["A Hazem, E Morin - Proceedings of the Eighth International Joint …, 2017"],"snippet":"… and economic commentary crawled from the web (NC), Europarl corpus is a parallel corpus extracted from the proceedings of the European Parliament (EP7), JRC acquis corpus is a collection of legislative European Union documents (JRC) and Common Crawl corpus (CC …","url":["http://www.aclweb.org/anthology/I17-1069"]}
{"year":"2017","title":"BLEU2VEC: the Painfully Familiar Metric on Continuous Vector Space Steroids","authors":["A Tättar, M Fishel - WMT 2017, 2017"],"snippet":"... modifications. data from the WMT'2017 news translation shared task: we took a random 50 million sentences from the News Crawl corpora for each language (ex- cept Chinese, where we used a portion of Common Crawl). While ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=643"]}
{"year":"2017","title":"Bloom Filters for ReduceBy, GroupBy and Join in Thrill","authors":["A Noe, DIDMT Bingmann - 2017"],"snippet":"Page 1. Master thesis Bloom Filters for ReduceBy, GroupBy and Join in Thrill Alexander Noe Date: 12. January 2017 Supervisors: Prof. Dr. Peter Sanders Dipl. Inform. Dipl. Math. Timo Bingmann Institute of Theoretical Informatics, Algorithmics Department of Informatics ...","url":["https://pdfs.semanticscholar.org/bf34/8f2819a740ba3a473314b4eab616c421c9e1.pdf"]} |
{"year":"2017","title":"Bootstrapping Chatbots for Novel Domains","authors":["P Babkin, MFM Chowdhury, A Gliozzo, M Hirzel… - Workshop at NIPS on …, 2017"],"snippet":"… between the corresponding dense vectors. We used 300-dimensional vectors pre-trained with the GloVe algorithm [19] on the Common Crawl corpus that come with the gensim Python library [20]. By applying this model, for …","url":["https://www.researchgate.net/profile/Avraham_Shinnar/publication/321664993_Bootstrapping_Chatbots_for_Novel_Domains/links/5a29ff24a6fdccfbbf81994a/Bootstrapping-Chatbots-for-Novel-Domains.pdf"]} |
{"year":"2017","title":"Building a Web-Scale Dependency-Parsed Corpus from CommonCrawl","authors":["A Panchenko, E Ruppert, S Faralli, SP Ponzetto… - arXiv preprint arXiv: …, 2017"],"snippet":"Abstract: We present DepCC, the largest to date linguistically analyzed corpus in English including 365 million documents, composed of 252 billion tokens and 7.5 billion of named entity occurrences in 14.3 billion sentences from a web-scale crawl of the CommonCrawl","url":["https://arxiv.org/pdf/1710.01779"]} |
{"year":"2017","title":"Building Lexical Vector Representations from Concept Definitions","authors":["DS Carvalho, M Le Nguyen"],"snippet":"... This parameter was adjusted using the training set for MEN, or inside each CV fold for the rest. • Both Word2Vec and GloVe were used with pre-trained, 300-dimensional models: 100 billion words GoogleNews corpus and Common Crawl 42 billion token corpus respectively. ...","url":["http://www.aclweb.org/anthology/E/E17/E17-1085.pdf"]} |
{"year":"2017","title":"Byte-based Neural Machine Translation","authors":["MR Costa-jussà, C Escolano, JAR Fonollosa - Proceedings of the First Workshop on …, 2017"],"snippet":"... English. For the three language pairs, we used all data parallel data provided in the evaluation. For German-English, we used: europarl v.7, news commentary v.12, common crawl and rapid corpus of EU press re- leases. For ...","url":["http://www.aclweb.org/anthology/W17-4123"]} |
{"year":"2017","title":"Can word vectors help corpus linguists?","authors":["G Desagulier - 2017"],"snippet":"… If we follow the distributional hypothesis, this means that the words have similar meanings. The quality of the vector representation is a function of the number 2It is sampled from a matrix of vectors obtained with GloVe (see below) on the basis of the Common Crawl dataset. 6 …","url":["https://halshs.archives-ouvertes.fr/halshs-01657591/document"]} |
{"year":"2017","title":"Characterisation of mental health conditions in social media using Informed Deep Learning","authors":["G Gkotsis, A Oellrich, S Velupillai, M Liakata… - Scientific Reports, 2017"],"snippet":"... We considered pre-trained word vectors as input to the classifiers (eg using Glove's Common Crawl containing 840 Billion tokens 17 ), but the results did not improve. We attribute this to the size of our dataset which is adequate for representing the language within the corpus. ...","url":["http://www.nature.com/srep/2017/170322/srep45141/full/srep45141.html"]}
{"year":"2017","title":"Classification of keywords","authors":["I Prémont-schwarz, A Thakur, M Tober - US Patent 9,798,820, 2017"],"snippet":"… The resource contents module 320 may automatically acquire a plurality of resources. The resources may, for example, be Wikipedia articles and be acquired from Wikipedia.org, or an open repository of web crawl data such as CommonCrawl.org …","url":["http://www.freepatentsonline.com/9798820.html"]}
{"year":"2017","title":"Classification of search queries","authors":["A Thakur, M Tober - US Patent 9,767,182, 2017"],"snippet":"... The resource contents module 320 may automatically acquire a plurality of resources. The resources may, for example, be Wikipedia articles and be acquired from Wikipedia.org, or an open repository of web crawl data such as CommonCrawl.org. ...","url":["http://www.freepatentsonline.com/9767182.html"]}
{"year":"2017","title":"Classifier Stacking for Native Language Identification","authors":["W Li, L Zou - Bronze Sponsors, 2017"],"snippet":"... Word embeddings We use the Common Crawl (42B tokens, 1.9 M vocab, uncased, 300d vectors) in GloVe (global vectors for word representation)(Pennington et al., 2014) to produce feature vectors for each essay, with the help of Gensim (Řehůřek and Sojka, 2010). ...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=410"]}
{"year":"2017","title":"Classifying Phishing URLs Using Recurrent Neural Networks","authors":["AC Bahnsen, EC Bohorquez, S Villegas, J Vargas…"],"snippet":"... Sections 1PhishTank (https://www.phishtank.com/) 2Common Crawl (http://commoncrawl.org/) 978-1-5386-2701-3/17/$31.00 c 2017 IEEE Page 2. ... Half of them legitimate and half of them phishing. The legitimate URLs came from Common Crawl, a corpus of web crawl data. ...","url":["http://albahnsen.com/files/Classifying%20Phishing%20URLs%20Using%20Recurrent%20Neural%20Networks_cameraready.pdf"]}
{"year":"2017","title":"Cloud Computing Infrastructure for Data Intensive Applications","authors":["Y Demchenko, F Turkmen, C de Laat, CH Hsu… - Big Data Analytics for …, 2017"],"snippet":"This chapter describes the general architecture and functional components of the cloud-based big data infrastructure (BDI). The chapter starts with the analysis.","url":["https://www.sciencedirect.com/science/article/pii/B9780128093931000027"]}
{"year":"2017","title":"Coattention-Based Neural Network for Question Answering","authors":["J Andress, C Zanoci"],"snippet":"... We then proceed by embedding each word using the GloVe word vectors pretrained on the 840B Common Crawl corpus [6]. We found that switching from the default 100dimensional GloVe vectors to the larger 300-dimensional representation improved the performance ...","url":["https://web.stanford.edu/class/cs224n/reports/2762015.pdf"]}
{"year":"2017","title":"Common Crawl Mining","authors":["T Dean, A Pasha, B Clarke, CJ Butenhoff - 2017"],"snippet":"The main goal behind the Common Crawl Mining system is to improve Eastman Chemical Company's ability to use timely knowledge of public concerns to inform key business decisions. It provides information to Eastman Chemical Company that is valuable for","url":["https://vtechworks.lib.vt.edu/bitstream/handle/10919/77629/ccm_source_code.zip?sequence=5&isAllowed=y"]}
{"year":"2017","title":"Common Crawled Web Corpora: Constructing corpora from large amounts of web data","authors":["KB Kristoffersen - 2017"],"snippet":"… Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens … 3 Exploring the Common Crawl 27 3.1 The data . . . . . 27 3.1.1 A note on scale …","url":["https://www.duo.uio.no/bitstream/handle/10852/57836/Kristoffersen_MSc2.pdf?sequence=5"]} |
{"year":"2017","title":"Composition of Compound Nouns Using Distributional Semantics","authors":["K Yee, J Kalita"],"snippet":"... word2vec 300 3,000,000 100.00 bn Google News GloVe 300 400,000 42.00 bn Common Crawl HPCA 200 178,080 1.65 bn enWiki+Reuters +WSJ CW 50 130,000 0.85 bn enWiki+Reuters RCV1 word2vec 500 30,025 100 mn BNC word2vec 500 19,679 120 mn esWiki ...","url":["http://www.cs.uccs.edu/~jkalita/papers/2016/KyraYeeICON2016.pdf"]} |
{"year":"2017","title":"Compressed Nonparametric Language Modelling","authors":["E Shareghi, G Haffari, T Cohn"],"snippet":"Page 1. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) 2701 Compressed Nonparametric Language Modelling Ehsan Shareghi,♣ Gholamreza Haffari,♣ Trevor Cohn♠ ♣ Faculty ...","url":["http://static.ijcai.org/proceedings-2017/0376.pdf"]} |
{"year":"2017","title":"COMPRESSING WORD EMBEDDINGS VIA DEEP COMPOSITIONAL CODE LEARNING","authors":["R Shu, H Nakayama - arXiv preprint arXiv:1711.01068, 2017","RSH Nakayama"],"snippet":"… purpose. We lowercase and tokenize all texts with the nltk package. We choose the 300-dimensional uncased GloVe word vectors (trained on 42B tokens of Common Crawl data) as our baseline embeddings. The vocabulary …","url":["https://arxiv.org/pdf/1711.01068","https://pdfs.semanticscholar.org/1713/d05f9d5861cac4d5ec73151667cb03a42bfc.pdf"]} |
{"year":"2017","title":"Compression with the tudocomp Framework","authors":["P Dinklage, J Fischer, D Köppl, M Löbel, K Sadakane - arXiv preprint arXiv: …, 2017"],"snippet":"Page 1. Compression with the tudocomp Framework Patrick Dinklage1, Johannes Fischer1, Dominik Köppl1, Marvin Löbel1, and Kunihiko Sadakane2 1 Department of Computer Science, TU Dortmund, Germany, pdinklag@gmail ...","url":["https://arxiv.org/pdf/1702.07577"]} |
{"year":"2017","title":"Concept/Theme Roll-Up","authors":["T Sahay, R Tadishetti, A Mehta, S Jadon - 2017"],"snippet":"... representation. For word embeddings, we used GloVe trained on a common crawl corpus, containing 1900000 words in its vocabulary. ... phrases. For words, the weights were initialized with GloVe embeddings trained on the common-crawl corpus. ...","url":["https://people.cs.umass.edu/~tsahay/lexalytics_report.pdf"]} |
{"year":"2017","title":"ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge","authors":["R Speer, J Lowry-Duda - arXiv preprint arXiv:1704.03560, 2017"],"snippet":"... The first source is the word2vec Google News embeddings2, and the second is the GloVe 1.2 embeddings that were trained on 840 billion tokens of the Common Crawl3. Because the input embeddings are only in En- glish, the vectors in other languages depended en- tirely on ...","url":["https://arxiv.org/pdf/1704.03560"]} |
{"year":"2017","title":"CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies","authors":["D Zeman, M Popel, M Straka, J Hajic, J Nivre, F Ginter… - Proceedings of the CoNLL …, 2017"],"snippet":"... Page 3. Raw texts The supporting raw data was gathered from CommonCrawl, which is a publicly available web crawl created and maintained by the non-profit CommonCrawl foundation.2 The data is publicly available in the Amazon cloud both as raw HTML and as plain text. ...","url":["http://www.aclweb.org/anthology/K17-3001"]} |
{"year":"2017","title":"Connecting the Dots: Towards Human-Level Grammatical Error Correction","authors":["S Chollampatt, HT Ng - Bronze Sponsors, 2017"],"snippet":"... Moreover, Junczys-Dowmunt and Grundkiewicz (2016) trained a web-scale language model (LM) using large corpora from the Common Crawl data (Buck et al., 2014). ... 2014. N-gram counts and language models from the Common Crawl...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=347"]} |
{"year":"2017","title":"Constructing and Evaluating a Novel Crowdsourcing-based Paraphrased Opinion Spam Dataset","authors":["S Kim, S Lee, D Park, J Kang - Proceedings of the 26th International Conference on …, 2017"],"snippet":"... the past and future context of an input are captured (Figure 3). We initialized the input word representations of our LSTM model using publicly available 300dimensional GloVe10 vectors (Pennington et al., 2014), which are trained on 840 billion tokens of Common Crawl data. ...","url":["http://dl.acm.org/citation.cfm?id=3052607"]} |
{"year":"2017","title":"Context Similarity for Retrieval-Based Imputation","authors":["A Ahmadov, M Thiele, W Lehner, R Wrembel"],"snippet":"... an accurate imputation. We use Dresden Web Table Corpus (DWTC) which is comprised of more than 125 million web tables extracted from the Common Crawl as our knowledge source. The comprehensive experimental ...","url":["http://asonamdata.com/ASONAM2017_Proceedings/papers/165_1017_135.pdf"]} |
{"year":"2017","title":"CONTEXT-AWARE CLUSTERING USING GLOVE AND K-MEANS","authors":["P Juneja, H Jain, T Deshmukh, S Somani, BK Tripathy"],"snippet":"... Gigaword5 + Page 7. International Journal of Software Engineering & Applications (IJSEA), Vol.8, No.4, July 2017 27 Wikipedia2014 which has 6 billion tokens, and on 42 billion tokens of web data, from Common Crawl. For ...","url":["https://www.researchgate.net/profile/BK_Tripathy/publication/318815506_Context_Aware_Clustering_Using_Glove_and_K-Means/links/598037d2458515687b4f9dfd/Context-Aware-Clustering-Using-Glove-and-K-Means.pdf"]} |
{"year":"2017","title":"Continuous Learning from Human Post-Edits for Neural Machine Translation","authors":["M Turchi, M Negri, MA Farajian, M Federico - The Prague Bulletin of Mathematical …, 2017"],"snippet":"... For training the En_De NMT system, we merged the Europarl v7 (Koehn, 2005) and Common Crawl datasets released for the translation task at the 2016 Workshop on Statistical Machine Translation (WMT'16 (Bojar, 2016)) and random sampled 3.5 million sentence pairs. ...","url":["https://www.degruyter.com/downloadpdf/j/pralin.2017.108.issue-1/pralin-2017-0023/pralin-2017-0023.xml"]}
{"year":"2017","title":"Convolutional Encoding in Bidirectional Attention Flow for Question Answering","authors":["DR Miller"],"snippet":"... Language Processing (EMNLP), pp. 1532–1543, 2014. [9] “Common Crawl.” https://commoncrawl.org/. [10] RK Srivastava, K. Greff, and J. Schmidhuber, “Highway networks,” arXiv preprint arXiv:1505.00387, 2015. [11] P. Rajpurkar, J ...","url":["http://web.stanford.edu/class/cs224n/reports/2762032.pdf"]} |

{"year":"2017","title":"Cost Weighting for Neural Machine Translation Domain Adaptation","authors":["B Chen, C Cherry, G Foster, S Larkin - ACL 2017, 2017"],"snippet":"... which contains 3003 sentence pairs. The training data contain 12 million sentence pairs, composed of various sub-domains, such as news commentary, Europarl, UN, common crawl web data, etc. In the corpus weighting adaptation ...","url":["http://www.aclweb.org/anthology/W/W17/W17-32.pdf#page=52"]} |

{"year":"2017","title":"Counterfactual Learning for Machine Translation: Degeneracies and Solutions","authors":["C Lawrence, P Gajane, S Riezler - arXiv preprint arXiv:1711.08621, 2017"],"snippet":"… signal. Experiments are conducted on two language pairs. The first is German-to-English and its baseline system is trained on the concatenation of the Europarl corpus, the Common Crawl corpus and the News corpus. The …","url":["https://arxiv.org/pdf/1711.08621"]} |

{"year":"2017","title":"Counterfactual learning from bandit feedback under deterministic logging: A case study in statistical machine translation","authors":["C Lawrence, A Sokolov, S Riezler - arXiv preprint arXiv:1707.09118, 2017"],"snippet":"... We conduct two SMT tasks with hypergraph re-decoding: The first is German-to-English and is trained using a concatenation of the Europarl corpus (Koehn, 2005), the Common Crawl corpus3 and the News Commentary corpus (Koehn and Schroeder, 2007). ...","url":["https://arxiv.org/pdf/1707.09118"]} |

{"year":"2017","title":"Critical review of various near-duplicate detection methods in web crawl and their prospective application in drug discovery","authors":["L Pamulaparty, CVG Rao, MS Rao - International Journal of Biomedical Engineering …, 2017"],"snippet":"... Smith, JR, Saint-Amand, H., Plamada, M. and Lopez, A. (2013) 'Dirt cheap web-scale parallel text from the common crawl', Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), Association for Computational Linguistics, Sofia ...","url":["http://www.inderscienceonline.com/doi/abs/10.1504/IJBET.2017.087723"]} |

{"year":"2017","title":"CS 224N Assignment 4: Question Answering on SQuAD","authors":["RA Ozturk, HA Inan, K Garbe"],"snippet":"... Fixing this problem partly by using all of the 400k words resulted in increased performance naturally, but there is still a performance gap between our dev performance and test performance. We should also note that 1 uses the Common Crawl dataset (2.2m words). ...","url":["https://web.stanford.edu/class/cs224n/reports/2761126.pdf"]} |

{"year":"2017","title":"CS224N Project: Natural Language Inference for Quora Dataset","authors":["KHK Yoo, MM Almajid, ZY Wong"],"snippet":"... Most of the results were obtained using the smaller vocabulary of 6 billion tokens obtained from Wikipedia and Gigaword 5, while there exists a Common Crawl version which has 840B tokens with an embedding size of 300. ...","url":["https://web.stanford.edu/class/cs224n/reports/2755939.pdf"]} |

{"year":"2017","title":"CUNI System for the WMT17 Multimodal Translation Task","authors":["J Helcl, J Libovický - arXiv preprint arXiv:1707.04550, 2017"],"snippet":"... By scoring the German part of several parallel corpora (EU Bookshop (Skadinš et al., 2014), News Commentary (Tiedemann, 2012) and CommonCrawl (Smith et al., 2013)), we were only able to retrieve a few hundreds of in-domain sentences. ...","url":["https://arxiv.org/pdf/1707.04550"]} |

{"year":"2017","title":"D1. 1: Report on Building Translation Systems for Public Health Domain","authors":["O Bojar, B Haddow, D Marecek, R Sudarikov… - 2017"],"snippet":"Page 1. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 644402. D1.1: Report on Building Translation Systems for Public Health Domain ...","url":["http://www.himl.eu/files/D1.1-report-on-building-translation-systems.pdf"]} |
|
{"year":"2017","title":"Data Integration for Open Data on the Web","authors":["S Neumaier, A Polleres, S Steyskal, J Umbrich"],"snippet":"... However, some Web crawls have been made openly available, such as the Common Crawl corpus which contains “petabytes of data collected over the last 7 years”10. ... 10http://commoncrawl.org/, last accessed 30/03/2017 Page 5. Table 1: Top-10 portals, ordered by datasets. ...","url":["https://aic.ai.wu.ac.at/~polleres/publications/neum-etal-RW2017.pdf"]} |
|
{"year":"2017","title":"Data Selection Strategies for Multi-Domain Sentiment Analysis","authors":["S Ruder, P Ghaffari, JG Breslin - arXiv preprint arXiv:1702.02426, 2017"],"snippet":"... a linear SVM classifier (Blitzer et al., 2006). We use GloVe vectors (Pennington et al., 2014) pre-trained on 42B tokens of the Common Crawl corpus7 for our word embeddings. For the auto-encoder representations, we use a ...","url":["https://arxiv.org/pdf/1702.02426"]} |
|
{"year":"2017","title":"DCN+: Mixed Objective and Deep Residual Coattention for Question Answering","authors":["C Xiong, V Zhong, R Socher - arXiv preprint arXiv:1711.00106, 2017"],"snippet":"... Manning et al., 2014). For word embeddings, we use GloVe embeddings pretrained on the 840B Common Crawl corpus (Pennington et al., 2014) as well as character ngram embeddings by Hashimoto et al. (2017). In addition, we ...","url":["https://arxiv.org/pdf/1711.00106"]} |
|
{"year":"2017","title":"Deep Almond: A Deep Learning-based Virtual Assistant","authors":["GCR Ramesh"],"snippet":"... Preprocessing was implemented in Java using CoreNLP [22]. We use the pretrained GloVe [23] vectors of size 300 trained on Common Crawl as our word vectors, and we do not train the word vectors. 5.1 Model Validation & Tuning ...","url":["https://web.stanford.edu/class/cs224n/reports/2748325.pdf"]} |
|
{"year":"2017","title":"Deep Learning for User Comment Moderation","authors":["J Pavlopoulos, P Malakasiotis, I Androutsopoulos - arXiv preprint arXiv:1705.09993, 2017"],"snippet":"... 11See https://nlp.stanford.edu/projects/ glove/. We use 'Common Crawl' (840B tokens). 12For Gazzetta, words encountered only once in the training set (G-TRAIN-L or G-TRAIN-S) are also treated as OOV. ta : accept threshold tr : reject threshold 0.0 1.0 reject gray accept ...","url":["https://arxiv.org/pdf/1705.09993"]} |
|
{"year":"2017","title":"Deep Neural Machine Translation with Linear Associative Unit","authors":["M Wang, Z Lu, J Zhou, Q Liu - arXiv preprint arXiv:1705.00861, 2017","MWZLJ Zhou, Q Liu"],"snippet":"... translation are presented in Table 2. We compare our NMT systems with various other systems including the winning system in WMT14 (Buck et al., 2014), a phrase-based system whose language models were trained on a huge monolingual text, the Common Crawl corpus. ...","url":["http://www.aclweb.org/anthology/P/P17/P17-1013.pdf","https://arxiv.org/pdf/1705.00861"]} |
|
{"year":"2017","title":"Deeper Attention to Abusive User Content Moderation","authors":["J Pavlopoulos, P Malakasiotis, I Androutsopoulos - 2017"],"snippet":"... (2017a). 14We implemented the methods of this sub-section using Keras (keras.io) and TensorFlow (tensorflow.org). 15See https://nlp.stanford.edu/projects/ glove/. We use 'Common Crawl' (840B tokens). Page 6. ta : accept threshold tr : reject threshold 0.0 1.0 reject gray accept ...","url":["http://nlp.cs.aueb.gr/pubs/emnlp2017.pdf"]} |
|
{"year":"2017","title":"DeepSpace: Mood-Based Image Texture Generation for Virtual Reality from Music","authors":["M Sra, P Vijayaraghavan, O Rudovic, P Maes, D Roy - Computer Vision and Pattern …, 2017"],"snippet":"... task. We use the GloVe model trained on a common crawl dataset7 for the representation for words in the descriptive labels and mood. ... This approach of tractably modeling a joint distribution of 7http://commoncrawl.org/the-data/ pixels ...","url":["http://ieeexplore.ieee.org/abstract/document/8015017/"]} |
|
{"year":"2017","title":"Denoising Clinical Notes for Medical Literature Retrieval with Convolutional Neural Model","authors":["L Soldaini, A Yates, N Goharian - 2017"],"snippet":"... Two source of evidence were used to obtain, for each term qi , its word embedding xi : GloVe vectors [10] pre-trained on the common crawl corpus4 and SkipGram vectors pre-trained on PubMed5. We found that concatenating domain-speci c with domain-agnostic embeddings ...","url":["http://ir.cs.georgetown.edu/downloads/cikm17-cds-notes.pdf"]} |
|
{"year":"2017","title":"Deriving Neural Architectures from Sequence and Graph Kernels","authors":["T Lei, W Jin, R Barzilay, T Jaakkola - arXiv preprint arXiv:1705.09037, 2017"],"snippet":"Page 1. Deriving Neural Architectures from Sequence and Graph Kernels Tao Lei* 1 Wengong Jin* 1 Regina Barzilay 1 Tommi Jaakkola 1 Abstract The design of neural architectures for structured objects is typically guided by experimental in- sights rather than a formal process. ...","url":["https://arxiv.org/pdf/1705.09037"]} |
|
{"year":"2017","title":"Determining Entailment of Questions in the Quora Dataset","authors":["A Tung, E Xu"],"snippet":"... We used the 840B common crawl GloVe pretrained embeddings https://nlp.stanford.edu/projects/ glove/, the starter code from CS224N http://web.stanford.edu/class/cs224n/, and tuned the hyper-parameters on these various models to achieve the optimal accuracy. ...","url":["https://web.stanford.edu/class/cs224n/reports/2748301.pdf"]} |
|
{"year":"2017","title":"Distance-Aware Selective Online Query Processing Over Large Distributed Graphs","authors":["X Zhang, L Chen - Data Science and Engineering"],"url":["http://link.springer.com/article/10.1007/s41019-016-0023-z"]} |
|
{"year":"2017","title":"Distinguishing “good” from “bad” Arguments in Online Debates & Feature Analysis using Feed-Forward Neural Networks","authors":["LA Chalaguine"],"snippet":"Page 1. Imperial College London Department of Computing Distinguishing “good” from “bad” Arguments in Online Debates & Feature Analysis using Feed-Forward Neural Networks Lisa Andreevna Chalaguine Supervisor: Claudia Schulz ...","url":["https://pdfs.semanticscholar.org/526f/468ebe630e10221dc77f21ce65aba72e0021.pdf"]} |
|
{"year":"2017","title":"Distributed Algorithms on Exact Personalized PageRank","authors":["T Guo, X Cao, G Cong, J Lu, X Lin"],"snippet":"Page 1. Distributed Algorithms on Exact Personalized PageRank Tao Guo1 Xin Cao2 Gao Cong1 Jiaheng Lu3 Xuemin Lin2 1 School of Computer Science and Engineering, Nanyang Technological University, Singapore 2 School ...","url":["https://www.cs.helsinki.fi/u/jilu/documents/SIGMOD2017.pdf"]} |
|
{"year":"2017","title":"Distributed Computing in Social Media Analytics","authors":["M Riemer - Distributed Computing in Big Data Analytics, 2017"],"snippet":"... For example, [16] the current state of the art Twitter sentiment analysis technique leverages knowledge from a Common Crawl of the internet, Movie Reviews, Emoticons, and a human defined rule logic model to drastically improve the performance of its recurrent neural network ...","url":["https://link.springer.com/chapter/10.1007/978-3-319-59834-5_8"]} |
|
{"year":"2017","title":"Document Context Neural Machine Translation with Memory Networks","authors":["S Maruf, G Haffari - arXiv preprint arXiv:1711.03688, 2017"],"snippet":"Page 1. Document Context Neural Machine Translation with Memory Networks Sameen Maruf and Gholamreza Haffari Faculty of Information Technology, Monash University, VIC, Australia {firstname.lastname}@monash.edu.au Abstract …","url":["https://arxiv.org/pdf/1711.03688"]} |
|
{"year":"2017","title":"Domain Adaptation for Multilingual Neural Machine Translation","authors":["AC Varga - 2017"],"snippet":"Page 1. Universität des Saarlandes Universidad del Pa´ıs Vasco/Euskal Herriko Unibertsitatea Domain Adaptation for Multilingual Neural Machine Translation Master's Thesis submitted in fulfillment of the degree requirements of the ...","url":["https://www.clubs-project.eu/assets/publications/other/MSc_thesis_AdamVarga.pdf"]} |

{"year":"2017","title":"Don't Let One Rotten Apple Spoil the Whole Barrel: Towards Automated Detection of Shadowed Domains","authors":["D Liu, Z Li, K Du, H Wang, B Liu, H Duan - 2017"],"snippet":"Page 1. Don't Let One Rotten Apple Spoil the Whole Barrel: Towards Automated Detection of Shadowed Domains Daiping Liu University of Delaware [email protected] Zhou Li ACM Member [email protected] Kun Du Tsinghua University [email protected] ...","url":["https://www.eecis.udel.edu/~dpliu/papers/ccs17.pdf"]} |
|
{"year":"2017","title":"Doubly-Attentive Decoder for Multi-modal Neural Machine Translation","authors":["I Calixto, Q Liu, N Campbell - arXiv preprint arXiv:1702.01287, 2017"],"snippet":"... M sentence pairs (Bojar et al., 2015). These include the Eu- roparl v7 (Koehn, 2005), News Commentary and Common Crawl corpora, which are concatenated and used for pre-training. We use the scripts in the Moses SMT ...","url":["https://arxiv.org/pdf/1702.01287"]} |

{"year":"2017","title":"Dynamic Coattention Networks for Reading Comprehension","authors":["H Tepanyan"],"snippet":"... using linear decoder. We use this final version with Common Crawl 840B glove vector embeddings from [2] to achieve the final scores of F1 = 58.2% and EM = 44.5% scores on the dev set. 1 Model 1. Simple Baseline Below we ...","url":["http://web.stanford.edu/class/cs224n/reports/2743745.pdf"]} |

{"year":"2017","title":"Dynamic Coattention with Sentence Information","authors":["A Ruch"],"snippet":"... choice for document length. Embeddings: We used the GloVe word embeddings for the Common Crawl 840B dataset and explored initializing the embeddings of unseen words to zero or to a random vector. Intuitively using a ...","url":["https://pdfs.semanticscholar.org/eef0/e42394c625772f5b220797661aba893012f4.pdf"]} |

{"year":"2017","title":"Dynamic Data Selection for Neural Machine Translation","authors":["M van der Wees, A Bisazza, C Monz - arXiv preprint arXiv:1708.00712, 2017"],"snippet":"... The WMT training corpus contains Commoncrawl, Europarl, and News Commentary but no in-domain news data. ... We train our systems on a mixture of domains, comprising Commoncrawl, Europarl, News Commentary, EMEA, Movies, and TED. ...","url":["https://arxiv.org/pdf/1708.00712"]} |

{"year":"2017","title":"Dynamic Space Efficient Hashing","authors":["T Maier, P Sanders - arXiv preprint arXiv:1705.00997, 2017"],"snippet":"Page 1. Dynamic Space Efficient Hashing Tobias Maier and Peter Sanders Karlsruhe Institute of Technology, Karlsruhe, Germany {t.maier,sanders}@kit.edu Abstract We consider space efficient hash tables that can grow and ...","url":["https://arxiv.org/pdf/1705.00997"]} |

{"year":"2017","title":"Effect of Data Imbalance on Unsupervised Domain Adaptation of Part-of-Speech Tagging and Pivot Selection Strategies","authors":["X Cui, F Coenen, D Bollegala - Journal of Machine Learning Research, 2017"],"snippet":"... (2016) to train the final adaptive classifier f only by projected features to reduce the dimensionality, where θx ∈ Rh. We use d = 300 dimensional GloVe (Pennington et al., 2014) embeddings (trained using 42B tokens from the Common Crawl) as word representations. ...","url":["https://cgi.csc.liv.ac.uk/~frans/PostScriptFiles/lidta2017.pdf"]} |

{"year":"2017","title":"Energy-Efficient Data Transfer Algorithms for HTTP-Based Services","authors":["T Kosar, I Alan - arXiv preprint arXiv:1707.05730, 2017"],"snippet":"... Three different representative datasets were used during experiments in order to capture the throughput and power consumption differences based on the dataset type: (i) the HTML dataset is a set of raw HTML files from the Common Crawl project [3]; (ii) the image dataset is a ...","url":["https://arxiv.org/pdf/1707.05730"]} |

{"year":"2017","title":"Entity linking across vision and language","authors":["AN Venkitasubramanian, T Tuytelaars, MF Moens - Multimedia Tools and …, 2017"],"snippet":"... 5.2 Using a hypernym database The second approach for detecting relevant mentions uses the WebIsADb database [40] containing more than 400 million hypernymy relations extracted from the CommonCrawl web corpus. ...","url":["http://link.springer.com/article/10.1007/s11042-017-4732-8"]} |

{"year":"2017","title":"Estimating Missing Temporal Meta-Information using Knowledge-Based-Trust","authors":["Y Oulabi, C Bizer"],"snippet":"... We acquired data from the sources either by manually written crawlers and extractors, or through data dumps. 5.2 Web Table Corpus For our experiments we use the Web Data Commons Web Table Corpus from 20153, which was extracted from the July 2015 Common Crawl ...","url":["https://pdfs.semanticscholar.org/9016/87f0f79efd175d6c3b9efba4e254e4bc410a.pdf"]} |

{"year":"2017","title":"Evaluating Story Generation Systems Using Automated Linguistic Analyses","authors":["M Roemmele, AS Gordon, R Swanson"],"snippet":"... We specifically used the GloVe embedding vectors [45] trained on the Common Crawl corpus8. We computed the mean cosine similarity of the vectors for all pairs of content words between a generated sentence and its context (Metric 11). ...","url":["http://people.ict.usc.edu/~roemmele/publications/fiction_generation.pdf"]} |

{"year":"2017","title":"Evaluating vector-space models of analogy","authors":["D Chen, JC Peterson, TL Griffiths - arXiv preprint arXiv:1705.04416, 2017"],"snippet":"... We used the 300-dimensional word2vec vectors trained on the Google News corpus that were provided by Google (Mikolov et al., 2013), and the 300-dimensional GloVe vectors trained on a Common Crawl web crawl corpus that were provided by Pennington et al. (2014). ...","url":["https://arxiv.org/pdf/1705.04416"]} |

{"year":"2017","title":"Evaluation of a Feedback Algorithm inspired by Quantum Detection for Dynamic Search Tasks","authors":["E Di Buccio, M Melucci"],"snippet":"... Polar Domain and the Ebola Do- main. Each dataset is formatted using the Common Crawl Architecture schema from the DARPA MEMEX project, and stored as sequences of CBOR objects. The Ebola dataset refers to the outbreak ...","url":["http://trec.nist.gov/pubs/trec25/papers/UPD_IA-DD.pdf"]} |

{"year":"2017","title":"Event Coreference Resolution by Iteratively Unfolding Inter-dependencies among Events","authors":["PK Choubey, R Huang - arXiv preprint arXiv:1707.07344, 2017"],"snippet":"Page 1. Event Coreference Resolution by Iteratively Unfolding Inter-dependencies among Events Prafulla Kumar Choubey and Ruihong Huang Department of Computer Science and Engineering Texas A&M University (prafulla.choubey, huangrh)@tamu.edu Abstract ...","url":["https://arxiv.org/pdf/1707.07344"]} |

{"year":"2017","title":"Explaining and Generalizing Skip-Gram through Exponential Family Principal Component Analysis","authors":["R Cotterell, A Poliak, B Van Durme, J Eisner"],"snippet":"... distributional information. The embeddings, trained on extremely large text corpora, eg, Wikipedia and the Common Crawl, are claimed to encode semantic knowledge extracted from large text corpora. While numerous models ...","url":["https://ryancotterell.github.io/papers/cotterell+alb.eacl17.pdf"]} |

{"year":"2017","title":"Exploiting Embedding in Content-Based Recommender systems","authors":["Y Huang - 2016"],"snippet":"Page 1. Multimedia Computing Group Exploiting Embedding in Content-Based Recommender Systems Yanbo Huang Master of Science Thesis Page 2. Page 3. Exploiting Embedding in Content-Based Recommender Systems Master of Science Thesis ...","url":["http://repository.tudelft.nl/islandora/object/uuid:cbec7bdd-4bab-4132-93cd-359587b9bf46/datastream/OBJ/view"]} |

{"year":"2017","title":"Exploring Neural Transducers for End-to-End Speech Recognition","authors":["E Battenberg, J Chen, R Child, A Coates, Y Gaur, Y Li… - arXiv preprint arXiv: …, 2017"],"snippet":"... available for this benchmark from the Kaldi receipe [20]. The language model used by all models in Table 3 is built from a sample of the common crawl dataset [26]. Model specification. All models in Tables 1 and 3 are tuned ...","url":["https://arxiv.org/pdf/1707.07413"]} |

{"year":"2017","title":"Extending the Scope of Co-occurrence Embedding","authors":["J Mi, Y Wang, J Zhu"],"snippet":"... Therefore, it is highly likely that our model ignores the n-grams with strong emotion, simply because they rarely occur in the training data. We expect a boost in the classification accuracy if we could train our model on a more comprehensive dataset, say, common crawl ...","url":["https://web.stanford.edu/class/cs224n/reports/2758144.pdf"]} |

{"year":"2017","title":"Extracting Conceptual Relationships and Inducing Concept Lattices from Unstructured Text","authors":["VS Anoop, S Asharaf - Journal of Intelligent Systems"],"snippet":"Abstract Concept and relationship extraction from unstructured text data plays a key role in meaning aware computing paradigms, which make computers intelligent by helping them learn, interpret, and synthesis information. These concepts and relationships leverage knowledge ...","url":["https://www.degruyter.com/view/j/jisys.ahead-of-print/jisys-2017-0225/jisys-2017-0225.xml"]} |

{"year":"2017","title":"Extracting Parallel Paragraphs from Common Crawl","authors":["J Kúdela, I Holubová, O Bojar - The Prague Bulletin of Mathematical Linguistics, 2017"],"snippet":"Abstract Most of the current methods for mining parallel texts from the web assume that web pages of web sites share same structure across languages. We believe that there still exists a non-negligible amount of parallel data spread across sources not satisfying this","url":["https://www.degruyter.com/downloadpdf/j/pralin.2017.107.issue-1/pralin-2017-0003/pralin-2017-0003.xml"]} |

{"year":"2017","title":"Extracting Visual Knowledge from the Web with Multimodal Learning","authors":["D Gong, DZ Wang"],"snippet":"... 5.1 Dataset We evaluate our approach based on a collection of web pages and images derived from the Common Crawl dataset [Smith et al., 2013] that is publicly available on Amazon S3. The entire Common Crawl dataset ...","url":["http://static.ijcai.org/proceedings-2017/0238.pdf"]} |

{"year":"2017","title":"Fast Construction of Compressed Web Graphs","authors":["J Broß, S Gog, M Hauck, M Paradies - … on String Processing and Information Retrieval, 2017"],"snippet":"... Table 1). For experiments on a very large graph, we added a web graph originating from the CommonCrawl project. Table 1. ... Graphs are stored as set of adjacency lists. Each list entry occupies 4 bytes (8 bytes in case of CommonCrawl). ...","url":["https://link.springer.com/chapter/10.1007/978-3-319-67428-5_11"]} |
|
{"year":"2017","title":"FBK's Participation to the English-to-German News Translation Task of WMT 2017","authors":["MA Di Gangi, N Bertoldi, M Federico - WMT 2017, 2017"],"snippet":"... Number of training sentences. original cleaned commoncrawl 2399123 2228833 europarl-v7 1920209 1719859 news-comm-v12 270769 255944 rapid2016 1329041 1277997 both English and German. We also filtered out ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=295"]} |
|
{"year":"2017","title":"FilteredWeb: A Framework for the Automated Search-Based Discovery of Blocked URLs","authors":["A Darer, O Farnan, J Wright - arXiv preprint arXiv:1704.07185, 2017"],"snippet":"... Web search is a large and complicated business; most engines do not simply rank pages based on hyperlinks, but rather current trends and activity. One alternative to Bing is Common Crawl – an open data project that scrapes the web for pages. ...","url":["https://arxiv.org/pdf/1704.07185"]} |
|
{"year":"2017","title":"Findings of the 2017 conference on machine translation (wmt17)","authors":["O Bojar, R Chatterjee, C Federmann, Y Graham… - WMT 2017, 2017"],"snippet":"... Some training corpora were identical from last year (Europarl4, Common Crawl, SETIMES2, Russian-English parallel data provided by Yandex, Wikipedia Headlines provided by CMU) and some were updated (United Nations, CzEng v1. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=193"]} |
|
{"year":"2017","title":"Findings of the WMT 2017 Biomedical Translation Shared Task","authors":["A Jimeno Yepes, A Neveol, M Neves, K Verspoor…","AJ Yepes, A Névéol, M Neves, HPIU Potsdam… - WMT 2017, 2017"],"snippet":"... used. Tuning of the SMT systems was performed with MERT. Commoncrawl and Wikipedia were used as general domain data for all language pairs ex- cept for EN/PT, where no Commoncrawl data was provided by WMT. As ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=258","https://www.research.ed.ac.uk/portal/files/40797681/123_1.pdf"]} |
|
{"year":"2017","title":"From Segmentation to Analyses: A Probabilistic Model for Unsupervised Morphology Induction","authors":["T Bergmanis, S Goldwater"],"snippet":"... by our system and the MorphoChains baseline, we used word2vec (Mikolov et al., 2013) to train a Continuous Bag of Words model on a sub-sample of the Common Crawl (CC) corpus6 for ... 6Common Crawl http://commoncrawl.org 7Morpho Challenge 2010: http://research.ics. ...","url":["http://homepages.inf.ed.ac.uk/sgwater/papers/eacl17-morphAnalyses.pdf"]} |
|
{"year":"2017","title":"Game, Set, Match-LSTM: Question Answering on SQUaD","authors":["I Torres, E Ehizokhale"],"snippet":"... The official test dataset is not publically available. It's kept by the authors of SQuAD to make model evaluation fair. We use GloVE word vectors trained on the 840B Common Crawl corpus. We limit the max context paragraph length to 300 and the max question length to 30. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761956.pdf"]} |

{"year":"2017","title":"Geographical Evaluation of Word Embeddings","authors":["M Konkol, T Brychcín, M Nykl, T Hercig - Proceedings of the Eighth International Joint …, 2017"],"snippet":"… We use two models provided by the authors of the model trained on Wikipedia and News Crawl (LexVec - w + nc), and Common Crawl (LexVec - cc). MetaEmbeddings is an ensemble method that combines several embeddings (Yin and Schütze, 2016) …","url":["http://www.aclweb.org/anthology/I17-1023"]} |

{"year":"2017","title":"Global-Context Neural Machine Translation through Target-Side Attentive Residual Connections","authors":["L Miculicich, N Pappas, D Ram, A Popescu-Belis - arXiv preprint arXiv:1709.04849, 2017"],"snippet":"... Finally, we use the complete English- to-German set from WMT 2016 (Bojar and others 2016)3 which includes Europarl v7, Common Crawl, and News Commentary v11 with a total of ca. 4.5 million sentence pairs. ... N-gram counts and language models from the common crawl ...","url":["https://arxiv.org/pdf/1709.04849"]} |

{"year":"2017","title":"Globally Normalized Reader","authors":["J Raiman, J Miller - Proceedings of the 2017 Conference on Empirical …, 2017"],"snippet":"... tion. The hidden dimension of all recurrent layers is 200. We use the 300 dimensional 8.4B token Common Crawl GloVe vectors (Pennington et al., 2014). Words missing from the Common Crawl vocabulary are set to zero. In ...","url":["http://www.aclweb.org/anthology/D17-1112"]} |

{"year":"2017","title":"Googleology as smart lexicography: Big messy data for better regional labels","authors":["S Dollinger - Dictionaries: Journal of the Dictionary Society of North …, 2016"],"snippet":"... help. Other services have other problems. Commoncrawl.org is one of the longest-running such projects and [End Page 72] offers big data for free. Accessing its data, however, requires serious programming expertise. Other ...","url":["https://muse.jhu.edu/article/645766/summary"]} |

{"year":"2017","title":"Grammatical error correction in non-native English","authors":["Z Yuan - 2017"],"snippet":"Page 1. Technical Report Number 904 Computer Laboratory UCAM-CL-TR-904 ISSN 1476-2986 Grammatical error correction in non-native English Zheng Yuan March 2017 15 JJ Thomson Avenue Cambridge CB3 0FD United Kingdom phone +44 1223 763500 ...","url":["http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-904.pdf"]} |

{"year":"2017","title":"Handling Homographs in Neural Machine Translation","authors":["F Liu, H Lu, G Neubig - arXiv preprint arXiv:1708.06510, 2017"],"snippet":"... side. For German and French, we use a combination of Europarl v7, Common Crawl, and News Commentary as training set. For development set, newstest2013 is used for German and newstest2012 is used for French. For ...","url":["https://arxiv.org/pdf/1708.06510"]} |

{"year":"2017","title":"HCTI at SemEval-2017 Task 1: Use convolutional neural network to evaluate Semantic Textual Similarity","authors":["S Yang"],"snippet":"... 1) All punctuations are removed. 2) All words are lower-cased. 3) All sentences are tokenized by Natural Language Toolkit (NLTK) (Bird et al., 2009). 4) All words are replaced by pre-trained GloVe word vectors (Common Crawl, 840B tokens) (Pennington et al., 2014). ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval016.pdf"]} |

{"year":"2017","title":"Hungarian Layer: Logics Empowered Neural Architecture","authors":["H Xiao, L Meng - arXiv preprint arXiv:1712.02555, 2017"],"snippet":"… for illustration. 4.1. Experimental Setting We initialize the word embedding with 300-dimensional GloVe (Pennington et al., 2014) word vectors pre-trained in the 840B Common Crawl corpus (Pennington et al., 2014). For the …","url":["https://arxiv.org/pdf/1712.02555"]} |

{"year":"2017","title":"Hunter MT: A Course for Young Researchers in WMT17","authors":["J Xu, YZ Kuang, S Baijoo, H Lee, U Shahzad, M Ahmed… - WMT 2017, 2017"],"snippet":"... ing and tuning, including Europarl v7, News Commentary v12, Rapid Corpus of EU press releases, and parts of the Common Crawl corpus ... Rapid News Test 2016 2 German-English News 33.61 News Test 2016 3 English-Czech News 13.59 Europarl, CommonCrawl, News' 12 ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=446"]} |
|
{"year":"2017","title":"IIIT-H at IJCNLP-2017 Task 4: Customer Feedback Analysis using Machine Learning and Neural Network Approaches","authors":["P Danda, P Mishra, S Kanneganti, S Lanka - … of the IJCNLP 2017, Shared Tasks, 2017"],"snippet":"… We used glove pre-trained embeddings1 (Pennington et al., 2014) for English while for the rest 1we used Common Crawl corpus with 840B tokens, 2.2M vocab, case-sensitive, 300-dimensional vectors available on https://nlp.stanford.edu/projects/glove/ 155 Page 2 …","url":["http://www.aclweb.org/anthology/I17-4026"]} |
|
{"year":"2017","title":"IITP at EmoInt-2017: Measuring Intensity of Emotions using Sentence Embeddings and Optimized Features","authors":["MS Akhtar, P Sawant, A Ekbal, J Pawar… - EMNLP 2017, 2017"],"snippet":"... For this task, we use GloVe (Pennington et al., 2014) pre-trained word embedding trained on common crawl corpus. ... The choice of common crawl word embeddings for Twitter datasets is because of the normalization steps (Section 2.1). ...","url":["http://www.aclweb.org/anthology/W/W17/W17-52.pdf#page=228"]} |
|
{"year":"2017","title":"IITP at SemEval-2017 Task 5: An Ensemble of Deep Learning and Feature Based Models for Financial Sentiment Analysis","authors":["D Ghosal, S Bhatnagar, MS Akhtar, A Ekbal…"],"snippet":"... billion and 400 million tweets respectively. For news headline we used GloVe common crawl model trained on 802 billion words and Word2Vec Google News model (Mikolov et al., 2013). We experimented with 200, 300 and ...","url":["https://www.aclweb.org/anthology/S/S17/S17-2154.pdf"]} |
|
{"year":"2017","title":"Implementation and Analysis of Match-LSTM for SQuAD","authors":["M Graczyk"],"snippet":"... We used the Common Crawl 840B Glove vectors with 300 dimensions as our input embedding, and like the paper we did not train these vectors. Finally, we used softsign instead of tanh for a substantial improvement in training performance with no obvious change in metrics. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761882.pdf"]} |
|
{"year":"2017","title":"Implementation and Improvement of Match-LSTM in Question-Answering System","authors":["Y Zhang, H Peng"],"snippet":"... To initialize the word vector embeddings, we used the GloVe word embeddings of dimensionality o = 300 and vocabulary size of 2.2M that have been pre-trained on Common Crawl. We did not train the word embeddings, since our dataset is not very large. ...","url":["https://web.stanford.edu/class/cs224n/reports/2748656.pdf"]} |
|
{"year":"2017","title":"Improving Machine Translation Quality Estimation with Neural Network Features","authors":["Z Chen, Y Tan, C Zhang, Q Xiang, L Zhang, M Li… - WMT 2017, 2017"],"snippet":"... To train the word embedding and the RNNLM, the source side and the target side of the bilingual parallel corpus for the translation task, publicly re- leased by the WMT evaluation campaign, are used; they include Europarl v7, Common Crawl corpus, News Commentary v8 and ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=575"]} |
|
{"year":"2017","title":"Improving the Compositionality of Word Embeddings","authors":["MJ Scheepers - 2017"],"snippet":"… 2Real, natural and imaginary numbers are represented in computers as floating point numbers [49], which are not always exact but more often really close estimations of these numbers. 3The Common Crawl dataset can be found at: http://commoncrawl.org. Page 10. 4 …","url":["https://thijs.ai/papers/scheepers-msc-thesis-2017-improving-compositionality-word-embeddings.pdf"]} |
|
{"year":"2017","title":"Induction of Latent Domains in Heterogeneous Corpora: A Case Study of Word Alignment","authors":["H Cuong, K Sima'an"],"snippet":"... resulting SMT systems. Going beyond the findings, we surmise that virtually any large corpus (eg Europarl, Hansards, Common Crawl) harbors an arbitrary diversity of hidden domains, unknown in advance. We address the ...","url":["https://staff.fnwi.uva.nl/c.hoang/mt20171.pdf"]} |
|
{"year":"2017","title":"Inductive Representation Learning on Large Graphs","authors":["WL Hamilton, R Ying, J Leskovec - arXiv preprint arXiv:1706.02216, 2017"],"snippet":"... For features, we use off-the-shelf 300-dimensional GloVe CommonCrawl word vectors [25]; for each post, we concatenated (i) the average embedding of the post title, (ii) the average embedding of all the post's comments (iii) the post's score, and (iv) the number of comments ...","url":["https://arxiv.org/pdf/1706.02216"]} |
|
{"year":"2017","title":"Information Extraction meets the Semantic Web: A Survey","authors":["JL Martinez-Rodriguez, A Hogan, I Lopez-Arevalo"],"snippet":"… Web [234]. Mika referred to this as the semantic gap [193], whereby the demand for structured data on the Web outstrips its supply. For example, in analysis of the 2013 Common Crawl dataset, Meusel et al. [189] found that …","url":["http://www.semantic-web-journal.net/system/files/swj1744.pdf"]} |
|
{"year":"2017","title":"Integrating Knowledge from Latent and Explicit Features for Triple Scoring","authors":["LW Chen, B Mangipudi, J Bandlamudi, R Sehgal…"],"snippet":"... ford NLP Group. We use word embeddings of size 300 dimensions, which were pre-trained on the Common Crawl corpus 2. We integrate the learned vector representations of GloVe for nationality and profession. In a nutshell ...","url":["http://www.uni-weimar.de/medien/webis/events/wsdm-cup-17/wsdmcup17-papers-final/wsdmcup17-triple-scoring/chen17-notebook.pdf"]} |
|
{"year":"2017","title":"Interstitial Content Detection","authors":["E Lucas - arXiv preprint arXiv:1708.04879, 2017"],"snippet":"... 'http://servo.org'. [7] I. Kreymer. Announcing the common crawl index, April 2015. 'http://commoncrawl.org/2015/04/announcing-the-common-crawl-index/'. [8] S. Rasheed, A. Naeem, and O. Ishaq. Automated number plate recognition using hough lines and template ...","url":["https://arxiv.org/pdf/1708.04879"]} |
|
{"year":"2017","title":"Issues in Human and Automatic Translation Quality Assessment","authors":["S Doherty"],"snippet":"... 2013; Bojar et al. 2014). These campaigns typically involve the provision of existing corpora (eg Europarl, News Commentary, Common Crawl, Gigaword, Wiki Headlines, and the UN Corpus) as well as WMT-commissioned translations with Page 6. 6 ...","url":["https://www.researchgate.net/profile/Stephen_Doherty3/publication/314261771_Issues_in_human_and_automatic_translation_quality_assessment/links/58bea025458515dcd28defdd/Issues-in-human-and-automatic-translation-quality-assessment.pdf"]} |
|
{"year":"2017","title":"Iterative Attention Network for Question Answering","authors":["T Henighan"],"snippet":"... 3.6 future studies In this work only the 100-dimensional GloVe vectors trained on 6 billion tokens was used. It may be useful to try the 300-dimensional vectors trained over a 840 billion common-crawl corpus, which 6 Page 7. ...","url":["http://www.tomhenighan.com/pdfs/iterative-attention-network.pdf"]} |
|
{"year":"2017","title":"Joint Learning of Structural and Textual Features for Web Scale Event Extraction","authors":["J Wiedmann - 2017"],"snippet":"... In a second expansion step, this seed data set is further extended automatically by identifying single event pages in the Common Crawl, a repository of crawled web data, based on Microdata annotations and the annotations derived from the seed data. ...","url":["http://www.cs.ox.ac.uk/files/8846/aaai17-wiedmann-eventextraction.pdf"]} |
|
{"year":"2017","title":"Joint Training for Pivot-based Neural Machine Translation","authors":["Y Cheng, Q Yang, Y Liu, M Sun, W Xu"],"snippet":"... sets. The evaluation metric is case-insensitive BLEU [Papineni et al., 2002] as calculated by the multi-bleu.perl script. The WMT corpus is composed of the Common Crawl, News Commentary, Europarl v7 and UN corpora. The ...","url":["http://nlp.csai.tsinghua.edu.cn/~ly/papers/ijcai2017_cy.pdf"]} |
|
{"year":"2017","title":"Killing Two Birds with One Stone: Malicious Domain Detection with High Accuracy and Coverage","authors":["I Khalil, B Guan, M Nabeel, T Yu - arXiv preprint arXiv:1711.00300, 2017"],"snippet":"Page 1. Killing Two Birds with One Stone: Malicious Domain Detection with High Accuracy and Coverage Issa Khalil, Bei Guan, Mohamed Nabeel, Ting Yu Qatar Computing Research Institute {ikhalil,bguan,mnabeel,tyu}@hbku.edu.qa ...","url":["https://arxiv.org/pdf/1711.00300"]} |
|
{"year":"2017","title":"LDOW2017: 10th Workshop on Linked Data on the Web","authors":["J Lehmann, S Auer, S Capadisli, K Janowicz, C Bizer… - Proceedings of the 26th …, 2017"],"snippet":"... Wikidata, a collaborative knowledge-base designed to complement Wikipedia; LinkedGeoData, providing a structured-data export from OpenStreetMap; and Web Data Commons, collecting embedded meta-data extracted from billions of webpages found in the Common Crawl ...","url":["http://aidanhogan.com/docs/ldow2017.pdf"]} |
|
{"year":"2017","title":"Learned in Translation: Contextualized Word Vectors","authors":["B McCann, J Bradbury, C Xiong, R Socher - arXiv preprint arXiv:1708.00107, 2017"],"snippet":"... When training an MT-LSTM, we used fixed 300-dimensional word vectors. We used the CommonCrawl-840B GloVe model for English word vectors, which were completely fixed during training, so that the MT-LSTM had to learn how to use the pretrained vectors for translation. ...","url":["https://arxiv.org/pdf/1708.00107"]} |
|
{"year":"2017","title":"Learning bilingual word embeddings with (almost) no bilingual data","authors":["MAGLE Agirre"],"snippet":"... and Italian. Given that Finnish is not included in this collection, we used the 2.8 billion word Common Crawl corpus provided at WMT 2016 instead, which we tokenized using the Stanford Tokenizer (Manning et al., 2014). In ...","url":["http://www.aclweb.org/anthology/P/P17/P17-1042.pdf"]} |
|
{"year":"2017","title":"Learning Paraphrastic Sentence Embeddings from Back-Translated Bitext","authors":["J Wieting, J Mallinson, K Gimpel - arXiv preprint arXiv:1706.01847, 2017"],"snippet":"... EN, FR→EN, and DE→EN, respectively). The training data included: Europarl v7 (Koehn, 2005), the Common Crawl corpus, the UN corpus (Eisele and Chen, 2010), News Commentary v10, the 109 French-English corpus, ...","url":["https://arxiv.org/pdf/1706.01847"]} |
|
{"year":"2017","title":"Learning to Predict: A Fast Re-constructive Method to Generate Multimodal Embeddings","authors":["G Collell, T Zhang, MF Moens - arXiv preprint arXiv:1703.08737, 2017"],"snippet":"... 4 Experimental setup 4.1 Word embeddings We use 300-dimensional GloVe vectors [19] pre-trained on the Common Crawl corpus consisting of 840B tokens and a 2.2M words vocabulary. 4.2 Visual data and features We use ImageNet [17] as our source of labeled images. ...","url":["https://arxiv.org/pdf/1703.08737"]} |
|
{"year":"2017","title":"Learning to select data for transfer learning with Bayesian Optimization","authors":["S Ruder, B Plank - arXiv preprint arXiv:1707.05246, 2017"],"snippet":"... We train an LDA model (Blei et al., 2003) with 50 topics and 10 iterations for topic distribution-based representations and use GloVe embeddings (Pennington et al., 2014) trained on 42B tokens of Common Crawl data for word embedding-based representations. ...","url":["https://arxiv.org/pdf/1707.05246"]} |
|
{"year":"2017","title":"Length, Interchangeability, and External Knowledge: Observations from Predicting Argument Convincingness","authors":["P Potash, R Bhattacharya, A Rumshisky"],"snippet":"... and create the appropriate representation. For the embedding representation, we use GloVe (Pennington et al., 2014) 300 dimensions learned from the Common Crawl corpus with 840 billion tokens. Our Wikipedia data is from ...","url":["https://pdfs.semanticscholar.org/9785/f21ac0b33689dc3ae711a94383eda01785e9.pdf"]} |
|
{"year":"2017","title":"Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search","authors":["C Hokamp, Q Liu - arXiv preprint arXiv:1704.07138, 2017"],"snippet":"Page 1. Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search Chris Hokamp ADAPT Centre Dublin City University [email protected] Qun Liu ADAPT Centre Dublin City University [email protected] Abstract ...","url":["https://arxiv.org/pdf/1704.07138"]} |
|
{"year":"2017","title":"LIG-CRIStAL Submission for the WMT 2017 Automatic Post-Editing Task","authors":["A Berard, L Besacier, O Pietquin - Proceedings of the Second Conference on …, 2017"],"snippet":"... To mitigate this, we decided to limit our use of external data to monolingual English (commoncrawl). ... PE side Similarly to Junczys-Dowmunt and Grundkiewicz (2016) we first performed a coarse filtering of well-formed sentences of commoncrawl ...","url":["http://www.aclweb.org/anthology/W17-4772"]} |
|
{"year":"2017","title":"LIG-CRIStAL System for the WMT17 Automatic Post-Editing Task","authors":["A Berard, O Pietquin, L Besacier - arXiv preprint arXiv:1707.05118, 2017"],"snippet":"... To mitigate this, we decided to limit our use of external data to monolingual English (commoncrawl). ... PE side Similarly to Junczys-Dowmunt and Grundkiewicz (2016) we first performed a coarse filtering of well-formed sentences of commoncrawl ...","url":["https://arxiv.org/pdf/1707.05118"]} |
|
{"year":"2017","title":"LIMSI submission for WMT'17 shared task on bandit learning","authors":["G Wisniewski - WMT 2017, 2017"],"snippet":"... At the end, our monolingual corpus contain 193292548 sentences. The translation model is estimated from the CommonCrawl, NewsCo, Europarl and Rapid corpora, resulting in a parallel corpus made of 5919142 sentences. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=698"]} |
|
{"year":"2017","title":"Linked Data is People: Building a Knowledge Graph to Reshape the Library Staff Directory","authors":["JA Clark, SWH Young"],"snippet":"... RDFa and JSON-LD are two other syntax encodings that enable structured data in HTML pages. In their analysis of structured data in the Common Crawl dataset, Bizer et al. (2013) note that the growth of machine-readable descriptions of web content continues to grow. ...","url":["http://journal.code4lib.org/articles/12320"]} |
|
{"year":"2017","title":"LMU Munich's Neural Machine Translation Systems for News Articles and Health Information Texts","authors":["M Huck, F Braune, A Fraser"],"snippet":"... 1. Adding the News Commentary (NC) and Common Crawl (CC) parallel training data as provided for WMT17 by the organizers of the news translation shared task. We initialize the optimization on the larger corpus with the Europarl-trained baseline model. ...","url":["http://www.cis.uni-muenchen.de/~fraser/pubs/huck_wmt2017_system.pdf"]} |
|
{"year":"2017","title":"Lump at SemEval-2017 Task 1: Towards an Interlingua Semantic Similarity","authors":["C España-Bonet, A Barrón-Cedeño - Proceedings of the 11th International Workshop …, 2017"],"snippet":"... 2http://commoncrawl.org/ 3http://www.casmacat.eu/corpus/news-commentary.html 4https://sites.google.com/site/iwsltevaluation2016/mt-track/ 5We built a version of the lemma translator with an extra language: Babel synsets (cf. ...","url":["http://www.aclweb.org/anthology/S17-2019"]} |
|
{"year":"2017","title":"Machine Comprehension Using Multi-Perspective Context Matching and Co-Attention","authors":["A Bajenov, T Gupta"],"snippet":"... Embedding Layer Trainable or fixed embeddings Dropout post-embedding layer Gigaword (6B) or Common Crawl (840B) corpus Architecture Choices Number of layers Type of Layers Representation Sizes embedding size (100-300) lstm units (100-200) perspective units ...","url":["http://web.stanford.edu/class/cs224n/reports/2758309.pdf"]} |
|
{"year":"2017","title":"Machine Comprehension with MMLSTM and Clustering","authors":["T Romero, Z Barnes, F Cipollone"],"snippet":"... 1.2 Data Our model is trained on the SQuAD dataset [1]. We split the SQuAD dataset up into 82k training questions, 5k validation questions, and 10k dev questions. The set of test questions is withheld. We use 300d Common Crawl GloVe [2] vectors for our word embeddings. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761209.pdf"]} |
|
{"year":"2017","title":"Machine Question and Answering","authors":["J Chang, M Jiang, D Le"],"snippet":"... 5 Page 6. were also initialized with 300-dimensional GloVe word vectors from the 840B Common Crawl corpus (Pennington et al., 2014). The above plots illustrate the cross entropy loss (left) and F1 score (right) vs epoch and clearly show overfitting. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761996.pdf"]} |
|
{"year":"2017","title":"Machine Translation Evaluation with Neural Networks","authors":["F Guzmán, S Joty, L Màrquez, P Nakov - Computer Speech & Language, 2016"],"snippet":"We present a framework for machine translation evaluation using neural networks in a pairwise setting, where the goal is to select the better translation from a.","url":["http://www.sciencedirect.com/science/article/pii/S0885230816301693"]} |
|
{"year":"2017","title":"Machine Translation: Phrase-Based, Rule-Based and Neural Approaches with Linguistic Evaluation","authors":["V Macketanz, E Avramidis, A Burchardt, J Helcl… - Cybernetics and Information …, 2017"],"snippet":"... difference). The generic parallel training data (Europarl [18], News Commentary, MultiUN [19], Commoncrawl [20]) are augmented with domain-specific data from the IT domain (Libreoffice, Ubuntu, Chromium Browser [21]). ...","url":["https://www.degruyter.com/downloadpdf/j/cait.2017.17.issue-2/cait-2017-0014/cait-2017-0014.xml"]} |
|
{"year":"2017","title":"Massive Exploration of Neural Machine Translation Architectures","authors":["D Britz, A Goldie, T Luong, Q Le - arXiv preprint arXiv:1703.03906, 2017"],"snippet":"... 3 Experimental Setup 3.1 Datasets and Preprocessing We run all experiments on the WMT'15 English→German task consisting of 4.5M sentence pairs, obtained by combining the Europarl v7, News Commentary v10, and Common Crawl corpora. ...","url":["https://arxiv.org/pdf/1703.03906"]} |
|
{"year":"2017","title":"Matching Web Tables To DBpedia-A Feature Utility Study","authors":["D Ritze, C Bizer - context, 2017"],"snippet":"... 215 Page 7. has been extracted from the CommonCrawl web corpus3. ... 3http://commoncrawl org/ Table 3 shows the results of the correlation analysis for the property and instance similarity matrices regarding precision, eg PPstdev, and recall, eg RPstdev. ...","url":["https://openproceedings.org/2017/conf/edbt/paper-148.pdf"]} |
|
{"year":"2017","title":"Methods of sentence extraction, abstraction and ordering for automatic text summarization","authors":["MT Nayeem - 2017"],"snippet":"Page 1. METHODS OF SENTENCE EXTRACTION, ABSTRACTION AND ORDERING FOR AUTOMATIC TEXT SUMMARIZATION MIR TAFSEER NAYEEM Bachelor of Science, Islamic University of Technology, 2011 A Thesis …","url":["https://www.uleth.ca/dspace/bitstream/handle/10133/4993/NAYEEM_MIR_TAFSEER_MSC_2017.pdf?sequence=1"]} |
|
{"year":"2017","title":"Modeling Target-Side Inflection in Neural Machine Translation","authors":["A Tamchyna, MWD Marco, A Fraser - arXiv preprint arXiv:1707.06012, 2017"],"snippet":"Page 1. Modeling Target-Side Inflection in Neural Machine Translation Aleš Tamchyna1,2 and Marion Weller-Di Marco1,3 and Alexander Fraser1 1LMU Munich, 2Memsource, 3University of Stuttgart ales.tamchyna@memsource ...","url":["https://arxiv.org/pdf/1707.06012"]} |
|
{"year":"2017","title":"Modeling the Dynamic Framing of Controversial Topics in Online Communities","authors":["J Mendelsohn"],"snippet":"... After preprocessing, each post is a bag-of-words of variable length: p (n) ij = [w1,w2, ..., wl]. Each word in a post is represented by its 300-dimensional GloVe vector, trained on Common Crawl data (CITE GLOVE). Posts are then represented as the average of each word's vector. ...","url":["http://web.stanford.edu/class/cs224n/reports/2761128.pdf"]} |
|
{"year":"2017","title":"Monotasks: Architecting for Performance Clarity in Data Analytics Frameworks","authors":["K Ousterhout, C Canel, S Ratnasamy, S Shenker - 2017"],"snippet":"Page 1. Monotasks: Architecting for Performance Clarity in Data Analytics Frameworks Kay Ousterhout UC Berkeley Christopher Canel Carnegie Mellon University Sylvia Ratnasamy UC Berkeley Scott Shenker UC Berkeley, ICSI ...","url":["http://kayousterhout.org/publications/sosp17-final183.pdf"]} |
|
{"year":"2017","title":"Multi-channel Encoder for Neural Machine Translation","authors":["H Xiong, Z He, X Hu, H Wu - arXiv preprint arXiv:1712.02109, 2017"],"snippet":"… WMT'14 English-French. We use the full WMT'14 parallel corpus as our training data. The detailed data sets are Europarl v7, Common Crawl, UN, News Commentary, Gigaword. In total, it includes 36 million sentence pairs …","url":["https://arxiv.org/pdf/1712.02109"]} |
|
{"year":"2017","title":"Multi-Domain Neural Machine Translation through Unsupervised Adaptation","authors":["MA Farajian, M Turchi, M Negri, M Federico"],"snippet":"... PHP, Ubuntu, and translated UN documents (UN-TM). Since the size of these corpora is relatively small for training robust MT systems, in particular NMT solutions, we added the News Commentary data from WMT'13 (WMT nc), as well as the CommonCrawl (CommonC.) and ...","url":["https://hermessvn.fbk.eu/svn/hermes/open/federico/papers/Amin_et.al-wmt2017.pdf"]} |
|
{"year":"2017","title":"Multimodal Learning for Web Information Extraction","authors":["D Gong, DZ Wang, Y Peng - ACM International Conference on Multimedia, 2017"],"snippet":"… Collecting image corpus. The image corpus is not included in the Common Crawl data [25] where we derived text corpus … 5.1.2 Corpus. We derive our text and image corpus based on the Common Crawl dataset [25] that is publicly available on Amazon S3 …","url":["https://pdfs.semanticscholar.org/d2a5/815007832255a033759d25d771157ae9be16.pdf"]} |
|
{"year":"2017","title":"Multimodal sentiment analysis with word-level fusion and reinforcement learning","authors":["M Chen, S Wang, PP Liang, T Baltrušaitis, A Zadeh… - Proceedings of the 19th …, 2017"],"snippet":"… For text inputs, we use pre-trained word embeddings (glove.840B.300d) [19] to convert the transcripts of videos in the CMU-MOSI dataset into word vectors. This is a 300 dimensional word embedding trained on 840 billion tokens from the common crawl dataset …","url":["http://dl.acm.org/citation.cfm?id=3136755.3136801"]} |
|
{"year":"2017","title":"Multiple Turn Comprehension for the Bi-Directional Attention Flow Model","authors":["T Liu"],"snippet":"... The word embedding layer converts each word in the context and question into a dense vector word representation. We use the pre-trained GloVe (Pennington et al., 2014) vectors for this layer, in particular the Common Crawl 840B tokens, 300d vectors. ...","url":["http://web.stanford.edu/class/cs224n/reports/2761890.pdf"]} |
|
{"year":"2017","title":"Named Entity Recognition in Twitter using Images and Text","authors":["D Esteves, R Peres, J Lehmann, G Napolitano"],"snippet":"... A disadvantage when using web search engines is that they are not open and free. This can be circumvented by indexing and searching on other large sources of information, such as Common Crawl and Flickr11. ... 11 http://commoncrawl.org/ and https://www.flickr.com/ Page 7. 7 ...","url":["https://www.researchgate.net/profile/Diego_Esteves/publication/317721565_Named_Entity_Recognition_in_Twitter_using_Images_and_Text/links/594a85dda6fdcc89090cb5f5/Named-Entity-Recognition-in-Twitter-using-Images-and-Text.pdf"]} |
|
{"year":"2017","title":"Native Language Identification from i-vectors and Speech Transcriptions","authors":["B Ulmer, A Zhao, N Walsh"],"snippet":"... word (Pennington et al., 2014). The GloVe embeddings of words came from the Common Crawl 42B to- kens collection, and the 300 dimensional embeddings were used (Pennington et al., 2014). If no corresponding GloVe ...","url":["http://web.stanford.edu/class/cs224s/reports/Ben_Ulmer.pdf"]} |
|
{"year":"2017","title":"Natural Language Question-Answering using Deep Learning","authors":["B Liu, F Lyu, R Roy"],"snippet":"... We experimented with both fixed 193 CommonCrawl.840B.300d pretrained word vectors and GLoVE.6B.100d pretrained word 194 vectors (Pennington, Socher, & Manning, 2015) 195 We enforce a fixed question length of 22 words, and fixed context length of 300 words. ...","url":["https://pdfs.semanticscholar.org/505a/ed7c751eb57bf5e59ab1cedc49448376b7d5.pdf"]} |
|
{"year":"2017","title":"Neural Lie Detection with the CSC Deceptive Speech Dataset","authors":["S Desai, M Siegelman, Z Maurer"],"snippet":"... Each acoustic feature frame was 34 dimensional and each speaker-dependent frame was 68 dimensional. Lexical features were encoded using GloVe Wikipedia and CommonCrawl 100-dimensional embeddings[9] based on the transcripts provided with the dataset. ...","url":["http://web.stanford.edu/class/cs224s/reports/Shloka_Desai.pdf"]} |
|
{"year":"2017","title":"Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search","authors":["L Dahlmann, E Matusov, P Petrushkov, S Khadivi - arXiv preprint arXiv:1708.03271, 2017"],"snippet":"... For development and test sets, two reference translations are used. The German→English system is trained on parallel corpora provided for the constrained WMT 2017 evaluation (Europarl, Common Crawl, and others). We ...","url":["https://arxiv.org/pdf/1708.03271"]} |
|
{"year":"2017","title":"Neural Machine Translation Training in a Multi-Domain Scenario","authors":["H Sajjad, N Durrani, F Dalvi, Y Belinkov, S Vogel - arXiv preprint arXiv:1708.08712, 2017","HSNDF Dalvi, Y Belinkov, S Vogel"],"snippet":"... For German-English, we use the Europarl (EP), and the Common Crawl (CC) corpora made available for the 1st Conference on Statistical Machine Translation2 as out- of-domain corpus. ... EP = Europarl, CC = Common Crawl, UN = United Nations. ...","url":["https://arxiv.org/pdf/1708.08712","https://www.researchgate.net/profile/Nadir_Durrani/publication/319349687_Neural_Machine_Translation_Training_in_a_Multi-Domain_Scenario/links/59d0f2a3aca2721f43673f75/Neural-Machine-Translation-Training-in-a-Multi-Domain-Scenario.pdf"]} |
|
{"year":"2017","title":"Neural Machine Translation with LSTM's","authors":["J Dhaliwal"],"snippet":"... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 - 109 parallel corpus (81M) ... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 pc30109 parallel corpus (81M) ...","url":["https://people.umass.edu/~jdhaliwal/files/s2s.pdf"]} |
|
{"year":"2017","title":"Neural Networks and Spelling Features for Native Language Identification","authors":["J Bjerva, G Grigonyte, R Ostling, B Plank - Bronze Sponsors, 2017"],"snippet":"... PoS tags are represented by 64-dimensional embeddings, initialised randomly; word tokens by 300-dimensional embeddings, initialised with GloVe (Pennington et al., 2014) embeddings trained on 840 billion words of English web data from the Common Crawl project. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=255"]} |
|
{"year":"2017","title":"Neural vs. Phrase-Based Machine Translation in a Multi-Domain Scenario","authors":["MA Farajian, M Turchi, M Negri, N Bertoldi, M Federico - EACL 2017, 2017"],"snippet":"... K PHP 38.4 K 259.0 K 9.7 K Ubuntu 9.0 K 47.7 K 8.6 K UN-TM 40.3 K 913.8 K 12.5 K CommonCrawl 2.6 M ... in particular NMT solutions, we used CommonCrawl and Europarl corpora as out-domain data in addition to the above-mentioned domain-specific corpora, resulting in ...","url":["http://www.aclweb.org/anthology/E/E17/E17-2.pdf#page=312"]} |
|
{"year":"2017","title":"New Word Pair Level Embeddings to Improve Word Pair Similarity","authors":["A Shaukat, N Khan"],"snippet":"... Many previous approaches present embeddings for individual words [14, 15, 16, 27] using their distributional semantics (Common Crawl corpus1) and structured knowledge from ConceptNet and PPDB [31]. ... Figure 1 shows 1 http://commoncrawl.org/ ...","url":["http://faculty.pucit.edu.pk/nazarkhan/work/wps/wpe_icdar_wml17.pdf"]} |
|
{"year":"2017","title":"NewsQA: A Machine Comprehension Dataset","authors":["A Trischler, T Wang, X Yuan, J Harris, A Sordoni…"],"snippet":"... Both mLSTM and BARB are implemented with the Keras framework (Chollet, 2015) using the Theano (Bergstra et al., 2010) backend. Word embeddings are initialized using GloVe vectors (Pennington et al., 2014) pre-trained on the 840-billion Common Crawl corpus. ...","url":["https://www.openreview.net/pdf?id=ry3iBFqgl"]} |
|
{"year":"2017","title":"NoSQL Web Crawler Application","authors":["GC Deka - Advances in Computers, 2017"],"snippet":"With the advent of Web technology, the Web is full of unstructured data called Big Data. However, these data are not easy to collect, access, and process at lar.","url":["http://www.sciencedirect.com/science/article/pii/S0065245817300323"]} |
|
{"year":"2017","title":"Novel Ranking-Based Lexical Similarity Measure for Word Embedding","authors":["J Dutkiewicz, C Jędrzejek - arXiv preprint arXiv:1712.08439, 2017"],"snippet":"… 4.1 Experimental setup We use the unmodified vector space model trained on 840 billion words from Common Crawl data with the GloVe algorithm introduced in Pennington et al. (2014). The model consists of 2.2 million unique vectors; Each vector consists of 300 components …","url":["https://arxiv.org/pdf/1712.08439"]} |
|
{"year":"2017","title":"NRC Machine Translation System for WMT 2017","authors":["C Lo, S Larkin, B Chen, D Stewart, C Cherry, R Kuhn… - WMT 2017, 2017"],"snippet":"... 2 Russian-English news translation We used all the Russian-English parallel corpora available for the constrained news translation task. They include the CommonCrawl corpus, the NewsCommentary v12 corpus, the Yandex corpus and the Wikipedia headlines corpus. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=354"]} |
|
{"year":"2017","title":"On the Effective Use of Pretraining for Natural Language Inference","authors":["I Cases, MT Luong, C Potts - arXiv preprint arXiv:1710.02076, 2017"],"snippet":"... a 1We used the publicly released embeddings, trained with Common Crawl 840B tokens for GloVe (http://nlp.stanford.edu/projects/glove/) and Google News 42B for word2vec https://code.google.com/archive/p/word2vec/. Although ...","url":["https://arxiv.org/pdf/1710.02076"]} |
|
{"year":"2017","title":"Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks","authors":["N Reimers, I Gurevych - arXiv preprint arXiv:1707.06799, 2017","NRI Gurevych"],"snippet":"... (2014) trained either on Wikipedia 2014 + Gigaword 5 (about 6 billion tokens) or on Common Crawl (about 840 billion tokens), and the Komninos and Manandhar (2016) embeddings trained on the Wikipedia August 2015 dump (about 2 billion tokens). ...","url":["https://arxiv.org/pdf/1707.06799","https://www.arxiv-vanity.com/papers/1707.06799v2/"]} |
|
{"year":"2017","title":"Parallel Training Data Selection for Conversational Machine Translation","authors":["X Niu, M Carpuat"],"snippet":"... Corpus # Sentences # Words (en/fr) OpenSubtitles 33.5 M 284.0 M / 268.3 M MultiUN 13.2 M 367.1 M / 432.3 M Common Crawl 3.2 M 81.1 M / 91.3 M Europarl v7 2.0 M 55.7 M / 61.9 M Wikipedia 396 k 9.7 M / 8.7 M TED corpus 207 k 4.5 M / 4.8 M News Commentary v10 199 k ...","url":["https://pdfs.semanticscholar.org/fdf6/ae86229f51893dd6e33579511489af4a5eb7.pdf"]} |
|
{"year":"2017","title":"Passfault: an Open Source Tool for Measuring Password Complexity and Strength","authors":["BA Rodrigues, JRB Paiva, VM Gomes, C Morris…"],"snippet":"... Wikipedia: The full text of Wikipedia in 2015. • Reddit: The corpus of Reddit comments through May 2015. • CCrawl: Text extracted from the Common Crawl and language-detected with cld2. Page 6. ACKNOWLEDGMENTS ...","url":["https://www.owasp.org/images/1/13/Artigo-Passfault.pdf"]} |
|
{"year":"2017","title":"Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation","authors":["H Kim, JH Lee, SH Na - WMT 2017, 2017"],"snippet":"... allel corpora including the Europarl corpus, common crawl corpus, news commentary, rapid corpus of EU press releases for the WMT17 translation task, and src-pe (source sentences-their target post-editions) pairs for the WMT17 QE task. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=586"]} |
|
{"year":"2017","title":"Predictor-Estimator: Neural Quality Estimation Based on Target Word Prediction for Machine Translation","authors":["H Kim, HY Jung, H Kwon, JH Lee, SH Na - ACM Transactions on Asian and Low- …, 2017"],"snippet":"... For training the word predictor, two parallel datasets of different sizes were used: a small dataset consisting of only the Europarl corpus (Koehn 2005) and a large dataset consisting of the Europarl corpus, common crawl corpus, and news commentary, which were provided for ...","url":["http://dl.acm.org/citation.cfm?id=3109480"]} |
|
{"year":"2017","title":"Probabilistic Relation Induction in Vector Space Embeddings","authors":["Z Bouraoui, S Jameel, S Schockaert - arXiv preprint arXiv:1708.06266, 2017"],"snippet":"... data set (SG-GN). We also use two embeddings that have been learned with GloVe, one from the same Wikipedia dump (GloVe-Wiki) and one from the 840B words Common Crawl data set (GloVe-CC). For relations with at ...","url":["https://arxiv.org/pdf/1708.06266"]} |
|
{"year":"2017","title":"Proposal for Automatic Extraction of Taxonomic Relations in Domain Corpus","authors":["HRL Chavez, MT Vidal - Advances in Pattern Recognition"],"snippet":"… His methodology is based on two sources of evidence, substring matches and Hearst patterns. They analyze all Wikipedia in search of the Hearst patterns and extract those relationships and make use of another corpus like GigaWord, ukWac and CommonCrawl. 30 …","url":["http://www.rcs.cic.ipn.mx/rcs/2017_133/Proposal%20for%20Automatic%20Extraction%20of%20Taxonomic%20Relations%20in%20Domain%20Corpus.pdf"]} |
|
{"year":"2017","title":"ProvDS: Uncertain Provenance Management over Incomplete Linked Data Streams","authors":["Q Liu"],"snippet":"... These datasets will be used to evaluate our provenance computation over incomplete Linked Data Streams techniques. • The Web Data Commons project extracts structured data from the Common Crawl, the largest web corpus available to the public. ...","url":["https://iswc2017.semanticweb.org/wp-content/uploads/papers/DC/paper_2.pdf"]} |
|
{"year":"2017","title":"Pushing the Limits of Paraphrastic Sentence Embeddings with Millions of Machine Translations","authors":["J Wieting, K Gimpel - arXiv preprint arXiv:1711.05732, 2017"],"snippet":"… The model was trained on this data along with data from some smaller Czech sources (160k from common-crawl, 650k from Europarl and 190k from News) … We compare with Europarl, Common-crawl, and News for Czech …","url":["https://arxiv.org/pdf/1711.05732"]} |
|
{"year":"2017","title":"Quantext: Analysing student responses to short-answer questions","authors":["J McDonald, ACM Moskal"],"snippet":"… 1 Similarity is calculated from a word2vec model of word embeddings using the GloVe algorithm (Pennington, Socher & Manning, 2014) and is pre-trained on the Common Crawl Corpus (Spiegler, 2013) … 1532-1543 Spiegler, S (2013) Statistics of the Common Crawl Corpus …","url":["https://www.researchgate.net/profile/Adon_Moskal/publication/321266093_Quantext_Analysing_student_responses_to_short-answer_questions/links/5a179890a6fdcc50ade61806/Quantext-Analysing-student-responses-to-short-answer-questions.pdf"]} |
|
{"year":"2017","title":"Question Answering on SQuAD","authors":["C Yang, H Ishfaq"],"snippet":"... Then we use word embeddings from GloVe[6] to map words into embedding vectors. To decrease the out of vocabulary (OOV) error, we use the Common Crawl 840B 300d GloVe vectors. Words not found in GloVe are initialized randomly. ...","url":["https://web.stanford.edu/class/cs224n/reports/2749099.pdf"]} |
|
{"year":"2017","title":"Question Answering on the SQuAD Dataset","authors":["DH Park, V Lakshman"],"snippet":"... Initially, we used 100-dimensional word embeddings pretrained on the Wikipedia corpus to train our model before fine-tuning our system by switching to 300-dimensional GloVe vectors trained on the Common Crawl corpus. ...","url":["https://web.stanford.edu/class/cs224n/reports/2761899.pdf"]} |
|
{"year":"2017","title":"Question Answering with Multi-Perspective Context Matching","authors":["J Asperger"],"snippet":"... The word-level embeddings were taken from GloVe vectors that were pre-trained on the 840-billion-word Common Crawl Corpus. ... For my word representations, I used 300-dimensional GloVe vectors trained on the 840 billion word Common Crawl Corpus. ...","url":["https://pdfs.semanticscholar.org/599f/376502c61550fdd37011e0cb7157d281b493.pdf"]} |
|
year2017titleReading Comprehension on the SQuAD DatasetauthorsFNU Budiantosnippet... The Glove version used is the Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors). Figure 1: Histogram of context length, question length, and answer length in the training set. 2 Page 3. ... I use the Glove840B300d Common Crawl for the word embedding layer. ...urlhttps://web.stanford.edu/class/cs224n/reports/2762006.pdf |
|
year2017titleRecurrent neural networks with specialized word embeddings for health-domain named-entity recognitionauthorsIJ Unanue, EZ Borzeshi, M Piccardi - arXiv preprint arXiv:1706.09569, 2017snippet... Therefore, the training of the word embeddings only requires large, general-purpose text corpora such as Wikipedia (400K unique words) or Common Crawl (2.2M unique words), without the need for any manual annotation. ...urlhttps://arxiv.org/pdf/1706.09569 |
|
year2017titleRegularizing neural networks by penalizing confident output distributionsauthorsG Pereyra, G Tucker, J Chorowski, Ł Kaiser, G Hinton - arXiv preprint arXiv: …, 2017snippet... 535–541. ACM, 2006. Christian Buck, Kenneth Heafield, and Bas Van Ooyen. N-gram counts and language models from the common crawl. In LREC, volume 2, pp. 4. Citeseer, 2014. 8 Page 9. Under review as a conference paper at ICLR 2017 ...urlhttps://arxiv.org/pdf/1701.06548 |
|
year2017titleReinvestigating the Classification Approach to the Article and Preposition Error CorrectionauthorsR Grundkiewicz, M Junczys-Dowmuntsnippet... Other than that, default options were used. We learnt word vectors from 75 millions of English sentences extracted from Common Crawl data4. ... 3 https://code.google.com/p/word2vec/ 4 https://commoncrawl.org/ 5 http://www.comp.nus.edu.sg/~nlp/conll14st.html Page 6. ...urlhttp://www.research.ed.ac.uk/portal/files/40342436/ltc_073_grundkiewicz_2.pdf |
|
year2017titleReport on the 2nd Workshop on Managing the Evolution and Preservation of the Data Web (MEPDaW 2016)authorsJ Debattista, JD Fernández, J Umbrichsnippet... 1Slides of the talk: https://aic.ai.wu.ac.at/ polleres/presentations/20160530Keynote-MEPDaW2016. pdf 2http://commoncrawl.org/ 3http://internetmemory.org/ 4https://archive.org/index.php 5http://swse.deri.org/dyldo/ ACM SIGIR Forum 84 Vol. 50 No. 2 December 2016 Page 4. ...urlhttps://pdfs.semanticscholar.org/c1eb/93952ed5cc4bda08bdd75bed84332656d864.pdf |
|
year2017titleReporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence TaggingauthorsN Reimers, I Gurevych - arXiv preprint arXiv:1707.09861, 2017snippet... (2014) trained either on Wikipedia 2014 + Gigaword 5 (GloVe1 with 100 dimensions and GloVe2 with 300 dimensions) or on Common Crawl (GloVe3), and the Komninos and Manandhar (2016) embeddings (Komn.)10. We also evaluate the approach of Bojanowski et al. ...urlhttps://arxiv.org/pdf/1707.09861 |
|
year2017titleRepresentation Stability as a Regularizer for Improved Text Analytics Transfer LearningauthorsM Riemer, E Khabiri, R Goodwin - arXiv preprint arXiv:1704.03617, 2017snippet... Our GRU model was fed a sequence of fixed 300 dimensional Glove vectors (Pennington et al., 2014), representing words based on analysis of 840 billion words from a common crawl of the internet, as the input xt for all tasks. ...urlhttps://arxiv.org/pdf/1704.03617 |
|
year2017titleRepresenting Sentences as Low-Rank SubspacesauthorsJ Mu, S Bhat, P Viswanath - arXiv preprint arXiv:1704.05358, 2017snippet... Due to the widespread use of word2vec and GloVe, we use their publicly available word representations – word2vec(Mikolov et al., 2013) trained us- ing Google News1 and GloVe (Pennington et al., 2014) trained using Common Crawl2 – to test our observations. ...urlhttps://arxiv.org/pdf/1704.05358 |
|
year2017titleRetrieval, Crawling and Fusion of Entity-centric Data on the WebauthorsS Dietzesnippet... Page 8. linked data world. However, the question to what extent this is due to the se- lective content of the Common Crawl or representative for schema.org adoption on the Web in general requires additional investigations. (a ...urlhttps://www.researchgate.net/profile/Stefan_Dietze/publication/312490472_Retrieval_Crawling_and_Fusion_of_Entity-centric_Data_on_the_Web/links/587e683808aed3826af45f18.pdf |
|
year2017titleRule-based spreadsheet data transformation from arbitrary to relational tablesauthorsAO Shigarov, AA Mikhailov - Information Systems, 2017snippet... These include about 50% of tables presented in 0.4M spreadsheets of ClueWeb09 Crawl 1 [5] and 147M (61%) of 233M web tables extracted from Common Crawl 2 [3]. They lack explicit semantics required for computer programs to interpret their layout and content. ...urlhttp://www.sciencedirect.com/science/article/pii/S0306437917304301 |
|
year2017titleS3C: An Architecture for Space-Efficient Semantic Search over Encrypted Data in the CloudauthorsJ Woodworth, MA Salehi, V Raghavansnippet... To evaluate our system under Big data scale datasets, we utilized a second dataset, the Common Crawl Corpus from AWS, a web crawl composed of over five billion web pages We evaluated our system against the RFC using three types of metrics: Performance, Overhead ...urlhttp://hpcclab.org/paperPdf/bigdata16/bigdata16.pdf |
|
year2017titleScattertext: a Browser-Based Tool for Visualizing how Corpora DifferauthorsJS Kessler - arXiv preprint arXiv:1703.00565, 2017snippetPage 1. Scattertext: a Browser-Based Tool for Visualizing how Corpora Differ Jason S. Kessler CDK Global [email protected] Abstract Scattertext is an open source tool for visualizing linguistic variation between document categories in a language-independent way. ...urlhttps://arxiv.org/pdf/1703.00565 |
|
year2017titleScientific Literature Text Mining and the Case for Open AccessauthorsG Sarmasnippet… science and society of scientific literature text mining. We need a scientific analogue to CommonCrawl, an open respository of scientific articles for use in exploratory data analysis. Ironically, this argument is not new, and indeed …urlhttps://www.tjoe.org/pub/scientific-literature-text-mining-and-the-case-for-open-access |
|
year2017titleSecure Semantic Search Over Encrypted Big Data in the CloudauthorsJW Woodworth - 2017snippetPage 1. Secure Semantic Search Over Encrypted Big Data in the Cloud A Dissertation Presented to the Graduate Faculty of the University of Louisiana at Lafayette In Partial Fulfillment of the Requirements for the Degree Master's of Science Jason W. Woodworth Spring 2017 ...urlhttp://hpcclab.org/theses/jasonwoodworth17.pdf |
|
{"year":"2017","title":"SEF@ UHH at SemEval-2017 Task 1: Unsupervised knowledge-free semantic textual similarity via paragraph vector","authors":["MS Duma, W Menzel - Proceedings of SemEval-2017. http://www. aclweb. org/ …, 2017"],"snippet":"... Track / Corpora AR-AR AR-EN ES-ES ES-EN EN-EN TR-EN Commoncrawl - - 1.84M - 2.39M - Wikipedia 151K 151K - 1.81M - 160K TED 152K 152K - 157K - 137K MultiUN 1M 1M - - - - EUBookshop - - - - - 23K SETIMES - - - - - 207K Tatoeba - - - - - 156K SNLI* - 150K - 150K ...","url":["https://www.aclweb.org/anthology/S/S17/S17-2024.pdf"]} |
|
{"year":"2017","title":"Selective Decoding for Cross-lingual Open Information Extraction","authors":["S Zhang, K Duh, B Van Durme"],"snippet":"... The word embedding size is 300 for input tokens on both the encoder side and the decoder side. We use open-source GloVe vectors (Pennington et al., 2014) trained on Common Crawl 840B with 300 dimensions6 to initialize the word embeddings on the decoder side. ...","url":["https://www.cs.jhu.edu/~s.zhang/assets/pdf/selective-decoding.pdf"]} |
|
{"year":"2017","title":"Semantic Specialisation of Distributional Word Vector Spaces using Monolingual and Cross-Lingual Constraints","authors":["N Mrkšić, I Vulić, DÓ Séaghdha, I Leviant, R Reichart… - arXiv preprint arXiv: …, 2017"],"snippet":"... The first four languages are those of the Multilingual SimLex-999 dataset. For the four SimLex languages, we employ four well-known, high-quality word vector collections: a) The Common Crawl GloVe English vectors from Pennington et al. ...","url":["https://arxiv.org/pdf/1706.00374"]} |
|
{"year":"2017","title":"Semantic vector evaluation and human performance on a new vocabulary MCQ test","authors":["JP Levy, JA Bullinaria, S McCormick"],"snippet":"... The 42B and 840B vectors were generated from 42 billion and 840 billion word corpora derived from Common Crawl archives (obtained by an automated process of systematically browsing the web). All the GloVe vectors used here have 300 dimensions. ...","url":["https://pdfs.semanticscholar.org/6506/d7783d2297f70c15a8caa07f022c36dfb168.pdf"]} |
|
{"year":"2017","title":"Semantic-based Analysis of Javadoc Comments","authors":["A Blasi, K Kuznetsov, A Goffi, SD Castellanos, A Gorla…"],"snippet":"... In our preliminary tests we found that the publicly available pre-trained word vectors of the GloVe model based on Common Crawl dataset2 already produce good results, as they identify relations such as: “if vertex exists” ; graph.containsVertex(v) and “if the graph contains the ...","url":["http://sattose.wdfiles.com/local--files/2017:schedule/SATToSE_2017_paper_24.pdf"]} |
|
{"year":"2017","title":"Semantics derived automatically from language corpora contain human-like biases","authors":["A Caliskan, JJ Bryson, A Narayanan - Science, 2017"],"snippet":"... We used the largest of the four corpora provided—the “Common Crawl” corpus obtained from a large-scale crawl of the Internet, containing 840 billion tokens (roughly, words). Tokens in this corpus are case sensitive, resulting in 2.2 million different ones. ...","url":["http://science.sciencemag.org/content/356/6334/183.abstract"]} |
|
{"year":"2017","title":"Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation","authors":["D Cer, M Diab, E Agirre, I Lopez-Gazpio, L Specia - Proceedings of the 11th …, 2017"],"snippet":"Page 1. Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 1–14, Vancouver, Canada, August 3 - 4, 2017. cO2017 Association for Computational Linguistics SemEval-2017 Task ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval001.pdf"]} |
|
{"year":"2017","title":"Sentence Embedding for Neural Machine Translation Domain Adaptation","authors":["R Wang, A Finch, M Utiyama, E Sumita"],"snippet":"... Out- of-domain corpora contained Common Crawl, Europarl v7, News Commentary v10 and United Nation (UN) EN-FR parallel corpora.4 • NIST 2006 Chinese (ZH) to English corpus 5 was used as the in-domain training corpus, following the settings of (Wang et al., 2014). ...","url":["https://www.aclweb.org/anthology/P/P17/P17-2089.pdf"]} |
|
{"year":"2017","title":"SentiHeros at SemEval-2017 Task 5: An application of Sentiment Analysis on Financial Tweets","authors":["N Tabari, A Seyeditabari, W Zadrozny"],"snippet":"... In two separate experiments, we used vectors based on the Common Crawl (840B tokens, 2.2M vo- cab, cased, 300 dimensions), and the pre-trained word vectors for Twitter (2B tweets, 27B tokens, 1.2M vocab, 200 dimensions). ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval146.pdf"]} |
|
{"year":"2017","title":"Shallow reading with Deep Learning: Predicting popularity of online content using only its title","authors":["K Marasek, P Rokita","W Stokowiec, T Trzcinski, K Wolk, K Marasek, P Rokita - arXiv preprint arXiv: …, 2017"],"snippet":"... As a text embedding in our experiments, we use publicly available GloVe word vectors [12] pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC)7. Since their output dimensionality can be modified, we show the results for varying ...","url":["http://ii.pw.edu.pl/~ttrzcins/papers/ISMIS_2017_paper_57.pdf","https://arxiv.org/pdf/1707.06806"]} |
|
{"year":"2017","title":"Simple Dynamic Coattention Networks","authors":["W Wu"],"snippet":"... unk〉. This affected the accuracy of predicted answers, as seen from Table 3. To reduced the number of unknown words, the Common Crawl GloVe vectors, which has a larger vocabulary, should be used instead. Document ...","url":["https://pdfs.semanticscholar.org/6a79/6c1c9c30913cb24d64939f90dcb06fa82be7.pdf"]} |
|
{"year":"2017","title":"Six Challenges for Neural Machine Translation","authors":["P Koehn, R Knowles - arXiv preprint arXiv:1706.03872, 2017"],"snippet":"... BLEU scores of 34.5 on the WMT 2016 news test set (for the NMT model, this reflects the BLEU score re- sulting from translation with a beam size of 1). We use a single corpus for computing our lexical frequency counts (a concatenation of Common Crawl, Europarl, and News ...","url":["https://arxiv.org/pdf/1706.03872"]} |
|
{"year":"2017","title":"Sockeye: A Toolkit for Neural Machine Translation","authors":["F Hieber, T Domhan, M Denkowski, D Vilar, A Sokolov… - arXiv preprint arXiv …, 2017"],"snippet":"… 9 Page 10. EN→DE LV→EN Dataset Sentences Tokens Types Sentences Tokens Types Europarl v7/v8 1,905,421 91,658,252 862,710 637,687 27,256,803 437,914 Common Crawl 2,394,616 97,473,856 3,655,645 - - - News Comm. v12 270,088 11,990,594 460,220 …","url":["https://arxiv.org/pdf/1712.05690"]} |
|
{"year":"2017","title":"Specialising Word Vectors for Lexical Entailment","authors":["I Vulić, N Mrkšić - arXiv preprint arXiv:1710.06371, 2017"],"snippet":"... experiment with a variety of well-known, publicly available English word vectors: 1) Skip-Gram with Negative Sampling (SGNS) (Mikolov et al., 2013) trained on the Polyglot Wikipedia (Al-Rfou et al., 2013) by Levy and Goldberg (2014); 2) GLOVE Common Crawl (Pennington et ...","url":["https://arxiv.org/pdf/1710.06371"]} |
|
{"year":"2017","title":"SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering","authors":["L Xu, W Dou, C Gao, J Wang, J Wei, H Zhong, T Huang"],"snippet":"Page 1. SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering Liang Xu1,2, Wensheng Dou1*, Chushu Gao1, Jie Wang1,2, Jun Wei1,2, Hua Zhong1, Tao Huang1 1State Key Laboratory of ...","url":["http://www.tcse.cn/~wsdou/papers/2017-msr-spreadcluster.pdf"]} |
|
{"year":"2017","title":"SQuAD Question Answering using Multi-Perspective Matching","authors":["Z Maurer, S Desai, S Usmani"],"snippet":"... in some cases. In terms of future work to improve on our models, we can use 840B Common Crawl GloVe word vectors rather than the Glove word vectors pretrained on Wikipedia 2014 and Gigaword5. Given additional computational ...","url":["https://pdfs.semanticscholar.org/3b1a/a646bdc6daab268f6763b829686b00263333.pdf"]} |
|
{"year":"2017","title":"Story Cloze Ending Selection Baselines and Data Examination","authors":["M Armstrong","T Mihaylov, A Frank - arXiv preprint arXiv:1703.04330, 2017"],"snippet":"Our contribution is that we set a new baseline for the task, showing that a simple linear model based on distributed representations and semantic similarity features achieves state-of-the-art results. We also evaluate the ability of different embedding …","url":["https://arxiv.org/pdf/1703.04330","https://zdoc.pub/story-cloze-ending-selection-baselines-and-data-examination.html"]} |
|
{"year":"2017","title":"Stronger Baselines for Trustable Results in Neural Machine Translation","authors":["M Denkowski, G Neubig - arXiv preprint arXiv:1706.09733, 2017"],"snippet":"... Scenario Size (sent) Sources WMT German-English 4,562,102 Europarl, Common Crawl, news commentary WMT English-Finnish 2,079,842 Europarl, Wikipedia titles WMT Romanian-English 612,422 Europarl, SETimes IWSLT English-French 220,400 TED talks IWSLT Czech ...","url":["https://arxiv.org/pdf/1706.09733"]} |
|
{"year":"2017","title":"Structured Attention Networks","authors":["Y Kim, C Denton, L Hoang, AM Rush - arXiv preprint arXiv:1702.00887, 2017"],"snippet":"Page 1. Under review as a conference paper at ICLR 2017 STRUCTURED ATTENTION NETWORKS Yoon Kim∗ Carl Denton∗ Luong Hoang Alexander M. Rush {yoonkim@seas,carldenton@college,lhoang@g,srush@seas ...","url":["https://arxiv.org/pdf/1702.00887"]} |
|
{"year":"2017","title":"Supervised Learning of Universal Sentence Representations from Natural Language Inference Data","authors":["A Conneau, D Kiela, H Schwenk, L Barrault, A Bordes - arXiv preprint arXiv: …, 2017"],"snippet":"... 512 hidden units. We use opensource GloVe vectors trained on Common Crawl 840B2 with 300 dimensions as fixed word embeddings and initialize other word vectors to random values sampled from U(-0.1,0.1). Input sen ...","url":["https://arxiv.org/pdf/1705.02364"]} |
|
{"year":"2017","title":"SVD-Softmax: Fast Softmax Approximation on Large Vocabulary Neural Networks","authors":["K Shim, M Lee, I Choi, Y Boo, W Sung - Advances in Neural Information Processing …, 2017"],"snippet":"… 5, pp. 79–86. [29] Common Crawl Foundation, “Common crawl,” http://commoncrawl.org, 2016, Accessed: 2017-04-11. [30] Jorg Tiedemann, “Parallel data, tools and interfaces in OPUS,” in LREC, 2012, vol. 2012, pp. 2214–2218. 10 Page 11 …","url":["http://papers.nips.cc/paper/7130-svd-softmax-fast-softmax-approximation-on-large-vocabulary-neural-networks.pdf"]} |
|
{"year":"2017","title":"SwissLink: High-Precision, Context-Free Entity Linking Exploiting Unambiguous Labels","authors":["R Prokofyev, M Luggen, DE Difallah, P Cudré-Mauroux - 2017"],"snippet":"… In order to understand how annotations are used on the Web, we crawled all entity links found on two large datasets, by processing the CommonCrawl 3 and the Wikipedia dumps 4. The output of our processing is a list of all words and phrases that were used as anchors in …","url":["https://exascale.info/assets/pdf/swisslink-semantics2017.pdf"]} |
|
{"year":"2017","title":"Syntax-Directed Attention for Neural Machine Translation","authors":["K Chen, R Wang, M Utiyama, E Sumita, T Zhao - arXiv preprint arXiv:1711.04231, 2017"],"snippet":"… 4.1 Data sets The proposed methods were evaluated on two data sets. • For English (EN) to German (DE) translation task, 4.43 million bilingual sentence pairs of the WMT'14 data set was used as the training data, including Common Crawl, News Commentary and Europarl v7 …","url":["https://arxiv.org/pdf/1711.04231"]} |
|
year2017titleSYSTRAN Purely Neural MT Engines for WMT2017authorsY Deng, J Kim, G Klein, C Kobus, N Segal, C Servan… - WMT 2017, 2017snippet... 3.1 Corpora We used the parallel corpora made available for the shared task: Europarl v7, Common Crawl corpus, News Commentary v12 and Rapid corpus of EU press releases. Both English and German texts were preprocessed with standard tokenisation tools. ...urlhttp://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=289 |
|
year2017titleTable Identification and Reconstruction in SpreadsheetsauthorsE Koci, M Thiele, O Romero, W Lehner - International Conference on Advanced …, 2017snippet... This corpus is of a particular interest, since it provides access to real-world business spreadsheets used in industry. The third corpus is FUSE [3] that contains 249, 376 unique spreadsheets, extracted from Common Crawl 6 . ...urlhttp://link.springer.com/chapter/10.1007/978-3-319-59536-8_33 |
|
year2017titleTagging Patient Notes With ICD-9 CodesauthorsS Ayyarsnippet... For every word we obtained pretrained word vectors from Glove (Common Crawl 840 billion tokens, 2.2 million vocab of dimension size 300)[7]. Since our text consists of translated text from clinical notes, there are several misrepresentations or errors in spellings of words ...urlhttps://web.stanford.edu/class/cs224n/reports/2744196.pdf |
|
year2017titleTaking into account Inter-sentence Similarity for Update SummarizationauthorsG de Chalendar, O Ferret - Proceedings of the Eighth International Joint …, 2017snippet… MCL-GLOVE-ICSISumm. In this run, we used 2.2 million word vectors (300 dimensions) trained with GloVe (Pennington et al., 2014) on the 840 billion tokens from the Common Crawl repository. • MCL-ConceptNet-ICSISumm …urlhttp://www.aclweb.org/anthology/I17-2035 |
|
year2017titleTaxonomy Induction using Hypernym SubsequencesauthorsA Gupta, R Lebret, H Harkous, K Aberer - arXiv preprint arXiv:1704.07626, 2017snippet... A prominent ex- ample of such a resource is WebIsA [Seitner et al., 2016], a collection of more than 400 million hypernymy relations for English, extracted from the CommonCrawl web corpus using lexico-syntactic patterns. However ...urlhttps://arxiv.org/pdf/1704.07626 |
|
year2017titleTGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question AnsweringauthorsY Jang, Y Song, Y Yu, Y Kim, G Kim - arXiv preprint arXiv:1704.04497, 2017snippet... We then generate multiple choice options for each QA pair, selecting four phrases from our dataset. Specifically, we represent all verbs in our dictionary as a 300D vector using the GloVe word embedding [26] pretrained on the Common Crawl dataset. ...urlhttps://arxiv.org/pdf/1704.04497 |
|
year2017titleThe AFRL-MITLL WMT17 Systems: Old, New, Borrowed, BLEUauthorsJ Gwinnup, T Anderson, G Erdmann, K Young, M Kazi… - WMT 2017, 2017snippet... 2.1 Data Used We utilized all available data sources provided for the language pairs we participated in, including the Commoncrawl (Smith et ... For Russian we conducted monolingual selection from provided Common Crawl, to match test sets from 2012-2016 (15K lines total). ...urlhttp://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=327 |
|
year2017titleThe Effect of Translationese on Tuning for Statistical Machine TranslationauthorsS Stymnesnippet... For training we used Europarl and News commentary, provided by WMT, with a total of over 2M segments for German and French and .77M for Czech. For English→German we used additional data: bilingual Common Crawl (1.5M) and monolingual News (83M). ...urlhttp://www.ep.liu.se/ecp/131/030/ecp17131030.pdf |
|
year2017titleThe Helsinki Neural Machine Translation SystemauthorsR Östling, Y Scherrer, J Tiedemann, G Tang… - arXiv preprint arXiv: …, 2017snippet... Another common outcome in SMT is the strong impact of language models. We can confirm this once again. Adding a second language model trained on common-crawl data (CC) has a strong influence on translation quality as we can see by the BLEU scores in Table 5. ...urlhttps://arxiv.org/pdf/1708.05942 |
|
year2017titleThe HIT-SCIR System for End-to-End Parsing of Universal DependenciesauthorsW Che, J Guo, Y Wang, B Zheng, H Zhao, Y Liu… - CoNLL 2017, 2017snippet... 4.1. 2 Data and Tools We use the provided 100-dimensional multilingual word embeddings5 in our tokenization, POS tagging and parsing models, and use the Wikipedia and CommonCrawl data for training Brown clusters. The number of clusters is set to 256. ...urlhttps://www.aclweb.org/anthology/K/K17/K17-3.pdf#page=64 |
|
year2017titleThe Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2017authorsNQ Pham, J Niehues, TL Ha, E Cho, M Sperber… - WMT 2017, 2017snippet... 2.1 German↔ English As parallel data for our German↔ English systems, we used Europarl v7 (EPPS), News Commentary v12 (NC), Rapid corpus of EU press releases, Common Crawl corpus, and simulated data. Except ...urlhttp://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=390 |
|
year2017titleThe RWTH Aachen University English-German and German-English Machine Translation System for WMT 2017authorsJT Peter, A Guta, T Alkhouli, P Bahar, J Rosendahl…snippet... Both models are trained on all monolingual corpora, except the commoncrawl corpus, and the target side of the bilingual data (Section 4.2), which sums up to 365.44M sentences and 7230.15M running words, respectively. ...urlhttps://www-i6.informatik.rwth-aachen.de/publications/download/1048/PeterJan-ThorstenGutaAndreasAlkhouliTamerBaharParniaRosendahlJanRossenbachNickGra%E7aMiguelNeyHermann--TheRWTHAachenUniversityEnglish-GermanGerman-EnglishMachineTranslationSystemforWMT2017--2017.pdf |
|
year2017titleThe TALP-UPC Neural Machine Translation System for German/Finnish-English Using the Inverse Direction Model in RescoringauthorsC Escolano, MR Costa-jussà, JAR Fonollosa - … of the Second Conference on Machine …, 2017snippet... 4.1 Data and Preprocess For the three language pairs that we experimented with, we used all data parallel data available in the evaluation1. For German-English, we used: europarl v.7, news commentary v.12, common crawl and rapid corpus of EU press releases. ...urlhttp://www.aclweb.org/anthology/W17-4725 |
|
year2017titleThe UMD Machine Translation Systems at IWSLT 2016: English-to-French Translation of Speech TranscriptsauthorsX Niu, M Carpuat - Proceedings of the ninth International Workshop on …, 2016snippet... Corpus # Sentences # Words (en/fr) OpenSubtitles 33.5 M 284.0 M / 268.3 M MultiUN 13.2 M 367.1 M / 432.3 M Common Crawl 3.2 M 81.1 M / 91.3 M Europarl v7 2.0 M 55.7 M / 61.9 M Wikipedia 396 k 9.7 M / 8.7 M TED corpus 207 k 4.5 M / 4.8 M News Commentary v10 199 k ...urlhttp://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_26.pdf |
|
year2017titleThe UMD Neural Machine Translation Systems [at WMT17 Bandit Learning TaskauthorsA Sharaf, S Feng, K Nguyen, K Brantley, H Daumé III - arXiv preprint arXiv: …, 2017snippet... sider 40k sentences). Using this monolingual data, we use data selection on a large corpus of parallel out-of-domain data (Europarl, NewsCommentary, CommonCrawl, Rapid) to seed an initial translation model. Overall, the ...urlhttps://arxiv.org/pdf/1708.01318 |
|
year2017titleThe University of Edinburgh's Neural MT Systems for WMT17authorsR Sennrich, A Birch, A Currey, U Germann, B Haddow… - arXiv preprint arXiv: …, 2017snippet... the whole of CzEng 1.6pre (Bojar et al., 2016), plus the latest WMT releases of Europarl, News-commentary and CommonCrawl... We use the following resources from the WMT parallel data: News Commentary v12, Common Crawl, Yandex Corpus and UN Parallel Corpus V1.0 ...urlhttps://arxiv.org/pdf/1708.00726 |
|
year2017titleThe University of Edinburgh's systems submission to the MT task at IWSLTauthorsM Junczys-Dowmunt, A Birch - Proceedings of the ninth International Workshop on …, 2016snippet... Commoncrawl [3] 2.3M 3.2M Europarl v7 [4] 1.9M 2.0M Giga Fr-En [3] – 22.5M News Commentary v11 [3] 0.2M 0.2M Opensubtitles 2016 [5] 13.4M 33.5M ... [7] C. Buck, K. Heafield, and B. van Ooyen, “N-gram counts and language models from the common crawl,” in Proceedings ...urlhttp://workshop2016.iwslt.org/downloads/IWSLT_2016_paper_27.pdf |
|
year2017titleThe Web Data Commons Structured Data ExtractionauthorsA Primpeli, R Meusel, C Bizer, H Stuckenschmidt - 2017snippet... for the year 2016. The Web Data Commons project extracts structured data from the web corpus provided by Common Crawl, the largest public web corpus, and offers the extracted data for public download. In order to process ...urlhttp://archiv.ub.uni-heidelberg.de/volltextserver/22891/ |
|
year2017titleTo Parse or Not to Parse: An Experimental Comparison of RNTNs and CNNs for Sentiment AnalysisauthorsZ Ahmadi, A Stier, M Skowron, S Kramersnippet... On other datasets, we use the model trained on the web data from Common Crawl which contains a case-sensitive vocabulary of size 2.2 million. In all the experiments, the size of the word vector, the minibatch and the epochs were set to 25, 20 and 100, respectively. ...urlhttp://ceur-ws.org/Vol-1874/paper_1.pdf |
|
year2017titleTO THE METHODOLOGY OF CORPUS CONSTRUCTION FOR MACHINE LEARNING:“TAIGA” SYNTAX TREE CORPUS AND PARSERauthorsTO Shavrina, O Shapovalova - КОРПУСНАЯ ЛИНГВИСТИКА–2017snippet… For Russian language, large collections of web corpora are assembled—projects like RuTenTen and Aranea Russicum, which are not available for downloading, unlike resources based on Common Crawl, but all these corpora are crawled from an unbalanced set of links and …urlhttps://dspace.spbu.ru/bitstream/11701/8786/1/%D0%9A%D0%BE%D1%80%D0%BF%D1%83%D1%81%D0%BD%D0%B0%D1%8F%20%D0%BB%D0%B8%D0%BD%D0%B3%D0%B2%D0%B8%D1%81%D1%82%D0%B8%D0%BA%D0%B0-2017%20%28%D1%82%D1%80%D1%83%D0%B4%D1%8B%20%D0%BC%D0%B5%D0%B6%D0%B4.%20%D0%BA%D0%BE%D0%BD%D1%84%D0%B5%D1%80.%29.pdf#page=78 |
|
year2017titleTopics in Data Science/Өгөгдлийн шинжлэх ухаанauthorsR Womack - 2017snippetPage 1. Topics in Data Science / Өгөгдлийн шинжлэх ухаан Rutgers University has made this article freely available. Please share how this access benefits you. Your story matters. [https://rucore.libraries.rutgers.edu/rutgers-lib/52378/story/] …urlhttps://rucore.libraries.rutgers.edu/rutgers-lib/52378/PDF/1/ |
|
year2017titleToward intrusion detection using belief decision trees for big dataauthorsI Boukhris, Z Elouedi, M Ajabi - Knowledge and Information Systems, 2017snippetPage 1. Knowl Inf Syst DOI 10.1007/s10115-017-1034-4 REGULAR PAPER Toward intrusion detection using belief decision trees for big data Imen Boukhris1 · Zied Elouedi1 · Mariem Ajabi1 Received: 3 December 2015 / Accepted ...urlhttp://link.springer.com/article/10.1007/s10115-017-1034-4 |
|
year2017titleTowards Accurate Duplicate Bug Retrieval Using Deep Learning TechniquesauthorsJ Deshmukh, S Podder, S Sengupta, N Dubash - Software Maintenance and …, 2017snippet… Each word in the dictionary was then mapped to its corresponding embedding. We experimented with both GloVe vectors trained3 on Common Crawl dataset as well as Word2Vec vectors trained4 on Google news dataset. We …urlhttp://ieeexplore.ieee.org/abstract/document/8094414/ |
|
year2017titleTowards Automatic Identification of Fake News: Headline-Article Stance Detection with LSTM Attention ModelsauthorsS Chopra, S Jain, JM Sholar - 2017snippet... 3 Page 4. of Wikipedia and Common Crawl. We further created a randomly initialized UNK vector of zeros, for words that were not found in the GloVe set. 5.4 LSTM Attention Architectures 5.4.1 Conditionally Encoded (CE) LSTMs ...urlhttps://pdfs.semanticscholar.org/eecc/5781c826a0af8229b8a24a6fca3d3e48b0fa.pdf |
|
year2017titleTowards Automatically Evaluating Security Risks and Providing Cyber IntelligenceauthorsX Liao - 2017snippetPage 1. TOWARDS AUTOMATICALLY EVALUATING SECURITY RISKS AND PROVIDING CYBER INTELLIGENCE A Thesis Presented to The Academic Faculty by Xiaojing Liao In Partial Fulfillment of the Requirements for ...urlhttps://smartech.gatech.edu/bitstream/handle/1853/58679/LIAO-DISSERTATION-2017.pdf?sequence=1&isAllowed=y |
|
year2017titleTowards Document-Level Neural Machine TranslationauthorsL Miculicich Werlen - 2017snippetPage 1. TROPE R HCRAESE R PAID I TOWARDS DOCUMENT-LEVEL NEURAL MACHINE TRANSLATION Lesly Miculicich Werlen Idiap-RR-25-2017 SEPTEMBER 2017 Centre du Parc, Rue Marconi 19, PO Box 592, CH ...urlhttps://infoscience.epfl.ch/record/231129/files/MiculicichWerlen_Idiap-RR-25-2017.pdf |
|
year2017titleTowards Semantic Query SegmentationauthorsA Kale, T Taula, S Hewavitharana, A Srivastava - arXiv preprint arXiv:1707.07835, 2017snippet... estimators. This process was repeated with pretrained GloVe vectors on common crawl [14] and facebook fasttext [2] pretrained model over Wikipedia corpus with 2.5M word vocabulary. 2 shows the experiment results. We ...urlhttps://arxiv.org/pdf/1707.07835 |
|
year2017titleTowards the ImageNet-CNN of NLP: Pretraining Sentence Encoders with Machine TranslationauthorsB McCann, J Bradbury, C Xiong, R Socher - Advances in Neural Information …, 2017snippet… When training an MT-LSTM, we used fixed 300-dimensional word vectors. We used the CommonCrawl-840B GloVe model for English word vectors, which were completely fixed during training, so that the MT-LSTM had to learn how to use the pretrained vectors for translation …urlhttp://papers.nips.cc/paper/7209-towards-the-imagenet-cnn-of-nlp-pretraining-sentence-encoders-with-machine-translation.pdf |
|
year2017titleTraininG towards a society of data-saVvy inforMation prOfessionals to enable open leadership INnovationauthorsT Blume, F Böschen, L Galke, A Saleh, A Scherp - 2017snippetPage 1. Deliverable 3.1: Technologies for MOVING data processing and visualisation v1.0 Till Blume, Falk Böschen, Lukas Galke, Ahmed Saleh, Ansgar Scherp, Matthias Schulte-Althoff/ZBW Chrysa Collyda, Vasileios Mezaris, Alexandros Pournaras, Christos Tzelepis/CERTH ...urlhttp://moving-project.eu/wp-content/uploads/2017/04/moving_d3.1_v1.0.pdf |
|
{"year":"2017","title":"Translation Quality and Productivity: A Study on Rich Morphology Languages","authors":["L Specia, K Harris, F Blain, A Burchardt, V Macketanz…"],"snippet":"... This process resulted in: • EN–DE: Over 20 million generic and in-domain sentence pairs obtained by merging the datasets available in the OPUS (Tiedemann, 2012), TAUS, WMT and JRC 3 repositories (eg Europarl, CDEP, CommonCrawl, etc.); ...","url":["https://fredblain.org/papers/pdf/specia_et_al_2017_translation_quality_and_productivity.pdf"]} |
|
{"year":"2017","title":"Translation Quality Estimation Using only bilingual Corpora","authors":["L Liu, A Fujita, M Utiyama, A Finch, E Sumita - IEEE/ACM Transactions on Audio, …, 2017"],"snippet":"... languages. As the bilingual corpora for conducting M2LE training, we employed Europarl and Common Crawl provided by WMT13 for the WMT15 and WMT14 tasks and a Japanese–Chinese bilingual corpus [9] for the JA2ZH task. ...","url":["http://ieeexplore.ieee.org/abstract/document/7949019/"]} |
|
{"year":"2017","title":"TSP: Learning Task-Specific Pivots for Unsupervised Domain Adaptation","authors":["X Cui, F Coenen, D Bollegala"],"snippet":"... We use the publicly available D = 300 dimensional GloVe4 (trained using 42B tokens from the Common Crawl) and CBOW5 (trained using 100B tokens from Google News) embeddings as the word representations required by TSP. ...","url":["https://cgi.csc.liv.ac.uk/~danushka/papers/Xia_ECML_2017.pdf"]} |
|
{"year":"2017","title":"Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension","authors":["D Golub, PS Huang, X He, L Deng - arXiv preprint arXiv:1706.09789, 2017"],"snippet":"... We initialize word-embeddings for the BIDAF model, answer synthesis module, and question synthesis module with 300-dimensional GloVe vectors (Pennington et al., 2014) trained on the 840B Common Crawl corpus. We set all embeddings of unknown word tokens to zero. ...","url":["https://arxiv.org/pdf/1706.09789"]} |
|
{"year":"2017","title":"Two-Step MT: Predicting Target Morphology","authors":["F Burlot, E Knyazeva, T Lavergne, F Yvon - 2016"],"snippet":"... from TED training set Full TED set (117k) + QED (242k) + europarl (885k) + news-commentary (1M) Monolingual data (various subsets ranging from 5M to 200M): Target side of the biggest parallel corpus Czeng-1.6-pre subtitles news corpora (WMT'16) common-crawl (WMT'16 ...","url":["http://workshop2016.iwslt.org/downloads/IWSLT16_Burlot.pdf"]} |
|
{"year":"2017","title":"Unbounded cache model for online language modeling with open vocabulary","authors":["E Grave, M Cisse, A Joulin - arXiv preprint arXiv:1711.02604, 2017"],"snippet":"... In the following, we refer to this dataset as commentary. • Common Crawl is a text dataset collected from diverse web sources. The dataset is shuffled at the sentence level. ... [9] C. Buck, K. Heafield, and B. van Ooyen. N-gram counts and language models from the common crawl ...","url":["https://arxiv.org/pdf/1711.02604"]} |
|
{"year":"2017","title":"Understanding and Predicting the Usefulness of Yelp Reviews","authors":["DZ Liu"],"snippet":"... I concatenate output from both RNNs to make the final prediction. (figure 1) [1] https://www.yelp.com/dataset_challenge [2] Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors): glove.42B.300d.zip from http://nlp.stanford.edu/projects/glove/ Page 4. ...","url":["https://web.stanford.edu/class/cs224n/reports/2760995.pdf"]} |
|
{"year":"2017","title":"Understanding Regional Context of World Wide Web using Common Crawl Corpus","authors":["MA Mehmood, HM Shafiq, A Waheed"],"snippet":"Abstract—The World Wide Web has emerged as the most important and essential tool for the society. Today, people heavily rely on rich resources available in the web for communication, business, maps, and social networking etc. In addition, people seek web ...","url":["https://www.researchgate.net/profile/Amir_Mehmood/publication/321489200_Understanding_Regional_Context_of_World_Wide_Web_using_Common_Crawl_Corpus/links/5a251abaaca2727dd87e780a/Understanding-Regional-Context-of-World-Wide-Web-using-Common-Crawl-Corpus.pdf"]} |
|
{"year":"2017","title":"Understanding Spreadsheet Evolution in Practice","authors":["L Xu - Software Maintenance and Evolution (ICSME), 2017 …, 2017"],"snippet":"… IEEE International Conference on Software Engineering (ICSE), 2015, pp. 7–16. [28] “Common crawl data on AWS.” [Online]. Available: http://aws.amazon.com/datasets/41740. [29] C. Chambers, M. Erwig, and M. Luckey, “SheetDiff …","url":["http://ieeexplore.ieee.org/abstract/document/8094479/"]} |
|
{"year":"2017","title":"Unsupervised Neural Machine Translation","authors":["M Artetxe, G Labaka, E Agirre, K Cho - arXiv preprint arXiv:1710.11041, 2017"],"snippet":"... For that purpose, we used the combination of all parallel corpora provided at WMT 2014, which comprise Europarl, Common Crawl and News Commentary for both language pairs plus the UN and the Gigaword corpus for French–English. ...","url":["https://arxiv.org/pdf/1710.11041"]} |
|
{"year":"2017","title":"Using Distributional Semantics for Automatic Taxonomy Induction","authors":["B Zafar, M Cochez, U Qamar"],"snippet":"... system. They used general and domain specific corpora such as GigaWord, ukWac etc. and the common crawl to extract lexico-syntactic patterns. Additionally, they applied pruning methods to refine the generated taxonomy. ...","url":["http://users.jyu.fi/~miselico/papers/distributional-semantics-taxonomy.pdf"]} |
|
{"year":"2017","title":"Using images to improve machine-translating e-commerce product listings","authors":["I Calixto, D Stein, E Matusov, P Lohar, S Castilho… - EACL 2017, 2017"],"snippet":"... Table 2 we show the number of running words as well as the perplexity scores obtained with LMs trained on three sets of different German corpora: the Multi30k, eBay's in-domain data and a concatenation of the WMT 20152 Europarl (Koehn, 2005), Common Crawl and News ...","url":["https://www.aclweb.org/anthology/E/E17/E17-2.pdf#page=669"]} |
|
{"year":"2017","title":"Using Recurrent Neural Network to Predict The Usefulness of Yelp Reviews","authors":["DZ Liu, G Singh"],"snippet":"... The frequency of alternation is a hyper-parameter Figure 2: MTL RNN structure with detailed input and output description [2] Common Crawl (42B tokens, 1.9M vocab, uncased, 300d vectors): glove.42B.300d.zip from http://nlp.stanford.edu/projects/glove/ Page 4. ...","url":["https://web.stanford.edu/class/cs221/2017/restricted/p-final/dzliu/final.pdf"]} |
|
{"year":"2017","title":"UWat-Emote at EmoInt-2017: Emotion Intensity Detection using Affect Clues, Sentiment Polarity and Word Embeddings","authors":["V John, O Vechtomova - EMNLP 2017, 2017"],"snippet":"... GloVe Model-Tweets (GV-T), Wikipedia + Gigaword (GV-WG), Common Crawl 42B tokens (GV-CC1), Common Crawl 840B tokens (GV-CC2): GloVe is similar to Word2Vec, in that it obtains dense vector representations of words. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-52.pdf#page=265"]} |
|
{"year":"2017","title":"Variable length word encodings for neural translation models","authors":["J Gao"],"snippet":"Page 1. Variable length word encodings for neural translation models Jiameng Gao Department of Engineering University of Cambridge This dissertation is submitted for the degree of Master of Philosophy Peterhouse August 11, 2016 Page 2. Page 3. Page 4. Page 5. ...","url":["http://www.mlsalt.eng.cam.ac.uk/foswiki/pub/Main/CurrentMPhils/Jiameng_Gao_8224881_assignsubmission_file_J_Gao_MPhil_dissertation.pdf"]} |
|
{"year":"2017","title":"VecShare: A Framework for Sharing Word Representation Vectors","authors":["J Fernandez, Z Yu, D Downey"],"snippet":"... we utilize three sets of GloVe embeddings (Pennington et al., 2014): wik+, 100-dimensional embeddings trained on six billion tokens of Wikipedia and the Gigaword corpus; web, 300-dimensional embeddings trained on 42 billion tokens of the Common Crawl Web dataset ...","url":["http://www.cs.northwestern.edu/~ddowney/publications/vecshare_fernandez_2017.pdf"]} |
|
{"year":"2017","title":"Vector Space Representations in Information Retrieval","authors":["V Novotný"],"snippet":"Page 1. Masaryk University Faculty of Informatics Vector Space Representations in Information Retrieval Master's Thesis Vít Novotný Brno, Fall 2017 Page 2. Page 3. Masaryk University Faculty of Informatics Vector Space Representations in Information Retrieval Master's Thesis …","url":["https://is.muni.cz/th/409729/fi_m/main.pdf"]} |
|
{"year":"2017","title":"Visual Exploration of High-Dimensional Spaces Through Identification, Summarization, and Interpretation of Two-Dimensional Projections","authors":["S Liu - 2017"],"snippet":"Visual Exploration of High-Dimensional Spaces Through Identification, Summarization, and Interpretation of Two-Dimensional Projections. Abstract. With the ever-increasing amount of available computing resources and sensing ...","url":["http://search.proquest.com/openview/521292ce267e4e2b78aa24b8452c5a8d/1?pq-origsite=gscholar&cbl=18750&diss=y"]} |
|
{"year":"2017","title":"Visually Grounded Word Embeddings and Richer Visual Features for Improving Multimodal Neural Machine Translation","authors":["JB Delbrouck, S Dupont, O Seddati - arXiv preprint arXiv:1707.01009, 2017"],"snippet":"... to. As previously mentioned, the textual representation lwi is obtained with the word embeddings algorithm Glove. We use the pre-trained model on the Common Crawl corpus consisting of 840B tokens and a 2.2M words. The ...","url":["https://arxiv.org/pdf/1707.01009"]} |
|
{"year":"2017","title":"Web-scale profiling of semantic annotations in HTML pages","authors":["R Meusel - 2017"],"snippet":"... In 2012, the Common Crawl Foundation (CC)8 started to continuously release crawled web corpora of a decent size and made them publicly available. Each of the corpora contains several tera-bytes of compressed HTML pages. ...","url":["https://ub-madoc.bib.uni-mannheim.de/41884/1/thesis_final_rm_20170322-1.pdf"]} |
|
{"year":"2017","title":"Web-Scale Web Table to Knowledge Base Matching","authors":["D Ritze - 2017"],"snippet":"… 43 4.2.1 Common Crawl … Page 20. 12 CHAPTER 1. INTRODUCTION 1.4 Published Work Parts of the work presented in this thesis have been published previously: • The extraction of the WDC Web Table Corpus from the Common Crawl …","url":["https://ub-madoc.bib.uni-mannheim.de/43123/1/thesis.pdf"]} |
|
{"year":"2017","title":"What's good for the goose is good for the GANder","authors":["C Hung, B Corcoran"],"snippet":"... To reduce the percentage of unknown words, we additionally brought down the size of our vocabulary to contain only the 10k most commonly used words in the training set; and used GloVe vectors (Pennington et al., 2014), pretrained on Common Crawl (having around 42B ...","url":["https://web.stanford.edu/class/cs224n/reports/2761035.pdf"]} |
|
{"year":"2017","title":"Word Embeddings for Practical Information Retrieval","authors":["L Galke, A Saleh, A Scherp - INFORMATIK 2017, 2017"],"snippet":"... 2 zbw.eu/stw 3 A dataset of crawled web data from https://commoncrawl.org/ Word Embeddings for Similarity Scoring in Practical Information Retrieval 2161 Page 8. ...","url":["https://dl.gi.de/bitstream/handle/20.500.12116/3987/B29-2.pdf?sequence=1"]} |
|
{"year":"2017","title":"Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes","authors":["N Garg, L Schiebinger, D Jurafsky, J Zou - arXiv preprint arXiv:1711.08412, 2017"],"snippet":"… nearly identical correlation. We further validate this association using different embeddings trained on Wikipedia and Common Crawl texts instead of Google News; see Appendix Section B.1 for details. Google News embedding …","url":["https://arxiv.org/pdf/1711.08412"]} |
|
{"year":"2017","title":"Word Re-Embedding via Manifold Dimensionality Retention","authors":["S Hasan, E Curry - Proceedings of the 2017 Conference on Empirical …, 2017"],"snippet":"... Original Embedding Spaces. The original word embeddings used are pre-trained GloVe models: Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, 50d, 100d, 200d, & 300d vectors), and Common Crawl (42B tokens, 1.9M vocab, 300d vectors) (Pennington et al., 2014b). ...","url":["http://www.aclweb.org/anthology/D17-1033"]} |
|
{"year":"2017","title":"Word vectors, reuse, and replicability: Towards a community repository of large-text resources","authors":["M Fares, A Kutuzov, S Oepen, E Velldal"],"snippet":"... Moreover, with an accuracy of 83.08 for the semantic analogies, the GloVe model trained on the lemmatized version of Wikipedia outperforms the GloVe model trained on 42 billion tokens of web data from the Common Crawl reported in (Pennington et al., 2014), which at an ...","url":["http://www.ep.liu.se/ecp/131/037/ecp17131037.pdf"]} |
|
|