{"year":"2021","title":"'I'm just feeling like it'. On the relationship between the use of the progressive and sentiment polarity in Italian","authors":["L Viola"],"snippet":"… art transformer-based machine learning model for emotion and sentiment classification in Italian which employs the Italian BERT model UmBERTo trained on Commoncrawl ITA (Parisi, Francia, and Magnani [2020] 2021). For …","url":["https://www.uib.no/sites/w3.uib.no/files/attachments/viola.pdf"]} {"year":"2021","title":"4. Unlocking value from AI in financial services: strategic and organizational tradeoffs vs. media narratives","authors":["G Lanzolla, S Santoni, C Tucci - Artificial Intelligence for Sustainable Value Creation, 2021"],"snippet":"Page 87. 4. Unlocking value from AI in financial services: strategic and organizational tradeoffs vs. media narratives Gianvito Lanzolla, Simone Santoni and Christopher Tucci 1. INTRODUCTION In 1955, McCarthy wrote that …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=_9BCEAAAQBAJ&oi=fnd&pg=PA70&dq=commoncrawl&ots=Z-4LjY9D6U&sig=BHpJ4i9Wq18ZWZDIWoGm5BnHqSY"]} {"year":"2021","title":"6 Data Collection and Representation for Similar Languages, Varieties and Dialects","authors":["T Samardžic, N Ljubešic - Similar Languages, Varieties, and Dialects: A …, 2021"],"snippet":"… Page 146. Data Collection and Representation for Similar Languages 127 Another project that should be mentioned in this brief overview is the CommonCrawl, a project performing crawls over the whole internet for textual data since 2013 with regular data updates …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=hhA5EAAAQBAJ&oi=fnd&pg=PA121&dq=commoncrawl&ots=2XimIiF4W6&sig=XlIzLoiwAxuodhBmJeC_iS9BSeg"]} {"year":"2021","title":"\" Short is the Road that Leads from Fear to Hate\": Fear Speech in Indian WhatsApp Groups","authors":["P Saha, B Mathew, K Garimella, A Mukherjee - arXiv preprint arXiv:2102.03870, 2021"],"snippet":"Page 1. 
“Short is the Road that Leads from Fear to Hate”: Fear Speech in Indian WhatsApp Groups Punyajoy Saha punyajoys@iitkgp.ac.in Indian Institute of Technology Kharagpur, West Bengal, India Binny Mathew binnymathew …","url":["https://arxiv.org/pdf/2102.03870"]} {"year":"2021","title":"A common framework for quantifying the learnability of nouns and verbs","authors":["Y Zhou, D Yurovsky - Proceedings of the Annual Meeting of the Cognitive …, 2021"],"snippet":"… We used pre-trained 300-dimensional semantic vectors derived from the Common Crawl corpus composed of 840 billion tokens and 2.2 million words. For our analysis, we considered only the words that corresponded to the relevant 434 images. Procedure …","url":["https://escholarship.org/content/qt8dn6k82j/qt8dn6k82j.pdf"]} {"year":"2021","title":"A Comparative Study on Word Embeddings in Deep Learning for Text Classification","authors":["C Wang, P Nulty, D Lillis"],"snippet":"… 3https://nlp.stanford.edu/projects/glove/ 4https://commoncrawl.org/ 5https://fasttext.cc/ 6https://allennlp.org/elmo 7We additionally experimented from the fourth-to-last (-4) layer to the last layer … 300s refers to the GloVe …","url":["https://lill.is/pubs/Wang2020a.pdf"]} {"year":"2021","title":"A Comparison Framework for Product Matching Algorithms","authors":["J Foxcroft - 2021"],"snippet":"Page 1. A Comparison Framework for Product Matching Algorithms by Jeremy Foxcroft A Thesis presented to The University of Guelph In partial fulfilment of requirements for the degree of Master of Science in Computer Science Guelph, Ontario, Canada …","url":["https://atrium.lib.uoguelph.ca/xmlui/bitstream/handle/10214/26375/Foxcroft_Jeremy_202109_Msc.pdf?sequence=3"]} {"year":"2021","title":"A Comparison of Approaches to Document-level Machine Translation","authors":["Z Ma, S Edunov, M Auli - arXiv preprint arXiv:2101.11040, 2021"],"snippet":"… WMT17 English-German (en-de). For this benchmark, we follow the setup of Müller et al.
(2018) whose training data includes the Europarl, Common Crawl, News Commentary and Rapid corpora, totaling nearly 6M sentence pairs …","url":["https://arxiv.org/pdf/2101.11040"]} {"year":"2021","title":"A Comprehensive Assessment of Dialog Evaluation Metrics","authors":["YT Yeh, M Eskenazi, S Mehri - arXiv preprint arXiv:2106.03706, 2021"],"snippet":"… RoBERTa, which is employed in USR (Mehri and Eskenazi, 2020b), improves the training techniques in BERT and trains the model on a much larger corpus which includes the CommonCrawl News dataset (Mackenzie et al., 2020) and text extracted from Reddit …","url":["https://arxiv.org/pdf/2106.03706"]} {"year":"2021","title":"A Comprehensive Survey and Experimental Comparison of Graph-Based Approximate Nearest Neighbor Search","authors":["M Wang, X Xu, Q Yue, Y Wang - arXiv preprint arXiv:2101.12631, 2021"],"snippet":"Page 1. A Comprehensive Survey and Experimental Comparison of Graph-Based Approximate Nearest Neighbor Search Mengzhao Wang1, Xiaoliang Xu1, Qiang Yue1, Yuxiang Wang1,∗ 1Hangzhou Dianzi University …","url":["https://arxiv.org/pdf/2101.12631"]} {"year":"2021","title":"A Comprehensive Survey of Grammatical Error Correction","authors":["Y Wang, Y Wang, K Dang, J Liu, Z Liu - ACM Transactions on Intelligent Systems and …, 2021"],"snippet":"Grammatical error correction (GEC) is an important application aspect of natural language processing techniques, and GEC system is a kind of very important intelligent system that has long been explored both in academic and industrial …","url":["https://dl.acm.org/doi/abs/10.1145/3474840"]} {"year":"2021","title":"A Computational Framework for Slang Generation","authors":["Z Sun, R Zemel, Y Xu - arXiv preprint arXiv:2102.01826, 2021"],"snippet":"… To compare with and compute the baseline embedding methods M for definition sentences, we used 300-dimensional fastText embeddings (Bojanowski et al., 2017) pre-trained with subword information on 600 billion
…","url":["https://arxiv.org/pdf/2102.01826"]} {"year":"2021","title":"A coral-reef approach to extract information from HTML tables","authors":["P Jiménez Aguirre, JC Roldán Salvador… - Applied Soft Computing …, 2022","P Jiménez, JC Roldán, R Corchuelo - Applied Soft Computing, 2021"],"snippet":"… Unfortunately, a recent analysis of the 32.04 million domains in the November 2019 Common Crawl has revealed that only 11.92 million domains provide such semantic hints [10], which argues for a method to deal with the remaining 20.12 …","url":["https://idus.us.es/bitstream/handle/11441/131990/1/1-s2.0-S1568494621009029-main.pdf?sequence=1","https://www.sciencedirect.com/science/article/pii/S1568494621009029"]} {"year":"2021","title":"A COVID-19 news coverage mood map of Europe","authors":["F Robertson, J Lagus, K Kajava - Proceedings of the EACL Hackashop on News …, 2021"],"snippet":"… Newscrawl is a web crawl provided by the Common Crawl organisation which is updated more frequently and contains only data from news websites2. 
In order to keep the size of the corpus manageable and the extraction …","url":["https://www.aclweb.org/anthology/2021.hackashop-1.15.pdf"]} {"year":"2021","title":"A data quality approach to the identification of discrimination risk in automated decision making systems","authors":["A Vetrò, M Torchiano, M Mecati - Government Information Quarterly, 2021"],"snippet":"… Similarly, a scientific experiment on the search engine Common Crawl (De-Arteaga et al., 2019) revealed an unequal treatment due to gender imbalance in the input data (almost 400,000 biographies): authors compared …","url":["https://www.sciencedirect.com/science/article/pii/S0740624X21000551"]} {"year":"2021","title":"A data-centric review of deep transfer learning with applications to text data","authors":["S Bashath, N Perera, S Tripathi, K Manjang, M Dehmer… - Information Sciences, 2021"],"snippet":"Abstract In recent years, many applications are using various forms of deep learning models. Such methods are usually based on traditional learning paradigms requiring the consistency of properties among the feature spaces of the training and …","url":["https://www.sciencedirect.com/science/article/pii/S002002552101183X"]} {"year":"2021","title":"A deep learning-based bilingual Hindi and Punjabi named entity recognition system using enhanced word embeddings","authors":["A Goyal, V Gupta, M Kumar - Knowledge-Based Systems, 2021"],"snippet":"… Initially, we collect Facebook’s pre-trained FastText embeddings which are trained on Wikipedia and common crawl data with 300 dimensions for our Hindi and Punjabi datasets. 
But after experiments, we find many of the words in our dataset are …","url":["https://www.sciencedirect.com/science/article/pii/S0950705121008637"]} {"year":"2021","title":"A Framework for Generating Extractive Summary from Multiple Malayalam Documents","authors":["K Manju, S David Peter, SM Idicula - Information, 2021"],"snippet":"… Semantically similar words are mapped to nearby points in the vector space. In this work the vectorization of the terms in the document are performed using the pretrained word embedding model FastText for Malayalam, trained on Common Crawl and Wikipedia …","url":["https://www.mdpi.com/2078-2489/12/1/41/pdf"]} {"year":"2021","title":"A Framework for Quality Assessment of Semantic Annotations of Tabular Data","authors":["R Avogadro, M Cremaschi, E Jiménez-Ruiz, A Rula - International Semantic Web …, 2021"],"snippet":"… 1 Introduction. Much information is conveyed within tables. A prominent example is the large set of relational databases or tabular data present on the Web. To size the spread of tabular data, 2.5M tables have been …","url":["https://link.springer.com/chapter/10.1007/978-3-030-88361-4_31"]} {"year":"2021","title":"A Fusion Approach for Paper Submission Recommendation System","authors":["ST Huynh, N Dang, PT Huynh, DH Nguyen, BT Nguyen - International Conference on …, 2021"],"snippet":"… Finally, we use crawl-300d-2M 3 as the pre-train embedding matrix, which has 600 billion tokens and 2 million word vectors trained on Common Crawl. It can make using crawl-300d-2M more efficiently in vectorization. 
As depicted in Fig …","url":["https://link.springer.com/chapter/10.1007/978-3-030-79463-7_7"]} {"year":"2021","title":"A General Language Assistant as a Laboratory for Alignment","authors":["A Askell, Y Bai, A Chen, D Drain, D Ganguli… - arXiv preprint arXiv …, 2021"],"snippet":"… For language model pre-training, these models are trained for 400B tokens on a distribution consisting mostly of filtered common crawl … The natural language dataset was composed of 55% heavily filtered common crawl data (220B tokens), 32 …","url":["https://arxiv.org/pdf/2112.00861"]} {"year":"2021","title":"A Heuristic-driven Ensemble Framework for COVID-19 Fake News Detection","authors":["SD Das, A Basak, S Dutta - arXiv preprint arXiv:2101.03545, 2021"],"snippet":"… of model-specific special tokens. Each model also has its corresponding vocabulary associated with its tokenizer, trained on a large corpus data like GLUE, wikitext-103, CommonCrawl data etc. During training, each model …","url":["https://arxiv.org/pdf/2101.03545"]} {"year":"2021","title":"A Heuristic-driven Uncertainty based Ensemble Framework for Fake News Detection in Tweets and News Articles","authors":["SD Das, A Basak, S Dutta - arXiv preprint arXiv:2104.01791, 2021"],"snippet":"… Each model also has its corresponding vocabulary associated with its tokenizer, trained on a large corpus data like GLUE, wikitext-103, CommonCrawl data etc. 
During training, each model applies the tokenization …","url":["https://arxiv.org/pdf/2104.01791"]} {"year":"2021","title":"A Human Being Wrote This Law","authors":["AB Cyphert"],"snippet":"… GPT-3 had an impressively large data training set: it was trained on the Common Crawl dataset, a nearly trillion-word dataset,22 which includes everything from traditional news sites like the New York Times to sites like Reddit. The Common …","url":["https://lawreview.law.ucdavis.edu/issues/55/1/articles/files/55-1_Cyphert.pdf"]} {"year":"2021","title":"A Literature Survey of Recent Advances in Chatbots","authors":["G Caldarini, S Jaf, K McGarry - 2021"],"snippet":"… This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from transformers) [46] and GPT (Generative Pre-trained Transformer), which were trained with huge language datasets, such as Wikipedia …","url":["https://www.preprints.org/manuscript/202112.0265/download/final_file"]} {"year":"2021","title":"A Mechanism for Producing Aligned Latent Spaces with Autoencoders","authors":["S Jain, A Radhakrishnan, C Uhler - arXiv preprint arXiv:2106.15456, 2021"],"snippet":"… 6.1 Alignment of GloVe Embeddings In this section, we apply our theory to align semantic/syntactic directions in GloVe word embeddings [21]. We use 300 dimensional GloVe vectors that were trained on Common Crawl with 840 billion tokens …","url":["https://arxiv.org/pdf/2106.15456"]} {"year":"2021","title":"A Multi-Platform Analysis of Political News Discussion and Sharing on Web Communities","authors":["Y Wang, S Zannettou, J Blackburn, B Bradlyn… - arXiv preprint arXiv …, 2021"],"snippet":"… supported types of entities). The model relies on Convolutional Neural Networks (CNNs), trained on the OntoNotes dataset [90], as well as Glove vectors [62] trained on the Common Crawl dataset [17].
2.3 News Stories Identification …","url":["https://arxiv.org/pdf/2103.03631"]} {"year":"2021","title":"A Multi-Task Learning Model for Multidimensional Relevance Assessment","authors":["DGP Putri, M Viviani, G Pasi - International Conference of the Cross-Language …, 2021"],"snippet":"… 6 In particular, we focused on the ad-hoc retrieval subtask. The data consist of Web pages crawled by means of CommonCrawl, 7 related to the health-related domain. The data collections consider 50 topics/queries and associated documents …","url":["https://link.springer.com/chapter/10.1007/978-3-030-85251-1_9"]} {"year":"2021","title":"A Multifactorial Approach to Crosslinguistic Constituent Orderings","authors":["Z Liu"],"snippet":"… The data for training these LMs was taken from the raw data of the CoNLL 2017 Shared Task on multilingual parsing (Ginter et al. 2017), which contains texts from Common Crawl and Wikipedia. The architecture of the LM was the same for every language …","url":["https://www.researchgate.net/profile/Zoey-Liu/publication/354204297_A_Multifactorial_Approach_to_Crosslinguistic_Constituent_Orderings/links/612c0095c69a4e487967c628/A-Multifactorial-Approach-to-Crosslinguistic-Constituent-Orderings.pdf"]} {"year":"2021","title":"A Multitask Framework to Detect Depression, Sentiment and Multi-label Emotion from Suicide Notes","authors":["S Ghosh, A Ekbal, P Bhattacharyya - Cognitive Computation, 2021"],"snippet":"The significant rise in suicides is a major cause of concern in public health domain. 
Depression plays a major role in increasing suicide ideation among th.","url":["https://link.springer.com/article/10.1007/s12559-021-09828-7"]} {"year":"2021","title":"A Novel Corpus of Discourse Structure in Humans and Computers","authors":["B Hemmatian, S Feucht, R Avram, A Wey, M Garg… - arXiv preprint arXiv …, 2021"],"snippet":"We present a novel corpus of 445 human- and computer-generated documents, comprising about 27,000 clauses, annotated for semantic clause types and coherence relations that allow for nuanced comparison of artificial and natural …","url":["https://arxiv.org/pdf/2111.05940"]} {"year":"2021","title":"A novel fusion-based deep learning model for sentiment analysis of COVID-19 tweets","authors":["ME Basiri, S Nemati, M Abdar, S Asadi, UR Acharrya - Knowledge-Based Systems, 2021"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S0950705121005049"]} {"year":"2021","title":"A NOVEL TRILINGUAL DATASET FOR CRISIS NEWS CATEGORIZATION ACROSS LANGUAGES","authors":["K Kajava - 2021"],"snippet":"… Use Common Crawl instead • “the goal of democratizing access to web information by producing and maintaining an open repository of web crawl data that is universally accessible and analyzable” (https://commoncrawl.org/about/, accessed June 1 2021) …","url":["https://blogs.helsinki.fi/language-technology/files/2021/06/LT-Seminar-2021-06-03-Kaisla-Kajava.pdf"]} {"year":"2021","title":"A Primer on Pretrained Multilingual Language Models","authors":["S Doddapaneni, G Ramesh, A Kunchukuttan, P Kumar… - arXiv preprint arXiv …, 2021"],"snippet":"… and they differ in the architecture (eg, number of layers, parameters, etc), objective functions used for training (eg, monolingual masked language modeling objective, translation language modeling objective, etc), data used
…","url":["https://arxiv.org/pdf/2107.00676"]} {"year":"2021","title":"A Probing Task on Linguistic Properties of Korean Sentence Embedding","authors":["A Ahn, BI Ko, D Lee, G Han, M Shin, J Nam - Annual Conference on Human and …, 2021"],"snippet":"Abstract This study introduces a probing task for evaluating the linguistic properties captured in Korean sentence embeddings. A probing task is the problem of distinguishing the surface, syntactic, and semantic properties of a sentence from its embedding; English, Polish …","url":["https://www.koreascience.or.kr/article/CFKO202130060614813.pdf"]} {"year":"2021","title":"A Residual Network Architecture for Hindi NER using Fasttext and BERT embedding layers","authors":["R Shelke, S Vanjale"],"snippet":"… It provides word embedding for Hindi (and 157 other languages) and is based on the CBOW (Continuous Bag-of-Words) model. The CBOW model learns by predicting the current word based on its context, and it was trained on Common Crawl and Wikipedia …","url":["https://www.novyimir.net/gallery/nmrj%202867f.pdf"]} {"year":"2021","title":"A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models","authors":["F Alam, A Hasan, T Alam, A Khan, J Tajrin, N Khan… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. A Review of Bangla Natural Language Processing Tasks and the Utility of Transformer Models FIROJ ALAM, Qatar Computing Research Institute, HBKU, Qatar MD. ARID HASAN, Cognitive Insight Limited, Bangladesh …","url":["https://arxiv.org/pdf/2107.03844"]} {"year":"2021","title":"A Review of Public Datasets in Question Answering Research","authors":["BB Cambazoglu, M Sanderson, F Scholer, B Croft"],"snippet":"… matching the question. The sentences are selected based on their tf-idf similarity to the question. The underlying web page collection contains pages from the July 2018 archive of the Common Crawl web repository. Task.
Given a …","url":["http://www.sigir.org/wp-content/uploads/2020/12/p07.pdf"]} {"year":"2021","title":"A Semi-supervised Multi-task Learning Approach to Classify Customer Contact Intents","authors":["L Dong, MC Spencer, A Biagi"],"snippet":"… We note that this ALBERT model is trained as a multiclass classification with only positive cases. 3.2.2 SS MT D/TAPT ALBERT The pretrained language models are mostly trained on well-known corpora, such as Wikipedia, Common Crawl, BookCorpus, Reddit, etc …","url":["https://assets.amazon.science/79/22/d9237534448293405083a73b896d/a-semi-supervised-multi-task-learning-approach-to-classify-customer-contact-intents.pdf"]} {"year":"2021","title":"A Short Survey of LSTM Models for De-identification of Medical Free Text","authors":["JL Leevy, TM Khoshgoftaar - 2020 IEEE 6th International Conference on …, 2020"],"snippet":"… The training set was obtained from the 2014 i2b2 challenge, while the test set came from the University of Florida (UF) Health Integrated Data Repository 5. Word embeddings were sourced from GoogleNews [54] …","url":["https://ieeexplore.ieee.org/abstract/document/9319017/"]} {"year":"2021","title":"A Simple Post-Processing Technique for Improving Readability Assessment of Texts using Word Mover's Distance","authors":["JM Imperial, E Ong - arXiv preprint arXiv:2103.07277, 2021"],"snippet":"… technique described in Section 4. For the word embeddings of English, German, and Filipino needed for the technique, we downloaded the resources from the fastText website5. The word embeddings in various …","url":["https://arxiv.org/pdf/2103.07277"]} {"year":"2021","title":"A Simple Recipe for Multilingual Grammatical Error Correction","authors":["S Rothe, J Mallinson, E Malmi, S Krause, A Severyn - arXiv preprint arXiv:2106.03830, 2021"],"snippet":"… 2.1 mT5 Pre-training mT5 has been pre-trained on mC4 corpus, a subset of Common Crawl, covering 101 languages and composed of about 50 billion documents. 
For details on mC4, we refer the reader to the original paper (Xue et al., 2020) …","url":["https://arxiv.org/pdf/2106.03830"]} {"year":"2021","title":"A Spontaneous Stereotype Content Model: Taxonomy, Properties, and Prediction.","authors":["G Nicolas, X Bai, ST Fiske"],"snippet":"… model trained on the Common Crawl (600 billion words obtained from various internet sources) … the Common Crawl (600 billion words), a Glove model trained using around 840 billion words from the Common Crawl (Pennington, Socher, & Manning, 2014; …","url":["https://www.nicolaslab.org/publication/sscm/SSCM.pdf"]} {"year":"2021","title":"A Study of Analogical Density in Various Corpora at Various Granularity","authors":["R Fam, Y Lepage - Information, 2021"],"snippet":"In this paper, we inspect the theoretical problem of counting the number of analogies between sentences contained in a text. Based on this, we measure the analogical density of the text. We focus on analogy at the sentence level …","url":["https://www.mdpi.com/2078-2489/12/8/314/pdf"]} {"year":"2021","title":"A Study of Analogical Density in Various Corpora at Various Granularity. Information 2021, 12, 314","authors":["R Fam, Y Lepage - 2021"],"snippet":"… Table 4 shows the statistics of Multi30K corpus. • CommonCrawl (available at: commoncrawl.org accessed on 20 September 2020) is a crawled web archive and dataset … Table 5 shows the statistics on the CommonCrawl corpus …","url":["https://search.proquest.com/openview/208b192bc36d7c71728c73989c304dea/1?pq-origsite=gscholar&cbl=2032384"]} {"year":"2021","title":"A study on performance improvement considering the balance between corpus in Neural Machine Translation","authors":["C Park, K Park, H Moon, S Eo, H Lim - Journal of the Korea Convergence Society, 2021"],"snippet":"… 1. Concept of Corpus Weight Balance GPT3 is likewise trained by combining data such as Common Crawl, WebText2, Books1, Books2, and Wikipedia.
However, even though the corpora differ in their characteristics (tone, style, domain, etc.), as a single dataset …","url":["https://www.koreascience.or.kr/article/JAKO202116954598769.pdf"]} {"year":"2021","title":"A Survey of COVID-19 Misinformation: Datasets, Detection Techniques and Open Issues","authors":["AR Ullah, A Das, A Das, MA Kabir, K Shu - arXiv preprint arXiv:2110.00737, 2021"],"snippet":"Page 1. A Survey of COVID-19 Misinformation: Datasets, Detection Techniques and Open Issues AR Sana Ullaha, Anupam Dasa, Anik Dasb, Muhammad Ashad Kabirc,∗, Kai Shud aDepartment of Computer Science and Engineering …","url":["https://arxiv.org/pdf/2110.00737"]} {"year":"2021","title":"A Survey of Machine Learning-Based Solutions for Phishing Website Detection","authors":["L Tang, QH Mahmoud - Machine Learning and Knowledge Extraction, 2021"],"snippet":"With the development of the Internet, network security has aroused people's attention. It can be said that a secure network environment is a basis for the rapid and sound development of the Internet. Phishing is an essential class …","url":["https://www.mdpi.com/2504-4990/3/3/34/pdf"]} {"year":"2021","title":"A Survey of Recent Abstract Summarization Techniques","authors":["D Puspitaningrum - Proceedings of Sixth International Congress on …, 2021"],"snippet":"… For C4, taken from Common Crawl scrape from April 2019 and applied some cleansing filters, it results in a very clean 750GB text dataset of large pre-training datasets, more extensive than other pre-training datasets.
3.2 Pegasus-XSum (Pegasus) …","url":["https://hal.archives-ouvertes.fr/hal-03216381/document"]} {"year":"2021","title":"A Survey on Bias in Deep NLP","authors":["I Garrido-Muñoz, A Montejo-Ráez, F Martínez-Santiago… - 2021"],"snippet":"… 2016 [26] Gender Word2Vec, GloVe GoogleNews corpus (w2vNEWS), Common Crawl English Analogies/Cosine Similarity Vector Space Manipulation After - 2017 [10] Gender, Ethnicity GloVe, Word2Vec Common …","url":["https://www.preprints.org/manuscript/202103.0049/download/final_file"]} {"year":"2021","title":"A Survey on Data Augmentation for Text Classification","authors":["M Bayer, MA Kaufhold, C Reuter - arXiv preprint arXiv:2107.03158, 2021"],"snippet":"… CNN+LSTM/GRU HON RSN-1 RSN-2 Word2Vec Hate Speech FastText Wikipedia GoogleNews W2V GloVe Common Crawl GloVe Common Crawl GloVe Common Crawl -22.7 (Macro F1) +1.0 -3.3 +0.3 -0.2 0 [44] 1. Method …","url":["https://arxiv.org/pdf/2107.03158"]} {"year":"2021","title":"A Survey on Low-Resource Neural Machine Translation","authors":["R Wang, X Tan, R Luo, T Qin, TY Liu - arXiv preprint arXiv:2107.04239, 2021"],"snippet":"Page 1. 
A Survey on Low-Resource Neural Machine Translation Rui Wang, Xu Tan, Renqian Luo, Tao Qin and Tie-Yan Liu Microsoft Research Asia {ruiwa, xuta, t-reluo, taoqin, tyliu}@microsoft.com Abstract Neural approaches …","url":["https://arxiv.org/pdf/2107.04239"]} {"year":"2021","title":"A Survey On Neural Word Embeddings","authors":["E Sezerer, S Tekir - arXiv preprint arXiv:2110.01804, 2021"],"snippet":"… ivLBL/vLBL [95] 2013 100-600 Wiki LBL Performance - NCE [47] GloVe [109] 2014 300 Wiki, Gigaword, Commoncrawl LBL+coocurence Matrix Training - - DEPS [69] 2014 300 Wiki CBOW Training Stanford tagger[129] …","url":["https://arxiv.org/pdf/2110.01804"]} {"year":"2021","title":"A Survey on Statistical Approaches for Abstractive Summarization of Low Resource Language Documents","authors":["P Deshpande, S Jahirabadkar - Smart Trends in Computing and Communications, 2022"],"snippet":"… German Wiki data is used as real data and synthetic data is a common crawl data. Synthetic data is used to increase size of data. Three settings are considered for generation of summaries: (1) Transformer model using real data for training. …","url":["https://link.springer.com/chapter/10.1007/978-981-16-4016-2_69"]} {"year":"2021","title":"A Synthetic FACS Framework for Expanding Facial Expression Lexicons","authors":["C Butler - 2021"],"snippet":"Page 1. A Synthetic FACS Framework for Expanding Facial Expression Lexicons DISSERTATION Submitted in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY (Computer Science) at the …","url":["https://search.proquest.com/openview/ebf5a87df8275e6fc6fac0b1c0b21b44/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2021","title":"A system for proactive risk assessment of application changes in cloud operations","authors":["R Batta, L Shwartz, M Nidd, AP Azad, H Kumar - 2021 IEEE 14th International …, 2021"],"snippet":"Abstract Change is one of the biggest contributors to service outages. 
With more enterprises migrating their applications to cloud and using automated build and deployment the volume and rate of changes has significantly increased. Furthermore …","url":["https://www.computer.org/csdl/proceedings-article/cloud/2021/006000a112/1ymJ4TXNxUA"]} {"year":"2021","title":"A Systematic Investigation of Commonsense Understanding in Large Language Models","authors":["XL Li, A Kuncoro, CM d'Autume, P Blunsom… - arXiv preprint arXiv …, 2021"],"snippet":"… 2019), we train our models using the cleaned version of Common Crawl corpus (C4), around 800 GB of data. Our largest model, with 32 transformer layers and 7 billion parameters, has a similar number of parameters to the open-sourced GPT-J model (Wang …","url":["https://arxiv.org/pdf/2111.00607"]} {"year":"2021","title":"A systems-wide understanding of the human olfactory percept chemical space","authors":["J Kowalewski, B Huynh, A Ray - Chemical Senses, 2021"],"snippet":"… 2015; spaCy, 2016), and a convolutional neural network previously trained on GloVe Common Crawl (Pennington, Socher, & Manning, 2014) and OntoNotes 5. The training set is comprised of more than 1 million English …","url":["https://academic.oup.com/chemse/advance-article-abstract/doi/10.1093/chemse/bjab007/6153471"]} {"year":"2021","title":"A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning","authors":["C Xu, J Wang, Y Tang, F Guzmán, BIP Rubinstein… - Proceedings of the Web …, 2021"],"snippet":"… 3https://commoncrawl.org/ 4We assume that these poisoned web pages are archived and to be used for parallel data extraction. This assumption is realistic as we find that the crawling services commonly used for parallel …","url":["https://dl.acm.org/doi/abs/10.1145/3442381.3450034"]} {"year":"2021","title":"A unified approach to sentence segmentation of punctuated text in many languages","authors":["R Wicks, M Post"],"snippet":"Page 1. 
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 3995–4007 August 1–6, 2021 …","url":["https://aclanthology.org/2021.acl-long.309.pdf"]} {"year":"2021","title":"A word embedding-based approach to cross-lingual topic modeling","authors":["CH Chang, SY Hwang - Knowledge and Information Systems, 2021"],"snippet":"The cross-lingual topic analysis aims at extracting latent topics from corpora of different languages. Early approaches rely on high-cost multilingual reso.","url":["https://link.springer.com/article/10.1007/s10115-021-01555-7"]} {"year":"2021","title":"ABC: Attention with Bounded-memory Control","authors":["H Peng, J Kasai, N Pappas, D Yogatama, Z Wu, L Kong… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. Under review as a conference paper at ICLR 2022 ABC: ATTENTION WITH BOUNDED-MEMORY CONTROL Hao Peng♠ Jungo Kasai♠ Nikolaos Pappas♠ Dani Yogatama♣ Zhaofeng Wu♦∗ Lingpeng Kong♦ Roy Schwartz …","url":["https://arxiv.org/pdf/2110.02488"]} {"year":"2021","title":"Abuse is Contextual, What about NLP? The Role of Context in Abusive Language Annotation and Detection","authors":["S Menini, AP Aprosio, S Tonelli - arXiv preprint arXiv:2103.14916, 2021"],"snippet":"… In particular, all vectors are extracted starting from the pre-trained embeddings obtained from the Common Crawl corpus.5 Since SVM takes in input sentence embeddings, we convert the context and the current tweet …","url":["https://arxiv.org/pdf/2103.14916"]} {"year":"2021","title":"Accelerated execution via eager-release of dependencies in task-based workflows","authors":["H Elshazly, F Lordan, J Ejarque, RM Badia - The International Journal of High …, 2021"],"snippet":"Task-based programming models offer a flexible way to express the unstructured parallelism patterns of nowadays complex applications. 
This expressive capability is required to achieve maximum possi...","url":["https://journals.sagepub.com/doi/abs/10.1177/1094342021997558"]} {"year":"2021","title":"Accelerating Text Communication via Abbreviated Sentence Input","authors":["J Adhikary, J Berger, K Vertanen"],"snippet":"… For our out-of-domain training set, we used one billion words of web text from Common Crawl1. We only … As shown in Table 1, random sentences from Common Crawl averaged 30 words. The cross-entropy 1https://commoncrawl …","url":["https://aclanthology.org/2021.acl-long.514.pdf"]} {"year":"2021","title":"Accurate Word Representations with Universal Visual Guidance","authors":["Z Zhang, H Yu, H Zhao, R Wang, M Utiyama - arXiv preprint arXiv:2012.15086, 2020"],"snippet":"… WMT'14 EN-DE 4.43M bilingual sentence pairs of the WMT14 dataset were used as training data, including Common Crawl, News Commentary, and Europarl v7. The newstest2013 and newstest2014 datasets were …","url":["https://arxiv.org/pdf/2012.15086"]} {"year":"2021","title":"Acquiring and Harnessing Verb Knowledge for Multilingual Natural Language Processing","authors":["O Majewska - 2021"],"snippet":"Advances in representation learning have enabled natural language processing models to derive non-negligible linguistic information directly from text corpora in an unsupervised fashion. However, this signal is underused in downstream tasks …","url":["https://www.repository.cam.ac.uk/bitstream/handle/1810/329292/Majewska_PhDThesis_final.pdf?sequence=4"]} {"year":"2021","title":"Active Learning for Argument Mining: A Practical Approach","authors":["N Solmsdorf, D Trautmann, H Schütze - arXiv preprint arXiv:2109.13611, 2021"],"snippet":"… easily discernible argumentative statements. The corpus contains 1,000 sentences per topic, ie, in total 8,000 instances, which were tapped from a Common Crawl snapshot and indexed with Elasticsearch.
The time-consuming …","url":["https://arxiv.org/pdf/2109.13611"]} {"year":"2021","title":"Adapting Neural Machine Translation for Automatic Post-Editing","authors":["A Sharma, P Gupta, A Nelakanti"],"snippet":"… reference as the output. 3.2 Pre-training on domain-specific data FAIR's WMT'19 NMT model was trained on Newscrawl and Commoncrawl datasets while the source of this year's APE data is Wikipedia. To fix the domain mismatch …","url":["https://assets.amazon.science/dc/df/5443c00541a9b6257f6110c5bb86/adapting-neural-machine-translation-for-automatic-post-editing.pdf"]} {"year":"2021","title":"Adaptive Ranking Relevant Source Files for Bug Reports Using Genetic Algorithm","authors":["H Fujita, H Perez-Meana - 2021"],"snippet":"Abstract. Precisely locating buggy files for a given bug report is a cumbersome and time-consuming task, particularly in a large-scale project with thousands of source files and bug reports. An efficient bug localization module is desirable to improve the …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=GYxJEAAAQBAJ&oi=fnd&pg=PA430&dq=commoncrawl&ots=IRy65Bdasc&sig=WyEQAu3sL155gzwLW6pjIX4mCwQ"]} {"year":"2021","title":"ADEPT: An Adjective-Dependent Plausibility Task","authors":["A Emami, I Porada, A Olteanu, K Suleman, A Trischler…"],"snippet":"… 4 Dataset To construct ADEPT, we scrape text samples from English Wikipedia and Common Crawl, extracting adjectival modifier-noun pairs that occur with high frequency … We extracted 10 million pairs from English …","url":["https://aclanthology.org/2021.acl-long.553.pdf"]} {"year":"2021","title":"ADPBC: Arabic Dependency Parsing Based Corpora for Information Extraction","authors":["S Mohamed, M Hussien, HM Mousa - 2021"],"snippet":"… Sch. Econ. Res. Pap. No. WP BRP., 2018. [27] A. Panchenko, E. Ruppert, S. Faralli, SP Ponzetto, and C. Biemann, “Building a web-scale dependency-parsed corpus from common crawl,” Lr. 2018 - 11th Int. Conf. Lang. Resour. Eval., pp. 
1816–1823, 2019 …","url":["http://www.mecs-press.org/ijitcs/ijitcs-v13-n1/IJITCS-V13-N1-4.pdf"]} {"year":"2021","title":"Advances and Trends in Artificial Intelligence. From Theory to Practice: 34th International Conference on Industrial, Engineering and Other Applications of Applied …","authors":["H Fujita"],"snippet":"Page 1. Hamido Fujita Ali Selamat Jerry Chun-Wei Lin Moonis Ali (Eds.) Advances and Trends in Artificial Intelligence From Theory to Practice 34th International Conference on Industrial, Engineering and Other Applications …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=ihg5EAAAQBAJ&oi=fnd&pg=PR5&dq=commoncrawl&ots=fSoeFLOj--&sig=NR8xQHWDWjlGUhvSgV52wEaaT6Y"]} {"year":"2021","title":"Aggressive and Offensive Language Identification in Hindi, Bangla, and English: A Comparative Study","authors":["R Kumar, B Lahiri, AK Ojha - SN Computer Science, 2021"],"snippet":"In the present paper, we carry out a comparative study between offensive and aggressive language and attempt to understand their inter-relationship. To car.","url":["https://link.springer.com/article/10.1007/s42979-020-00414-6"]} {"year":"2021","title":"AlephBERT: A Hebrew Large Pre-Trained Language Model to Start-off your Hebrew NLP Application With","authors":["A Seker, E Bandel, D Bareket, I Brusilovsky… - arXiv preprint arXiv …, 2021"],"snippet":"… Oscar: A deduplicated Hebrew portion of the OSCAR corpus, which is “extracted from Common Crawl via language classification, filtering and cleaning” (Ortiz Suárez et al., 2020). • Twitter: Texts of Hebrew tweets collected between 2014-09-28 and 2018-03-07 …","url":["https://arxiv.org/pdf/2104.04052"]} {"year":"2021","title":"Alignment of Language Agents","authors":["Z Kenton, T Everitt, L Weidinger, I Gabriel, V Mikulik… - arXiv preprint arXiv …, 2021","ZKTEL Weidinger, IGVMG Irving"],"snippet":"… Large scale unlabeled datasets are collected from the web, such as the CommonCrawl dataset (Raffel et al., 2019). 
Input data and labels are created by chopping a sentence into … Brown et al. (2020) attempt to improve …","url":["https://ar5iv.labs.arxiv.org/html/2103.14659","https://arxiv.org/pdf/2103.14659"]} {"year":"2021","title":"All Labels Are Not Created Equal: Enhancing Semi-supervision via Label Grouping and Co-training","authors":["I Nassar, S Herath, E Abbasnejad, W Buntine, G Haffari - arXiv preprint arXiv …, 2021"],"snippet":"… A detailed description of such relations and examples thereof can be found in the ConceptNet documentation8. On the other hand, GloVe and word2vec are two prominent sets of word embeddings, the former is trained on 840 …","url":["https://arxiv.org/pdf/2104.05248"]} {"year":"2021","title":"All NLP Tasks Are Generation Tasks: A General Pretraining Framework","authors":["Z Du, Y Qian, X Liu, M Ding, J Qiu, Z Yang, J Tang - arXiv preprint arXiv:2103.10360, 2021"],"snippet":"Page 1. All NLP Tasks Are Generation Tasks: A General Pretraining Framework Zhengxiao Du *12 Yujie Qian * 3 Xiao Liu 1 2 Ming Ding 1 2 Jiezhong Qiu 1 2 Zhilin Yang 4 2 Jie Tang 1 2 Abstract There have been various types …","url":["https://arxiv.org/pdf/2103.10360"]} {"year":"2021","title":"Allocating Large Vocabulary Capacity for Cross-lingual Language Model Pre-training","authors":["B Zheng, L Dong, S Huang, S Singhal, W Che, T Liu… - arXiv preprint arXiv …, 2021"],"snippet":"… are learned on the reconstructed CommonCrawl corpus (Chi et al., 2021b; Conneau et al., 2020) using SentencePiece (Kudo and Richardson, 2018) with the unigram language model (Kudo, 2018). The unigram distributions …","url":["https://arxiv.org/pdf/2109.07306"]} {"year":"2021","title":"ALX: Large Scale Matrix Factorization on TPUs","authors":["H Mehta, S Rendle, W Krichene, L Zhang - arXiv preprint arXiv:2112.02194, 2021"],"snippet":"We present ALX, an open-source library for distributed matrix factorization using Alternating Least Squares, written in JAX.
Our design allows for efficient use of the TPU architecture and scales well to matrix factorization problems of O(B) rows/columns …","url":["https://arxiv.org/pdf/2112.02194"]} {"year":"2021","title":"AMMUS: A Survey of Transformer-based Pretrained Models in Natural Language Processing","authors":["KS Kalyan, A Rajasekharan, S Sangeetha - arXiv preprint arXiv:2108.05542, 2021"],"snippet":"… mT6 [91], XLM-E [89] CC-Aligned [108] Parallel corpus of 292 million non-English common crawl document pairs and 100 million English common crawl document pairs. XLM-E [89] Dakshina [109] Parallel corpus containing 10K sentences for 12 In- dian languages …","url":["https://arxiv.org/pdf/2108.05542"]} {"year":"2021","title":"An Alignment-Based Approach to Semi-Supervised Bilingual Lexicon Induction with Small Parallel Corpora","authors":["K Marchisio, C Xiong, P Koehn"],"snippet":"… learning. 6 Experimental Settings Language Corpus # of words English WaCky, BNC, Wikipedia 2.8 B Italian itWac 1.6 B German SdeWaC 0.9 B Spanish News Crawl 2007-2012 386 M Finnish Common Crawl 2016 2.8 B Table …","url":["https://aclanthology.org/2021.mtsummit-research.24.pdf"]} {"year":"2021","title":"An analysis of full-size Russian complexly NER labelled corpus of Internet user reviews on the drugs based on deep learning and language neuron nets","authors":["AG Sboeva, SG Sboevac, IA Moloshnikova…"],"snippet":"Page 1. An analysis of full-size Russian complexly NER labelled corpus of Internet user reviews on the drugs based on deep learning and language neuron nets AG Sboeva,b,, SG Sboevac, IA Moloshnikova, AV Gryaznova, RB …","url":["https://sagteam.ru/papers/med-corpus/4.pdf"]} {"year":"2021","title":"An Effective Deep Learning Approach for Extractive Text Summarization","authors":["MT Luu, TH Le, MT Hoang"],"snippet":"Page 1. An Effective Deep Learning Approach for Extractive Text Summarization Minh-Tuan Luu PhD. 
Student, School of Information and Communication Technology, Hanoi University of Science and Technology, No.1 Dai Co …","url":["http://www.ijcse.com/docs/INDJCSE21-12-02-141.pdf"]} {"year":"2021","title":"An embedding method for unseen words considering contextual information and morphological information","authors":["MS Won, YS Choi, S Kim, CW Na, JH Lee - Proceedings of the 36th Annual ACM …, 2021"],"snippet":"… Random embeddings are assigned for OOV words. Glove is implemented by using pre-trained 300-dimensional Glove embedding, which is trained on Common Crawl with 840B word tokens. Random embeddings are assigned for OOV words …","url":["https://dl.acm.org/doi/abs/10.1145/3412841.3441982"]} {"year":"2021","title":"An empirical evaluation of text representation schemes to filter the social media stream","authors":["S Modha, P Majumder, T Mandl - Journal of Experimental & Theoretical Artificial …, 2021"],"snippet":"… Glove pre-trained model available with different embed size and trained on the common crawl, Twitter. We have used the Glove pre-trained model with a vocabulary size of 2.2 million and trained on the common crawl. fastText …","url":["https://www.tandfonline.com/doi/full/10.1080/0952813X.2021.1907792"]} {"year":"2021","title":"An Empirical Exploration in Quality Filtering of Text Data","authors":["L Gao - arXiv preprint arXiv:2109.00698, 2021"],"snippet":"… (2020), with a Paretodistribution thresholded filtering method and a shallow CommonCrawl-WebText classifier … (2020) has been made public, we instead use the same type of fasttext (Joulin et al., 2017) classifier between unfiltered …","url":["https://arxiv.org/pdf/2109.00698"]} {"year":"2021","title":"An Empirical Study on Task-Oriented Dialogue Translation","authors":["S Liu - ICASSP 2021-2021 IEEE International Conference on …, 2021"],"snippet":"… consistent). We valid them with SENT-BASE model on En⇒De task. data in WMT20 news domain, which consists of CommonCrawl and NewsCommentary.
We conduct data selection to select similar amount of sentences …","url":["https://ieeexplore.ieee.org/abstract/document/9413521/"]} {"year":"2021","title":"An End-to-end Point of Interest (POI) Conflation Framework","authors":["R Low, ZD Tekler, L Cheah - arXiv preprint arXiv:2109.06073, 2021"],"snippet":"… words that did not appear in the training data [63]. For this study, the fastText model was pre-trained on 2 million word vectors with subword information from commoncrawl.org. The second advantage of using the fastText library to …","url":["https://arxiv.org/pdf/2109.06073"]} {"year":"2021","title":"An evaluation dataset for depression detection in Arabic social media","authors":["S Elimam, M Bougeussa - International Journal of Knowledge Engineering and …, 2021"],"snippet":"Studying depression in Arabic social media has been neglected compared to other languages and the traditional way of dealing with depression (face-to-face medical diagnose) is not enough as the number of people that suffer from depression in …","url":["https://www.inderscienceonline.com/doi/abs/10.1504/IJKEDM.2021.119888"]} {"year":"2021","title":"An Explainable Multi-Modal Hierarchical Attention Model for Developing Phishing Threat Intelligence","authors":["Y Chai, Y Zhou, W Li, Y Jiang - IEEE Transactions on Dependable and Secure …, 2021"],"snippet":"Phishing website attack, as one of the most persistent forms of cyber threats, evolves and remains a major cyber threat. Various detection methods (eg, lookup systems, fraud cue-based methods) have been proposed to identify phishing websites. The …","url":["https://ieeexplore.ieee.org/abstract/document/9568704/"]} {"year":"2021","title":"An Exploration of Alignment Concepts to Bridge the Gap between Phrase-based and Neural Machine Translation","authors":["JT Peter"],"snippet":"Page 1. 
An Exploration of Alignment Concepts to Bridge the Gap between Phrase-based and Neural Machine Translation Von der Fakultät für Mathematik, Informatik und Naturwissenschaften der RWTH Aachen University zur …","url":["https://www-i6.informatik.rwth-aachen.de/publications/download/1175/PeterJan-Thorsten--ExplorationofAlignmentConceptstoBridgetheGapbetweenPhrase-basedNeuralMachineTranslation--2020.pdf"]} {"year":"2021","title":"An Exploratory Analysis of Multilingual Word-Level Quality Estimation with Cross-Lingual Transformers","authors":["T Ranasinghe, C Orasan, R Mitkov - arXiv preprint arXiv:2106.00143, 2021"],"snippet":"… Our architecture relies on the XLM-R transformer model (Conneau et al., 2020) to derive the representations of the input sentences. XLM-R has been trained on a large-scale multilingual dataset in 104 languages, totalling …","url":["https://arxiv.org/pdf/2106.00143"]} {"year":"2021","title":"An Exploratory Study on Utilising the Web of Linked Data for Product Data Mining","authors":["Z Zhang, X Song - arXiv preprint arXiv:2109.01411, 2021"],"snippet":"… The Web Data Commons3 (WDC) project extracts such structured data from the CommonCrawl4 as RDF n-quads5, and release them on … to create a very large training dataset for product entity linking using semantic markup data …","url":["https://arxiv.org/pdf/2109.01411"]} {"year":"2021","title":"An extended analysis of the persistence of persistent identifiers of the scholarly web","authors":["M Klein, L Balakireva - International Journal on Digital Libraries, 2021"],"snippet":"… These findings were confirmed in a large-scale study by Thompson and Jian [22] based on two samples of the web taken from Common Crawl Footnote 3 datasets. 
The authors were motivated to quantify the use of HTTP DOIs versus URLs of …","url":["https://link.springer.com/article/10.1007/s00799-021-00315-w"]} {"year":"2021","title":"An Intrinsic and Extrinsic Evaluation of Learned COVID-19 Concepts using Open-Source Word Embedding Sources","authors":["S Parikh, A Davoudi, S Yu, C Giraldo, E Schriver… - medRxiv"],"snippet":"… 8] and GloVe [9] on large corpora of texts including domain-independent texts (eg, internet web pages like Wikipedia and CommonCrawl; social media … Standard GloVe Embeddings Paper Vectors [9] Common Crawl Token 10 …","url":["https://www.medrxiv.org/content/medrxiv/early/2021/01/04/2020.12.29.20249005.full.pdf"]} {"year":"2021","title":"An Investigation towards Differentially Private Sequence Tagging in a Federated Framework","authors":["A Jana, C Biemann"],"snippet":"… 2The hyperparameter settings to train those models are as follows: epochs- 10, batch size - 32, learning rate - 0.15, optimizer - Stochastic gradient descent (SGD) Common Crawl corpus) from spaCy library3, the dimension of which is 300 …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2021-janabiemann-privnlp-fed.pdf"]} {"year":"2021","title":"An Overview on Evaluation Labs and Open Issues in Health-related Credible Information Retrieval","authors":["R Upadhyay, G Pasi, M Viviani - 2021"],"snippet":"… The 2020 Track used a dataset provided by Common Crawl, in particular related to different news collected in the first four months of 2020.4 On … 2https://trec-health-misinfo.github.io/2019.html 3https://lemurproject.org …","url":["http://52.178.216.184/paper31.pdf"]} {"year":"2021","title":"Analysis and Evaluation of Language Models for Word Sense Disambiguation","authors":["D Loureiro, K Rezaee, MT Pilehvar… - Computational Linguistics, 2021"],"snippet":"Page 1. 
Analysis and Evaluation of Language Models for Word Sense Disambiguation Daniel Loureiro∗ LIAAD - INESC TEC Department of Computer Science - FCUP University of Porto, Portugal dloureiro@fc.up.pt Kiamehr …","url":["https://direct.mit.edu/coli/article-pdf/doi/10.1162/coli_a_00405/1900170/coli_a_00405.pdf"]} {"year":"2021","title":"Analysis of Machine Learning and Deep Learning Frameworks for Opinion Mining on Drug Reviews","authors":["F Youbi, N Settouti - The Computer Journal, 2021"],"snippet":"… More precisely, GloVe consists of collecting word co- occurrence statistics in a form of a word co-occurrence matrix, in which its developers have provided pre-embed millions of English tokens obtained from Wikipedia data and common crawl data …","url":["https://academic.oup.com/comjnl/advance-article-abstract/doi/10.1093/comjnl/bxab084/6311550"]} {"year":"2021","title":"Analyzing Hyperonyms of Stack Overflow Posts","authors":["L Tóth, L Vidács"],"snippet":"… They applied a similar lexico-syntactic pattern-based mining on the dataset obtained from CommonCrawl [17] using a slightly different grammar for NP identification and, therefore, a slightly different set of patterns. Despite the differences …","url":["https://www.researchgate.net/profile/Laszlo-Toth-12/publication/356192289_Analyzing_Hyperonyms_of_Stack_Overflow_Posts/links/61910421d7d1af224bea68e9/Analyzing-Hyperonyms-of-Stack-Overflow-Posts.pdf"]} {"year":"2021","title":"Analyzing Multimodal Language via Acoustic-and Visual-LSTM with Channel-aware Temporal Convolution Network","authors":["S Mai, S Xing, H Hu - IEEE/ACM Transactions on Audio, Speech, and …, 2021"],"snippet":"Page 1. 2329-9290 (c) 2021 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. 
This …","url":["https://ieeexplore.ieee.org/abstract/document/9387606/"]} {"year":"2021","title":"Analyzing the Forgetting Problem in Pretrain-Finetuning of Open-domain Dialogue Response Models","authors":["T He, J Liu, K Cho, M Ott, B Liu, J Glass, F Peng - … of the 16th Conference of the …, 2021"],"snippet":"… 4.1 Datasets For pretraining, we use the large-scale CCNEWS data (Bakhtin et al., 2019) which is a de-duplicated subset of the English portion of the CommonCrawl news dataset1 … We tune the 1 http://commoncrawl.org/2016/10/ news-dataset-available Page 5. 1125 …","url":["https://www.aclweb.org/anthology/2021.eacl-main.95.pdf"]} {"year":"2021","title":"Analyzing transfer learning impact in biomedical cross-lingual named entity recognition and normalization","authors":["RM Rivera-Zavala, P Martínez - BMC Bioinformatics, 2021"],"snippet":"… The FastText-2M [52] pre-trained English word embedding model trained with subword information on Common Crawl using the FastText implementation. Finally, the PubMed and PMC [53] pre-trained English word embedding model, trained on a …","url":["https://bmcbioinformatics.biomedcentral.com/articles/10.1186/s12859-021-04247-9"]} {"year":"2021","title":"Annotation of Fine-Grained Geographical Entities in German Texts","authors":["J Moreno-Schneider, M Plakidis, G Rehm - 3rd Conference on Language, Data and …, 2021"],"snippet":"… The SpaCy models are trained on Ontonotes 5 and Common Crawl (English; en_core_web_md) and WikiNER and TIGER (German; de_core_news_md). The Stanford models are trained on the CoNLL 2003 data [18]. BERT-NER is trained on WikiNER [11] …","url":["https://drops.dagstuhl.de/opus/volltexte/2021/14547/pdf/OASIcs-LDK-2021-11.pdf"]} {"year":"2021","title":"Answering questions about insurance supervision with a Neural Machine Translator","authors":["J Glowienke - 2021"],"snippet":"… Conneau et al. [3] pre-train a model based on the RoBERTa architecture to create crosslingual representations. 
The XLM-R model is pre-trained on a common crawl dataset of 100 languages using the masked multi-lingual language model approach …","url":["https://dke.maastrichtuniversity.nl/jan.niehues/wp-content/uploads/2021/08/Glowienke-Master-thesis.pdf"]} {"year":"2021","title":"Anticipating Attention: On the Predictability of News Headline Tests","authors":["N Hagar, N Diakopoulos, B DeWilde - Digital Journalism, 2021"],"snippet":"… These embeddings contain 300 dimensions and were trained on English language text from the OntoNotes 5.0 and GloVe Common Crawl corpora. For each headline, we computed the average embedding vector across all tokens. …","url":["https://www.tandfonline.com/doi/abs/10.1080/21670811.2021.1984266"]} {"year":"2021","title":"Applying and Understanding an Advanced, Novel Deep Learning Approach: A Covid 19, Text Based, Emotions Analysis Study","authors":["J Choudrie, S Patil, K Kotecha, N Matta, I Pappas - Information Systems Frontiers, 2021"],"snippet":"The pandemic COVID 19 has altered individuals' daily lives across the globe. It has led to preventive measures such as physical distancing to be impo.","url":["https://link.springer.com/article/10.1007/s10796-021-10152-6"]} {"year":"2021","title":"Applying Deep Learning Techniques for Sentiment Analysis to Assess Sustainable Transport","authors":["A Serna Nocedal, A Soroa Echave, R Agerri Gascón - 2021","A Serna, A Soroa, R Agerri - Sustainability, 2021"],"snippet":"… Thus, the multilingual version of BERT [25] was trained for 104 languages. More recently, XLM-RoBERTa [21] distributes a multilingual model which contains 100 languages trained on 2.5 TB of filtered Common Crawl text. To …","url":["https://addi.ehu.eus/bitstream/handle/10810/50497/sustainability-13-02397-v2.pdf?sequence=1&isAllowed=y","https://www.mdpi.com/2071-1050/13/4/2397/pdf"]} {"year":"2021","title":"Applying Deep Learning Techniques for Sentiment Analysis to Assess Sustainable Transport.
Sustainability 2021, 13, 2397","authors":["A Serna, A Soroa, R Agerri - 2021"],"snippet":"… Thus, the multilingual version of BERT [25] was trained for 104 languages. More recently, XLM-RoBERTa [21] distributes a multilingual model which contains 100 languages trained on 2.5 TB of filtered Common Crawl text. To …","url":["https://search.proquest.com/openview/b1ea0637935ea567d5fd68853527c980/1?pq-origsite=gscholar&cbl=2032327"]} {"year":"2021","title":"AR-LSAT: Investigating Analytical Reasoning of Text","authors":["W Zhong, S Wang, D Tang, Z Xu, D Guo, J Wang, J Yin… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. AR-LSAT: Investigating Analytical Reasoning of Text Wanjun Zhong1∗, Siyuan Wang3∗, Duyu Tang2, Zenan Xu1∗, Daya Guo1∗ Jiahai Wang1, Jian Yin1, Ming Zhou4 and Nan Duan2 1 The School of Data and Computer Science, Sun Yat-sen University …","url":["https://arxiv.org/pdf/2104.06598"]} {"year":"2021","title":"Arabic Offensive Language Detection in Social Media","authors":["F Husain - 2021"],"snippet":"Page 1. ARABIC OFFENSIVE LANGUAGE DETECTION IN SOCIAL MEDIA by Fatemah Ali Husain A Dissertation Submitted to the Graduate Faculty of George Mason University in Partial Fulfillment of The Requirements for the …","url":["https://search.proquest.com/openview/aefe47a620c621b1c7ed7f95196cf6ba/1?pq-origsite=gscholar&cbl=18750&diss=y"]} {"year":"2021","title":"AraCOVID19-SSD: Arabic COVID-19 Sentiment and Sarcasm Detection Dataset","authors":["MS Hadj Ameur - Revue de l'Information Scientifique et Technique, 2023","MSH Ameur, H Aliane - arXiv preprint arXiv:2110.01948, 2021"],"snippet":"… Multilingual BERT (mBERT)6: A BERT-based model [17] pretrained on the first 104 major Wikipedia languages7. • XLM-Roberta 8: A large multi-lingual language model, trained on 2.5TB of filtered Common Crawl data [19]. 
4.2.2 Bag-of-Words Models …","url":["https://arxiv.org/pdf/2110.01948","https://www.asjp.cerist.dz/index.php/en/downArticle/134/27/1/220363"]} {"year":"2021","title":"AraStance: A Multi-Country and Multi-Domain Dataset of Arabic Stance Detection for Fact Checking","authors":["T Alhindi, A Alabdulkarim, A Alshehri, M Abdul-Mageed… - arXiv preprint arXiv …, 2021"],"snippet":"… AraStance and Khoja. This indicates the suitability of the pretraining data of ARBERT that includes Books, Gigawords and Common Crawl data primarily from MSA but also a small amount of Egyptian Arabic. Since half of …","url":["https://arxiv.org/pdf/2104.13559"]} {"year":"2021","title":"ARBERT & MARBERT: Deep Bidirectional Transformers for Arabic","authors":["M Abdul-Mageed, AR Elmadany, EMB Nagoudi - arXiv preprint arXiv:2101.01785, 2020"],"snippet":"… mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) use a small Arabic text collection from Wikipedia (153M tokens) and CommonCrawl (2.9B … XLM-R (Conneau et al., 2020) is trained on Common Crawl data, hence …","url":["https://arxiv.org/pdf/2101.01785"]} {"year":"2021","title":"Are Multilingual Models Effective in Code-Switching?","authors":["GI Winata, S Cahyawijaya, Z Liu, Z Lin, A Madotto… - arXiv preprint arXiv …, 2021"],"snippet":"… switching tasks. 2.2.2 XLM-RoBERTa XLM-RoBERTa (XLM-R) (Conneau et al., 2020) is a multilingual language model that is pre-trained on 100 languages using more than two terabytes of filtered CommonCrawl data. Thanks to …","url":["https://arxiv.org/pdf/2103.13309"]} {"year":"2021","title":"Are Multilingual Models the Best Choice for Moderately Under-resourced Languages?
A Comprehensive Assessment for Catalan","authors":["J Armengol-Estapé, CP Carrino, C Rodriguez-Penagos… - arXiv preprint arXiv …, 2021"],"snippet":"… Catalan Government; (2) the Catalan Open Subtitles, a collection of translated movie subtitles (Tiedemann, 2012); (3) the non-shuffled version of the Catalan part of the OSCAR corpus (Suárez et al., 2019), a collection …","url":["https://arxiv.org/pdf/2107.07903"]} {"year":"2021","title":"Are You Really Complaining? A Multi-task Framework for Complaint Identification, Emotion, and Sentiment Classification","authors":["A Singh, S Saha - International Conference on Document Analysis and …, 2021"],"snippet":"… For deep learning baseline (MT\\(_{\\mathrm{GloVe}}\\)) we also used pre-trained GloVe 13 [16] word embedding which is trained on Common Crawl (840 billion tokens) corpus to get the word embedding representations. 4.4 Results and Discussion …","url":["https://link.springer.com/chapter/10.1007/978-3-030-86331-9_46"]} {"year":"2021","title":"ARMAN: Pre-training with Semantically Selecting and Reordering of Sentences for Persian Abstractive Summarization","authors":["A Salemi, E Kebriaei, GN Minaei, A Shakery - arXiv preprint arXiv:2109.04098, 2021"],"snippet":"… This corpus contains around 2.8M articles and 1.4B words in all of the articles. CC100 (Conneau et al., 2020; Wenzek et al., 2020) is a monolingual dataset for 100+ languages constructed from Commoncrawl snapshots. This …","url":["https://arxiv.org/pdf/2109.04098"]} {"year":"2021","title":"As Easy as 1, 2, 3: Behavioural Testing of NMT Systems for Numerical Translation","authors":["J Wang, C Xu, F Guzman, A El-Kishky, BIP Rubinstein… - arXiv preprint arXiv …, 2021"],"snippet":"… Our testing framework facilitates constructing test instances for new domains in the following steps: 1. Obtain a large corpus of text that contains numbers (eg,CommonCrawl); 2. 
Check if there is a number in the output translation; …","url":["https://arxiv.org/pdf/2107.08357"]} {"year":"2021","title":"Aspect-based Sentiment Analysis with Graph Convolution over Syntactic Dependencies","authors":["A Zunic, P Corcoran, I Spasic","A Žunić, P Corcoran, I Spasić - Artificial Intelligence in Medicine, 2021"],"snippet":"… individual sentences into dependency graphs. Individual words representing vertices in such graphs were mapped onto their embeddings, which were pretrained 160 on web data from Common Crawl using the GloVe method [31]. Each input …","url":["https://www.researchgate.net/profile/Irena-Spasic/publication/353775596_Aspect-based_Sentiment_Analysis_with_Graph_Convolution_over_Syntactic_Dependencies/links/61112fae169a1a0103ea3e67/Aspect-based-Sentiment-Analysis-with-Graph-Convolution-over-Syntactic-Dependencies.pdf","https://www.sciencedirect.com/science/article/pii/S0933365721001317"]} {"year":"2021","title":"ASR4REAL: An extended benchmark for speech models","authors":["M Riviere, J Copet, G Synnaeve - arXiv preprint arXiv:2110.08583, 2021"],"snippet":"… even a language model trained on a dataset as big as Common Crawl does not seem to have significant positive effect which reiterates … For all of theses models we used the a 4-gram LM trained on Common Crawl with the decoding parameters …","url":["https://arxiv.org/pdf/2110.08583"]} {"year":"2021","title":"Assessing reasoning and world knowledge of large language models using questionized counterfactual conditionals","authors":["J Frohberg, F Binder - 2021"],"snippet":"Page 1. 
Assessing reasoning and world knowledge of large language models using questionized counterfactual conditionals Jörg Frohberg apergo UG Leipzig, Germany j.frohberg@apergo.ai Frank Binder Institute for Applied …","url":["https://openreview.net/pdf?id=i9XYDrUJYyP"]} {"year":"2021","title":"Assessing the Extent and Types of Hate Speech in Fringe Communities: A Case Study of Alt-Right Communities on 8chan, 4chan, and Reddit","authors":["D Rieger, AS Kümpel, M Wich, T Kiening, G Groh - Social Media+ Society, 2021"],"snippet":"… For this article, the fastText word vectors pre-trained on the English Common Crawl dataset were used because it is trained on web data and thus an appropriate basis (Mikolov, Grave, Bojanowski, Puhrsch, & Joulin, 2019). …","url":["https://journals.sagepub.com/doi/pdf/10.1177/20563051211052906"]} {"year":"2021","title":"AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models","authors":["HT Madabushi, E Gow-Smith, C Scarton… - arXiv preprint arXiv …, 2021"],"snippet":"Page 1. AStitchInLanguageModels: Dataset and Methods for the Exploration of Idiomaticity in Pre-Trained Language Models Harish Tayyar Madabushi, Edward Gow-Smith, Carolina Scarton and Aline Villavicencio Department …","url":["https://arxiv.org/pdf/2109.04413"]} {"year":"2021","title":"Attention-based model for predicting question relatedness on Stack Overflow","authors":["J Pei, Z Qin, Y Cong, J Guan - arXiv preprint arXiv:2103.10763, 2021"],"snippet":"… released by Stanford [22]. This word embeddings pre-trained in the Common Crawl corpus, which contains a large amount of data irrelevant to software engineering, may lead to ambiguous results [18]. 
Therefore, we hope that …","url":["https://arxiv.org/pdf/2103.10763"]} {"year":"2021","title":"Attention: there is an inconsistency between android permissions and application metadata!","authors":["H Alecakir, B Can, S Sen - International Journal of Information Security"],"snippet":"Since mobile applications make our lives easier, there is a large number of mobile applications customized for our needs in the application markets. While.","url":["https://link.springer.com/article/10.1007/s10207-020-00536-1"]} {"year":"2021","title":"Attentive Excitation and Aggregation for Bilingual Referring Image Segmentation","authors":["Q Zhou, T Hui, R Wang, H Hu, S Liu - ACM Transactions on Intelligent Systems and …, 2021"],"snippet":"… For English expression, we use GloVe1 pretrained on Common Crawl to embed each word into a 300-d vector. For Chinese expression, existing tools … GloVe word embeddings [34] pretrained on Common Crawl 840B …","url":["https://dl.acm.org/doi/abs/10.1145/3446345"]} {"year":"2021","title":"Augmenting Poetry Composition with Verse by Verse","authors":["D Uthus, M Voitovich, RJ Mical - arXiv preprint arXiv:2103.17205, 2021"],"snippet":"… TextSETTR was shown to yield better results in transforming sentiment while preserving fluency (important aspects for our work). As described in the TextSETTR paper, we use the model that had been fine-tuned on English Common Crawl data …","url":["https://arxiv.org/pdf/2103.17205"]} {"year":"2021","title":"Augmenting semantic lexicons using word embeddings and transfer learning","authors":["T Alshaabi, C Van Oort, M Fudolig, MV Arnold… - arXiv preprint arXiv …, 2021"],"snippet":"… words. We then pass the token embeddings to a 300dimensional embedding layer. We initialize the embedding layer with weights trained with subword information on Common Crawl and Wikipedia using FastText [59]. 
In …","url":["https://arxiv.org/pdf/2109.09010"]} {"year":"2021","title":"AUGVIC: Exploiting BiText Vicinity for Low-Resource NMT","authors":["T Mohiuddin, MS Bari, S Joty - arXiv preprint arXiv:2106.05141, 2021"],"snippet":"… localization guide, respectively. For some languages, the amount of specific domain monolingual data is limited, where we added additional monolingual data of that language from Common Crawl. Following previous work …","url":["https://arxiv.org/pdf/2106.05141"]} {"year":"2021","title":"Authorship Weightage Algorithm for Academic publications: A new calculation and ACES webserver for determining expertise","authors":["WL Wu, O Tan, KF Chan, NB Ong, D Gunasegaran… - Methods and Protocols, 2021"],"snippet":"… the back-end server. These word vectors were trained on Common Crawl (https://commoncrawl.org (last accessed on 28 April 2021)) using fastText [17], and are used to map the processed query to its corresponding values …","url":["https://www.mdpi.com/2409-9279/4/2/41/pdf"]} {"year":"2021","title":"Automated Change Detection in Privacy Policies","authors":["A Adhikari - 2020"],"snippet":"Page 1. 
University of Denver Digital Commons @ DU Electronic Theses and Dissertations Graduate Studies 2020 Automated Change Detection in Privacy Policies Andrick Adhikari Follow this and additional works at: https://digitalcommons.du.edu/etd …","url":["https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2702&context=etd"]} {"year":"2021","title":"Automated essay scoring: A review of the field","authors":["P Lagakis, S Demetriadis - … International Conference on Computer, Information and …, 2021"],"snippet":"… Transformer models make use of those huge datasets of existing general text data, such as Wikipedia Corpus and Common Crawl, to pretrain multilayer neural networks with context-sensitive meaning of, and relations between, words, such as …","url":["https://ieeexplore.ieee.org/abstract/document/9618476/"]} {"year":"2021","title":"Automated Grading of Exam Responses: An Extensive Classification Benchmark","authors":["A Farazouli, Z Lee, P Papapetrou, U Fors - … Science: 24th International Conference, DS 2021 …","J Ljungman, V Lislevand, J Pavlopoulos, A Farazouli… - International Conference on …, 2021"],"snippet":"… This method proves that training BERT with alternative design choices and with more data, including the CommonCrawl News dataset, … training XLM-R on one hundred languages using CommonCrawl data2, in contrast to previous works such …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=IydHEAAAQBAJ&oi=fnd&pg=PA3&dq=commoncrawl&ots=QIe2sENq0_&sig=LQ1NnDlylvNDV4-vNPAiGJEMZd4","https://link.springer.com/chapter/10.1007/978-3-030-88942-5_1"]} {"year":"2021","title":"Automated identification of bias inducing words in news articles using linguistic and context-oriented features","authors":["T Spinde, L Rudnitckaia, J Mitrović, F Hamborg… - Information Processing & …, 2021"],"snippet":"",
"url":["https://www.sciencedirect.com/science/article/pii/S0306457321000157"]} {"year":"2021","title":"Automated methods for Question-Answering in Icelandic","authors":["V Snæbjarnarson"],"snippet":"… The source of the data is the open internet, made accessible to those with relatively modest computing resources and disk storage through the targeted use of the Common Crawl datasets that comprise petabytes of data. Prior work has focused on the …","url":["https://vesteinn.is/thesis_150921.pdf"]} {"year":"2021","title":"Automatic Detection of Fake","authors":["BM Bažík"],"snippet":"… For the data, they created the RealNews dataset, a large corpus of news articles from Common Crawl1. Fake News Detection Using Deep Learning Techniques [11] compared Logistic Regression (LR), Naive Bayes (NB) …","url":["https://is.muni.cz/th/hk1px/Martin_Bazik_master_thesis.pdf"]} {"year":"2021","title":"Automatic Difficulty Classification of Arabic Sentences","authors":["N Khallaf, S Sharoff - arXiv preprint arXiv:2103.04386, 2021"],"snippet":"… corpus (Common Crawl and Wikipedia for ArabicBERT vs Common Crawl XML-R vs Wikipedia for BERT, AraBert and UCS) used to train the Arabic … The corpus will be classified on the ba- sis of how difficult the sentences are …","url":["https://arxiv.org/pdf/2103.04386"]} {"year":"2021","title":"Automatic Fully-Contextualized Recommendation Extraction from Radiology Reports","authors":["J Steinkamp, C Chambers, D Lalevic, T Cook - Journal of Digital Imaging, 2021"],"snippet":"… We evaluated a simple long short-term memory (LSTM) architecture [12] on the task. We used a combination of custom-trained fastText vectors, trained on our institution's entire repository of radiology reports, with Global …","url":["https://link.springer.com/article/10.1007/s10278-021-00423-8"]} {"year":"2021","title":"Automatic Generic Web Information Extraction at Scale","authors":["M Aljabary - 2021"],"snippet":"Page 1. 1 Page 2.
2 Automatic Generic Web Information Extraction at Scale Master Thesis Computer Science, Data Science and Technology University of Twente. Enschede, The Netherlands An attempt to bring some structure …","url":["http://essay.utwente.nl/86153/1/Aljabary_MA_EEMCS.pdf"]} {"year":"2021","title":"Automatic Sexism Detection with Multilingual Transformer Models","authors":["S Mina, B Jaqueline, L Daria, S Djordje, K Armin… - arXiv preprint arXiv …, 2021"],"snippet":"… XLM-R is a multilingual model trained on 100 languages, similar to mBERT. Unlike the latter, XLM-R is not trained on Wikipedia data but on monolingual CommonCrawl data. The model shows improved cross-lingual language …","url":["https://arxiv.org/pdf/2106.04908"]} {"year":"2021","title":"Automatic Stress Detection from Facial Videos","authors":["EM de Oca - 2021"],"snippet":"… , leading to the development of pretrained systems such as BERT(Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as Wikipedia …","url":["https://www.eduardomontesdeoca.com/s/ADSS-Project.pdf"]} {"year":"2021","title":"Automatically Detecting Cyberbullying Comments on Online Game Forums","authors":["HHP Vo, HT Tran, ST Luu - arXiv preprint arXiv:2106.01598, 2021"],"snippet":"… on Wikipedia and Gigaword corpora. The fastText 3 is trained on Common Crawl and Wikipedia datasets using CBOW with position-weight in 300 dimensions with 5-grams features. D. Traditional machine learning models Logistic …","url":["https://arxiv.org/pdf/2106.01598"]} {"year":"2021","title":"Autonomous Writing Futures","authors":["AH Duin, I Pedersen - Writing Futures: Collaborative, Algorithmic …, 2021"],"snippet":"… GPT-3 is 175 billion parameters. GPT-3 is trained on the Common Crawl data set, a corpus of almost a trillion words of texts scraped from the Web. 
“The dataset and model size are about two orders of magnitude larger than those used for GPT-2,” the authors write …","url":["https://link.springer.com/chapter/10.1007/978-3-030-70928-0_4"]} {"year":"2021","title":"Auxiliary Bi-Level Graph Representation for Cross-Modal Image-Text Retrieval","authors":["X Zhong, Z Yang, M Ye, W Huang, J Yuan, CW Lin - 2021 IEEE International …, 2021"],"snippet":"… The scene graph features Soi and Srij are transformed by a learnable embedding layer which is initialized by GloVe [18] pre-trained on the Common-Crawl dataset, and maps Ioi and Irij into a vector of same dimension: Soi = WoIoi , Srij = WrIrij , (1) …","url":["https://ieeexplore.ieee.org/abstract/document/9428380/"]} {"year":"2021","title":"Auxiliary Learning for Relation Extraction","authors":["S Lyu, J Cheng, X Wu, L Cui, H Chen, C Miao - IEEE Transactions on Emerging …, 2020"],"snippet":"… 7https://catalog.ldc.upenn.edu/LDC2018T24 8http://semeval2.fbk.eu/semeval2. php?location=data 9Following previous work, we choose GloVe word vectors with 300 dimensions (Common Crawl) https://nlp.stanford.edu/projects/glove …","url":["https://ieeexplore.ieee.org/abstract/document/9296307/"]} {"year":"2021","title":"Background Knowledge in Schema Matching: Strategy vs. Data","authors":["J Portisch, M Hladik, H Paulheim - arXiv preprint arXiv:2107.00001, 2021"],"snippet":"… used. WebIsALOD is a large hypernymy graph based on the WebIsA database [37]. The latter is a dataset which consists of hypernymy relations extracted from the Common Crawl, a large set of crawled Web pages. 
The extraction …","url":["https://arxiv.org/pdf/2107.00001"]} {"year":"2021","title":"Bambara Language Dataset for Sentiment Analysis","authors":["M Diallo, C Fourati, H Haddad - arXiv preprint arXiv:2108.02524, 2021"],"snippet":"… In this paper, we present the first common-crawl-based Bambara dialectal dataset dedicated for Sentiment Analysis, available freely for Natural Language Processing research purposes … Bambara V1 dataset represents …","url":["https://arxiv.org/pdf/2108.02524"]} {"year":"2021","title":"Bandits Don't Follow Rules: Balancing Multi-Facet Machine Translation with Multi-Armed Bandits","authors":["J Kreutzer, D Vilar, A Sokolov - arXiv preprint arXiv:2110.06997, 2021"],"snippet":"Training data for machine translation (MT) is often sourced from a multitude of large corpora that are multi-faceted in nature, eg containing contents from multiple domains or different levels of quality or complexity. Naturally, these facets do not …","url":["https://arxiv.org/pdf/2110.06997"]} {"year":"2021","title":"BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese","authors":["NL Tran, DM Le, DQ Nguyen - arXiv preprint arXiv:2109.09701, 2021"],"snippet":"… rization. Here, mBART is pre-trained on a Common Crawl dataset of 25 languages, which contains 137 GB of syllablelevel Vietnamese texts. We employ the single-document summarization dataset VNDS (Nguyen et al. 2019 …","url":["https://arxiv.org/pdf/2109.09701"]} {"year":"2021","title":"belabBERT: a Dutch RoBERTa-based language model applied to psychiatric classification","authors":["J Wouts, J de Boer, A Voppel, S Brederoo… - arXiv preprint arXiv …, 2021"],"snippet":"… 3.1.1. 
Pre-training For the pre-training of belabBERT we used the OSCAR corpus which consists of a set of monolingual corpora extracted from Common Crawl snapshots … belabBERT Common Crawl Dutch (non-shuffled) BytePairEncoding 95.92 ∗ …","url":["https://arxiv.org/pdf/2106.01091"]} {"year":"2021","title":"Benchmarking Differential Privacy and Federated Learning for BERT Models","authors":["P Basu, TS Roy, R Naidu, Z Muftuoglu, S Singh… - arXiv preprint arXiv …, 2021"],"snippet":"… It uses 160 GB of text for pre-training, including 16GB of Books Corpus and English Wikipedia used in BERT. The additional data included CommonCrawl News dataset, Web text corpus and Stories from Common Crawl. For …","url":["https://arxiv.org/pdf/2106.13973"]} {"year":"2021","title":"BERT, mBERT, or BiBERT? A Study on Contextualized Embeddings for Neural Machine Translation","authors":["H Xu, B Van Durme, K Murray - arXiv preprint arXiv:2109.04588, 2021"],"snippet":"… on 145G German text data portion of OSCAR (Or- tiz Suárez et al., 2020), a huge multilingual corpus extracted from Common Crawl … Vaswani et al., 2017) masked language model trained on 100 languages, using more than …","url":["https://arxiv.org/pdf/2109.04588"]} {"year":"2021","title":"BERT: A Review of Applications in Natural Language Processing and Understanding","authors":["MV Koroteev - arXiv preprint arXiv:2103.11943, 2021"],"snippet":"Page 1. BERT: A Review of Applications in Natural Language Processing and Understanding Koroteev MV, Financial University under the government of the Russian Federation, Moscow, Russia mvkoroteev@fa.ru Abstract: In …","url":["https://arxiv.org/pdf/2103.11943"]} {"year":"2021","title":"Bertinho: Galician BERT Representations","authors":["D Vilares, M Garcia, C Gómez-Rodríguez - arXiv preprint arXiv:2103.13799, 2021"],"snippet":"Page 1. 
Bertinho: Galician BERT Representations Bertinho: Representaciones BERT para el gallego David Vilares,1 Marcos Garcia,2 Carlos Gómez-Rodr´ıguez 1 1Universidade da Coru˜na, CITIC, Galicia, Spain 2CiTIUS, Universidade …","url":["https://arxiv.org/pdf/2103.13799"]} {"year":"2021","title":"Better Neural Machine Translation by Extracting Linguistic Information from BERT","authors":["HS Shavarani, A Sarkar - arXiv preprint arXiv:2104.02831, 2021"],"snippet":"… clack, clack.”). 9Europarl+CommonCrawl+NewsCommentary https://www.statmt. org/wmt14/translation-task.html, please note that in the later years this training set remained the same, but ParaCrawl data was added to it. We …","url":["https://arxiv.org/pdf/2104.02831"]} {"year":"2021","title":"Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences on Neural Machine Translation","authors":["E Briakou, M Carpuat - arXiv preprint arXiv:2105.15087, 2021"],"snippet":"Page 1. Beyond Noise: Mitigating the Impact of Fine-grained Semantic Divergences on Neural Machine Translation Eleftheria Briakou and Marine Carpuat Department of Computer Science University of Maryland College Park …","url":["https://arxiv.org/pdf/2105.15087"]} {"year":"2021","title":"Beyond the English Web: Zero-Shot Cross-Lingual and Lightweight Monolingual Classification of Registers","authors":["L Repo, V Skantsi, S Rönnqvist, S Hellström… - arXiv preprint arXiv …, 2021"],"snippet":"… FreCORE and SweCORE are random samples of the 2017 CoNLL datasets (Ginter et al., 2017) originally drawn from Common Crawl … XLM-R is trained on 2.5TB of filtered Common Crawl (Wenzek et al., 2020) data comprising …","url":["https://arxiv.org/pdf/2102.07396"]} {"year":"2021","title":"Bias Silhouette Analysis: Towards Assessing the Quality of Bias Metrics for Word Embedding Models","authors":["M Spliethöver, H Wachsmuth"],"snippet":"… Word Embedding Models. 
As biased and unbiased models, we use GloVe CommonCrawl [Pennington et al., 2014] trained on 840 billion English tokens and the English ConceptNet Numberbatch 19.08 [Speer et al., 2017] (referred to as NBatch below), respectively …","url":["https://www.ijcai.org/proceedings/2021/0077.pdf"]} {"year":"2021","title":"Bidirectional Language Modeling: A Systematic Literature Review","authors":["M Shah Jahan, HU Khan, S Akbar, M Umar Farooq… - Scientific Programming, 2021"],"snippet":"Page 1. Review Article Bidirectional Language Modeling: A Systematic Literature Review Muhammad Shah Jahan ,1 Habib Ullah Khan ,2 Shahzad Akbar ,3 Muhammad Umar Farooq ,1 Sarah Gul ,4 and Anam Amjad 1 1Department …","url":["https://www.hindawi.com/journals/sp/2021/6641832/"]} {"year":"2021","title":"Bilingual Lexical Induction for Sinhala-English using Cross Lingual Embedding Spaces","authors":["A Liyanage, S Ranathunga, S Jayasena - 2021 Moratuwa Engineering Research …, 2021"],"snippet":"… Using pre-trained fastText embeddings trained on Wikipedia and Common crawl data using two different evaluation dictionaries as a preliminary experiment to identify the performance of embeddings created from non-comparable corpora …","url":["https://ieeexplore.ieee.org/abstract/document/9525667/"]} {"year":"2021","title":"Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment","authors":["H Shi, L Zettlemoyer, SI Wang - arXiv preprint arXiv:2101.00148"],"snippet":"Page 1. 
Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment Haoyue Shi ∗ TTI-Chicago freda@ttic.edu Luke Zettlemoyer University of Washington Facebook AI Research lsz@fb.com …","url":["https://arxiv.org/pdf/2101.00148"]} {"year":"2021","title":"BitextEdit: Automatic Bitext Editing for Improved Low-Resource Machine Translation","authors":["E Briakou, SI Wang, L Zettlemoyer, M Ghazvininejad - arXiv preprint arXiv …, 2021"],"snippet":"Mined bitexts can contain imperfect translations that yield unreliable training signals for Neural Machine Translation (NMT). While filtering such pairs out is known to improve final model quality, we argue that it is suboptimal in low-resource conditions …","url":["https://arxiv.org/pdf/2111.06787"]} {"year":"2021","title":"Blank spots, critical information needs and local journalism fund-ing","authors":["S Bisiani"],"snippet":"Abstract A global business model crisis in journalism, fuelled by loss in advertising revenue, challenges the survival of local news production. In Sweden, it has led to the closure of several newspapers across the country, and the concentration of …","url":["http://compscjournalism.org/projects/simona/projects/Master_Thesis_Simona_Bisiani.pdf"]} {"year":"2021","title":"Book genre and author's gender recognition based on titles","authors":["A Pawłowski, E Herden, T Walkowiak - … and Text: Data, models, information and …, 2021"],"snippet":""} {"year":"2021","title":"BOSS: Bandwidth-Optimized Search Accelerator for Storage-Class Memory","authors":["J Heo, SY Lee, S Min, Y Park, SJ Jung, TJ Ham…"],"snippet":"Page 1. BOSS: Bandwidth-Optimized Search Accelerator for Storage-Class Memory Jun Heo, Seung Yul Lee, Sunhong Min, Yeonhong Park, Sung Jun Jung, Tae Jun Ham, Jae W. 
Lee Seoul National University {j.heo, triomphant1 …","url":["https://conferences.computer.org/iscapub/pdfs/ISCA2021-4ghucdBnCWYB7ES2Pe4YdT/333300a279/333300a279.pdf"]} {"year":"2021","title":"Bottom-Up Shift and Reasoning for Referring Image Segmentation","authors":["S Yang, M Xia, G Li, HY Zhou, Y Yu - Proceedings of the IEEE/CVF Conference on …, 2021"],"snippet":"Page 1. Bottom-Up Shift and Reasoning for Referring Image Segmentation Sibei Yang1∗† Meng Xia2∗ Guanbin Li2 Hong-Yu Zhou3 Yizhou Yu3,4† 1ShanghaiTech University 2Sun Yat-sen University 3The University of Hong Kong 4Deepwise AI Lab Abstract …","url":["https://openaccess.thecvf.com/content/CVPR2021/papers/Yang_Bottom-Up_Shift_and_Reasoning_for_Referring_Image_Segmentation_CVPR_2021_paper.pdf"]} {"year":"2021","title":"bradleypallen/keras-quora-question-pairs","authors":[],"snippet":"… Model, Source of Word Embeddings, Accuracy. \"BiMPM model\" [5], GloVe Common Crawl (840B tokens, 300D), 0.88 … \"Decomposable attention\" [6], \"Quora's text corpus\", 0.86. \"LDC\" [5], GloVe Common Crawl (840B tokens, 300D), 0.86 …","url":["https://giters.com/bradleypallen/keras-quora-question-pairs?amp=1"]} {"year":"2021","title":"Building a File Observatory for Secure Parser Development","authors":["T Allison, W Burke, C Mattmann, A Mensikova…"],"snippet":"… 3196–3200. [Online]. Available: http://www.lrec-conf.org/proceedings/lrec2012/pdf/534 Paper.pdf [10] “Common Crawl,” https://commoncrawl.org. [11] P. Wyatt, “Stressful PDF corpus grows!” https://www.pdfa.org/ stressful-pdf-corpus-grows/, November 2020.","url":["https://langsec.org/spw21/papers/Allison_LangSec21.pdf"]} {"year":"2021","title":"Building a Question and Answer System for News Domain","authors":["S Basu, A Gaddala, P Chetan, G Tiwari, N Darapaneni… - arXiv preprint arXiv …, 2021"],"snippet":"… We have used two approaches for building the Embedding Layers for the models 3.
GloVe Embedding: we used the 300 Dimension Common Crawl for the English language 4. Universal Sentence Encoder: we used the 512 …","url":["https://arxiv.org/pdf/2105.05744"]} {"year":"2021","title":"Building Accountable Natural Language Processing Models: on Social Bias Detection and Mitigation","authors":["J Zhao - 2021"],"snippet":"Natural Language Processing (NLP) plays an important role in many applications, including resume filtering, text analysis, and information retrieval. Despite the remarkable accuracy enabled by the advances of machine learning methods, recent …","url":["https://escholarship.org/content/qt0441n1tt/qt0441n1tt.pdf"]} {"year":"2021","title":"But how robust is RoBERTa actually?: A Benchmark of SOTA Transformer Networks for Sexual Harassment Detection on Twitter","authors":["P Basu, TS Roy, A Singhal - 2021 Fifth International Conference on I-SMAC (IoT in …, 2021"],"snippet":"Harassment, which is of sexual/physical in nature, is defined as any unwanted sexual misconduct, including the unwarranted and ill-suited promise of benefit in exchange for sexual indulgence. It also includes a span of actions from verbal …","url":["https://ieeexplore.ieee.org/abstract/document/9640861/"]} {"year":"2021","title":"Can I Take Your Subdomain? Exploring Same-Site Attacks in the Modern Web","authors":["MSMTL Veronese, SCM Maffei"],"snippet":"Page 1. Can I Take Your Subdomain? Exploring Same-Site Attacks in the Modern Web Marco Squarcina1 Mauro Tempesta1 Lorenzo Veronese1 Stefano Calzavara2 Matteo Maffei1 1 TU Wien 2 Università Ca' Foscari Venezia & OWASP …","url":["https://minimalblue.com/data/papers/USENIX21_can_i_take_your_subdomain.pdf"]} {"year":"2021","title":"Can Language Models Encode Perceptual Structure Without Grounding? 
A Case Study in Color","authors":["M Abdou, A Kulmizev, D Hershcovich, S Frank… - arXiv preprint arXiv …, 2021"],"snippet":"… Word-type FastText embeddings trained on Common Crawl (Bojanowski et al., 2017) … These S contexts are either randomly sampled from common crawl (RC), or deterministically generated to allow for control over contextual variation (CC) …","url":["https://arxiv.org/pdf/2109.06129"]} {"year":"2021","title":"Can Small and Synthetic Benchmarks Drive Modeling Innovation? A Retrospective Study of Question Answering Modeling Approaches","authors":["NF Liu, T Lee, R Jia, P Liang - arXiv preprint arXiv:2102.01065, 2021"],"snippet":"Page 1. Can Small and Synthetic Benchmarks Drive Modeling Innovation? A Retrospective Study of Question Answering Modeling Approaches Nelson F. Liu Tony Lee Robin Jia Percy Liang Computer Science Department, Stanford …","url":["https://arxiv.org/pdf/2102.01065"]} {"year":"2021","title":"CausalBERT: Injecting Causal Knowledge Into Pre-trained Models with Minimal Supervision","authors":["Z Li, X Ding, K Liao, T Liu, B Qin - arXiv preprint arXiv:2107.09852, 2021"],"snippet":"… ambiguity and precise causal patterns to extract word level causeeffect pairs from the preprocessed English Common Crawl corpus (5.14 … (2016) for creating a causal lexical knowledge base, we reproduce a variant of their …","url":["https://arxiv.org/pdf/2107.09852"]} {"year":"2021","title":"CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training","authors":["P Huber, A Aghajanyan, B Oğuz, D Okhonko, W Yih… - arXiv preprint arXiv …, 2021"],"snippet":"… Consequently, we propose a novel QA dataset based on the Common Crawl project in this paper. 
Using the readily available schema.org annotation, we extract around 130 million multilingual question-answer pairs, including about 60 million …","url":["https://arxiv.org/pdf/2110.07731"]} {"year":"2021","title":"CDA: a Cost Efficient Content-based Multilingual Web Document Aligner","authors":["T Vu, AA AI, A Moschitti - 2021"],"snippet":"… CommonCrawl Sextet Previous datasets share the same domains that are heavily biased toward French content (see Table 3). We leverage a monthly crawl from CommonCrawl, specifically … Table 4: Parallel English tokens …","url":["https://assets.amazon.science/01/69/5f786b844c08a079eda7e6437c16/cda-a-cost-efficient-content-based-multilingual-web-document-aligner.pdf"]} {"year":"2021","title":"Censorship of Online Encyclopedias: Implications for NLP Models","authors":["E Yang, ME Roberts - arXiv preprint arXiv:2101.09294, 2021"],"snippet":"… Word embeddings are also useful because they can be pre-trained on large corpuses of text like Wikipedia or Common Crawl, and these pre-trained embeddings can then be used as an initial layer in applications that may have less training data …","url":["https://arxiv.org/pdf/2101.09294"]} {"year":"2021","title":"Challenges for cognitive decoding using deep learning methods","authors":["AW Thomas, C Ré, RA Poldrack - arXiv preprint arXiv:2108.06896, 2021"],"snippet":"… learning in 251 the target domain. Transfer learning has been especially successful in CV and NLP, where large 252 publicly available datasets exist (eg, [72,73] and http://www.commoncrawl.org). Here, DL 253 models are first …","url":["https://arxiv.org/pdf/2108.06896"]} {"year":"2021","title":"Changing the World by Changing the Data","authors":["A Rogers - arXiv preprint arXiv:2105.13947, 2021"],"snippet":"… The use of uncontrolled samples (like the Common-Crawl-based corpora) would have to be justified by arguing either that the above types of bias can be safely ignored, or that the benefits outweigh the risks. 
2.2.3 Might not be the best approach …","url":["https://arxiv.org/pdf/2105.13947"]} {"year":"2021","title":"Characterizing and addressing the issue of oversmoothing in neural autoregressive sequence modeling","authors":["I Kulikov, M Eremeev, K Cho - arXiv preprint arXiv:2112.08914, 2021"],"snippet":"… We use the subset of WMT’19 training set consisting of news commentary v12 and common crawl resulting in slightly more than 1M and 2M training sentence pairs for Ru→En and De↔En pairs, respectively. We fine-tuned single model checkpoints …","url":["https://arxiv.org/pdf/2112.08914"]} {"year":"2021","title":"Characterizing Network Infrastructure Using the Domain Name System","authors":["P Kintis - 2020"],"snippet":"Page 1. CHARACTERIZING NETWORK INFRASTRUCTURE USING THE DOMAIN NAME SYSTEM A Dissertation Presented to The Academic Faculty By Panagiotis Kintis In Partial Fulfillment of the Requirements for the Degree …","url":["https://smartech.gatech.edu/bitstream/handle/1853/64165/KINTIS-DISSERTATION-2020.pdf"]} {"year":"2021","title":"Charformer: Fast Character Transformers via Gradient-based Subword Tokenization","authors":["Y Tay, VQ Tran, S Ruder, J Gupta, HW Chung, D Bahri… - arXiv preprint arXiv …, 2021"],"snippet":"… In addition, we compare to the byte-level models from §3.1, which we pre-train on multilingual data. Setup We pre-train CHARFORMER as well as the Byte-level T5 and Byte-level T5+LASC baselines on multilingual …","url":["https://arxiv.org/pdf/2106.12672"]} {"year":"2021","title":"ChineseBERT: Chinese Pretraining Enhanced by Glyph and Pinyin Information","authors":["Z Sun, X Li, X Sun, Y Meng, X Ao, Q He, F Wu, J Li - arXiv preprint arXiv:2106.16038, 2021"],"snippet":"… We collected our pretraining data from CommonCrawl5. 
After pre-processing (such as removing the data with too much … ERNIE BERT-wwm MacBERT ChineseBERT Data Source Heterogeneous Wikipedia Heterogeneous …","url":["https://arxiv.org/pdf/2106.16038"]} {"year":"2021","title":"Claim Detection in Biomedical Twitter Posts","authors":["A Wührl, R Klinger - arXiv preprint arXiv:2104.11639, 2021"],"snippet":"… The first three models (NB, LG, BiLSTM) use 50-dimensional FastText (Bojanowski et al., 2017) embeddings trained on the Common Crawl corpus (600 billion tokens) as input6. NB. We use a (Gaussian) naive Bayes with …","url":["https://arxiv.org/pdf/2104.11639"]} {"year":"2021","title":"Classification of Emotions Based on Text and Qualitative Variables","authors":["J Dobša, D Šebalj, D Bužić"],"snippet":"… Experiments were done both with Common Crawl GloVe pretrained vectors with the dimensionality of 300, and without pretrained vectors. Fifteen percent of learning samples were used for validation. We constructed six neural networks models: CNN …","url":["https://www.researchgate.net/profile/Jasminka-Dobsa/publication/355461190_Classification_of_Emotions_Based_on_Text_and_Qualitative_Variables/links/61716c97750da711ac647d77/Classification-of-Emotions-Based-on-Text-and-Qualitative-Variables.pdf"]} {"year":"2021","title":"Classification of Horror Stories from Reddit","authors":["D Zhou, C Kim, S Gatiganti"],"snippet":"… We further hypothesize that performance would still increase a little if we used the larger pre-trained vectors such as Common Crawl or Twitter sets, but they come with increased download sizes (>1 GB) and increased training time …","url":["http://cs229.stanford.edu/proj2021spr/report2/82008167.pdf"]} {"year":"2021","title":"Classification of Texts Using a Vocabulary of Synonyms","authors":["A Giliazova - 2021 14th International Conference Management of …, 2021"],"snippet":"… This is a Transformer-based masked language model trained on one hundred languages, including Russian language, using more than two 
terabytes of filtered CommonCrawl data. The XLM-R model significantly outperforms multilingual BERT (mBERT) …","url":["https://ieeexplore.ieee.org/abstract/document/9600131/"]} {"year":"2021","title":"CLASSIFICATION OF TWEETS USING MULTIPLE THRESHOLDS WITH SELF-CORRECTION AND WEIGHTED CONDITIONAL","authors":["TN Ahmad - 2020"],"snippet":"Page 1. CLASSIFICATION OF TWEETS USING MULTIPLE THRESHOLDS WITH SELF-CORRECTION AND WEIGHTED CONDITIONAL PROBABILITIES A thesis submitted to The University of Manchester for the degree of Doctor of Philosophy …","url":["https://www.research.manchester.ac.uk/portal/files/188959099/FULL_TEXT.PDF"]} {"year":"2021","title":"Classification-based Quality Estimation: Small and Efficient Models for Real-world Applications","authors":["S Sun, A El-Kishky, V Chaudhary, J Cross, F Guzmán… - arXiv preprint arXiv …, 2021"],"snippet":"… Current state of the art QE systems (Fomicheva et al., 2020b; Ranasinghe et al., 2020a; Sun et al., 2020). are built on XLM-R (Conneau et al., 2019), a contextualized language model pre-trained on more than 2 terabytes of …","url":["https://arxiv.org/pdf/2109.08627"]} {"year":"2021","title":"Classifying Fake and Real Neurally Generated News","authors":["A Govindaraju, J Griffith - 2021 Swedish Workshop on Data Science (SweDS), 2021"],"snippet":"… In order to train and test the model, 3 datasets have been created: One containing real news extracted from a common crawl; the second comprises a neural fake news dataset generated using language modelling techniques; the third comprises a …","url":["https://ieeexplore.ieee.org/abstract/document/9638268/"]} {"year":"2021","title":"CLEF eHealth Evaluation Lab 2021","authors":["L Kelly, LA Alemany, N Brew-Sam, V Cotik, D Filippo…"],"snippet":"… This collection consists of Web pages acquired from Common Crawl,14 which is augmented with additional pages collected from a number of known reliable health Websites and other known unreliable health Websites [9]. 
The topics …","url":["https://www.researchgate.net/profile/Marco-Viviani/publication/350569762_CLEF_eHealth_Evaluation_Lab_2021/links/6073f32e92851c8a7bbea835/CLEF-eHealth-Evaluation-Lab-2021.pdf"]} {"year":"2021","title":"Click This, Not That: Extending Web Authentication with Deception","authors":["T Barron, J So, N Nikiforakis - Proceedings of the 2021 ACM Asia Conference on …, 2021"],"snippet":"… after creation. References. 2020. Common Crawl. https://commoncrawl.org/the-data/ get-started/Google Scholar Google Scholar; 2020. Mouseflow: Session Replay, Heatmaps, Funnels, Forms & User Feedback. https://mouseflow …","url":["https://dl.acm.org/doi/abs/10.1145/3433210.3453088"]} {"year":"2021","title":"ClimateBert: A Pretrained Language Model for Climate-Related Text","authors":["N Webersinke, M Kraus, JA Bingler, M Leippold - arXiv preprint arXiv:2110.12010, 2021"],"snippet":"… 2019), and a subset of CommonCrawl that is said to resemble the storylike style of WINOGRAD schemas (Trinh and Le, 2019). While these sources are valuable to build a model working on general language, it has been shown that domain-specific …","url":["https://arxiv.org/pdf/2110.12010"]} {"year":"2021","title":"CLIP2StyleGAN: Unsupervised Extraction of StyleGAN Edit Directions","authors":["R Abdal, P Zhu, J Femiani, NJ Mitra, P Wonka - arXiv preprint arXiv:2112.05219, 2021"],"snippet":"… The CLIP image encoder [34] is trained on the common-crawl dataset, an internet-scale set of images that encompasses a broad range of visual concepts. 
However, a typical high-quality GAN would be trained on a more specific set of images, for example …","url":["https://arxiv.org/pdf/2112.05219"]} {"year":"2021","title":"Cluster analysis of agricultural household production of self-employed","authors":["AV Plotnikov - IOP Conference Series: Earth and Environmental …, 2021"],"snippet":"… To train this model, we used a sample of Russian-language documents from the CommonCrawl dump, balanced by geography, compiled by Jonathan Dunn and Ben Adams; the corpus Size is 2.1 billion words. Page 5. AGRITECH-IV-2020 IOP Conf …","url":["https://iopscience.iop.org/article/10.1088/1755-1315/677/2/022080/pdf"]} {"year":"2021","title":"Cluster-Based Antiphishing (CAP) Model for Smart Phones","authors":["M Faisal, S Abed - Scientific Programming"],"snippet":"… latest techniques tested on UCI datasets. 4.4.2. Dataset Taken from Mendeley. Source Phishing web page: Phish Tank, Legitimate web page source: Alexa, Common Crawl (1) Dataset Information. In this scenario, the dataset …","url":["https://www.hindawi.com/journals/sp/2021/9957323/"]} {"year":"2021","title":"Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots","authors":["S Tan, S Joty - arXiv preprint arXiv:2103.09593, 2021"],"snippet":"… However, the latter trend is replicated for BUMBLEBEE if we remove this constraint (Table 14 in Appendix G). A possible explanation is that XLM-R and Unicoder were trained on monolingual CommonCrawl (CC) data, while …","url":["https://arxiv.org/pdf/2103.09593"]} {"year":"2021","title":"CoDesc: A Large Code–Description Parallel Dataset","authors":["M Hasan, T Muttaqueen, A Al Ishtiaq, KS Mehrab…"],"snippet":"… 9052–9065, Online. Association for Computational Linguistics. CommonCrawl Common crawl. https:// commoncrawl.org/. Accessed: 2021-01-31. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 
2019 …","url":["http://masumhasan.net/files/CoDesc.pdf"]} {"year":"2021","title":"Combining Natural Language Processing and Machine Learning for Profiling and Fake News Detection","authors":["A Bondielli"],"snippet":"Page 1. PHD PROGRAM IN SMART COMPUTING DIPARTIMENTO DI INGEGNERIA DELL'INFORMAZIONE (DINFO) Combining Natural Language Processing and Machine Learning for Profiling and Fake News Detection Alessandro Bondielli …","url":["https://flore.unifi.it/bitstream/2158/1244287/1/PhDThesis_AlessandroBondielli.pdf"]} {"year":"2021","title":"Combining Pre-trained Word Embeddings and Linguistic Features for Sequential Metaphor Identification","authors":["R Mao, C Lin, F Guerin - arXiv preprint arXiv:2104.03285, 2021"],"snippet":"… For instance, GloVe was trained on Common Crawl2, from billions of web pages (840 billion tokens); ELMo was trained on WMT 2011 News Crawl data3 (800 million tokens); BERT was trained on Wikipedia4 (2.5 billion tokens) …","url":["https://arxiv.org/pdf/2104.03285"]} {"year":"2021","title":"Combining word embeddings as a tool for subject identification","authors":["A Hamm - Wissensaustauschworkshop\" Maschinelles Lernen VII\", 2021"],"snippet":"This talk shows ungoing work aiming at finding subject matter relations between text documents and word clouds. A number of increasingly successful semantic word embedding procedures - learning semantic relations from contextual distributions …","url":["https://elib.dlr.de/147623/1/Combining%20word%20embeddings.pdf"]} {"year":"2021","title":"Comparative Analysis of Bengali Stop Word Detection Using Different Approaches","authors":["RJ Rupa, JF Sohana, M Rahman - … on Automation, Control and Mechatronics for …, 2021"],"snippet":"… In this paper, the pretrained Bengali FastText CBOW model is utilized to produce word vectors trained on the common crawl and Wikipedia [27] and both logistic regression and support vector machine classifiers acquire a performance score of 86%. 
TABLE XII …","url":["https://ieeexplore.ieee.org/abstract/document/9528279/"]} {"year":"2021","title":"Comparative Analysis of Different Transformer Based Architectures Used in Sentiment Analysis","authors":["K Pipalia, R Bhadja, M Shukla - 2020 9th International Conference System Modeling …, 2020"],"snippet":"… Distill BERT Base:66 BookCorpus wiki BERT Distillation T5 Base:220 large:770 Colossal Clean Crawled Corpus (C4) Text Infilling XLNet Base:~110 Large:~340 BookCorpus Wiki, Giga5 ClueWeb, Common Crawl …","url":["https://ieeexplore.ieee.org/abstract/document/9337081/"]} {"year":"2021","title":"Comparing Apples and Oranges: Human and Computer Clustered Affinity Diagrams Under the Microscope","authors":["P Borlinghaus, S Huber - 26th International Conference on Intelligent User …, 2021"],"snippet":"… training corpora WSD no OOV LSI [9] − NMF [18] − LDA [5] − GloVe [26] Wiki word2vec [22] Google News corpus doc2vec [17] 900k sentences from qualitative survey ◦ fastText [6] Common Crawl, Wiki • … FastText was trained on Common Crawl and Wikipedia corpus …","url":["https://dl.acm.org/doi/abs/10.1145/3397481.3450674"]} {"year":"2021","title":"Comparing Contextualised Embeddings for Predicting the (Graded) Effect of Context in Word Similarity","authors":["JM Albers - 2021"],"snippet":"… As data set XLM-RoBERTa uses CommonCrawl instead of Wikipedia, which provides limited scale for low resource languages. 4 Page 5 … The CommonCrawl data set is designed to be more diverse than other data sets, which mainly use Wikipedia and books …","url":["https://dspace.library.uu.nl/bitstream/handle/1874/406113/6400507_JorisAlbers_Thesis.pdf?sequence=1"]} {"year":"2021","title":"Comparing Encoder-Decoder Architectures for Neural Machine Translation: A Challenge Set Approach","authors":["C Doan - 2021"],"snippet":"Machine translation (MT) as a field of research has known significant advances in recent years, with the increased interest for neural machine translation (NMT). 
By combining deep learning with translation, researchers have been able to deliver …","url":["https://ruor.uottawa.ca/bitstream/10393/42936/1/Doan_Coraline_2021_thesis.pdf"]} {"year":"2021","title":"Comparing general and specialized word embeddings for biomedical named entity recognition","authors":["RE Ramos-Vargas, I Román-Godínez, S Torres-Ramos - PeerJ Computer Science, 2021"],"snippet":"… 01-14 Received 2020-11-05 Academic Editor Susan Gauch Subject Areas Bioinformatics, Artificial Intelligence, Computational Linguistics Keywords Word embeddings, BioNER, BiLSTM-CRF, DrugBank, MedLine, Pyysalo …","url":["https://peerj.com/articles/cs-384/"]} {"year":"2021","title":"Comparing the Performance of NLP Toolkits and Evaluation measures in Legal Tech","authors":["MZ Khan, J Mitrovic, JMPDM Granitzer - 2021"],"snippet":"Page 1. Lehrstuhl für Data Science Comparing the Performance of NLP Toolkits and Evaluation measures in Legal Tech Masterarbeit von Muhammad Zohaib Khan Supervised By: Prof. Dr. Jelena Mitrovic 1. Prüfer 2. Prüfer …","url":["https://www.academia.edu/download/65887417/Deep_Neural_Language_Modelling_in_Law.pdf"]} {"year":"2021","title":"Comparing Traditional and Neural Approaches for Detecting Health-Related Misinformation","authors":["D Elsweiler - … IR Meets Multilinguality, Multimodality, and Interaction …","M Fernández-Pichel, DE Losada, JC Pichel…"],"snippet":"… Table 1 reports the main statistics of the resulting datasets. 
We also tested classifiers for the task of distinguishing between useful documents for non-expert end users (ie, trustworthy and readable) and non-useful …","url":["http://persoal.citius.usc.es/jcpichel/docs/2021_CLEF_MFernandezPichel.pdf","https://books.google.de/books?hl=en&lr=lang_en&id=p9FCEAAAQBAJ&oi=fnd&pg=PA78&dq=commoncrawl&ots=eNycpv3vEv&sig=v7CAPFEmV26pL2Lhj2R2t581gZ0"]} {"year":"2021","title":"Comparison of Czech Transformers on Text Classification Tasks","authors":["J Lehečka, J Švec - arXiv preprint arXiv:2107.10042, 2021"],"snippet":"… Researchers from Facebook have published multilingual XLM-RoBERTa model [3] pre-trained on one hundred languages (including Czech), using more than two terabytes of filtered Common Crawl data … 2. 1https …","url":["https://arxiv.org/pdf/2107.10042"]} {"year":"2021","title":"Compilation and Validation of a Large Fake News Dataset in Hungarian","authors":["M Gencsi, Z Bodó, A Szenkovits - 2021 IEEE 19th International Symposium on …, 2021"],"snippet":"… The huBERT model was trained on the Hungarian subset of the Common Crawl and a snapshot of the Hungarian Wikipedia, while the multilingual model was trained on the top 104 languages with the largest Wikipedias, among them also …","url":["https://ieeexplore.ieee.org/abstract/document/9582484/"]} {"year":"2021","title":"Comprehensive analysis of embeddings and pre-training in NLP","authors":["JK Tripathy, SC Sethuraman, MV Cruz, A Namburu… - Computer Science Review, 2021"],"snippet":"…","url":["https://www.sciencedirect.com/science/article/pii/S1574013721000733"]} {"year":"2021","title":"Comprehensive Evaluation of Word Embeddings for Highly Inflectional Language","authors":["P Drozda, K Sopyla, J Lewalski - International Conference on Computational …, 2021"],"snippet":"… The obtained results showed that in terms of accuracy the Facebook fasttext model learned on the Common Crawl collection should be considered the best model under assumptions of experimental session. Keywords. Word …","url":["https://link.springer.com/chapter/10.1007/978-3-030-88113-9_48"]} {"year":"2021","title":"Comprehensive Multi-Modal Interactions for Referring Image Segmentation","authors":["K Jain, V Gandhi - arXiv preprint arXiv:2104.10412, 2021"],"snippet":"… 576. At 448 × 448 resolution, H = W = 14 and at 576 × 576 resolution, H = W = 18. We use GLoVe embeddings [17] pre-trained on Common Crawl 840B tokens to initialize word embedding for words in the expressions. The …","url":["https://arxiv.org/pdf/2104.10412"]} {"year":"2021","title":"Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation","authors":["M Deng, B Tan, Z Liu, EP Xing, Z Hu - arXiv preprint arXiv:2109.06379, 2021"],"snippet":"Page 1. Compression, Transduction, and Creation: A Unified Framework for Evaluating Natural Language Generation Mingkai Deng1∗, Bowen Tan1∗, Zhengzhong Liu1,2, Eric P. Xing1,2,3, Zhiting Hu4 1Carnegie Mellon University …","url":["https://arxiv.org/pdf/2109.06379"]} {"year":"2021","title":"Computational analysis and synthesis of song lyrics","authors":["P Březinová - 2021"],"snippet":"… It uses CMUdict for phonetic transcription, analyzes CommonCrawl8 web data repository for forced rhymes, Google Books Ngrams (Weiss [2015]) for building language model, and WordNet 3.0 (Pearson et al.
[2005]) for semantic relations …","url":["https://dspace.cuni.cz/bitstream/handle/20.500.11956/147665/120397406.pdf?sequence=1"]} {"year":"2021","title":"Computational Challenges for Artificial Intelligence and Machine Learning in Environmental Research","authors":["M Werner, G Dax, M Laass - INFORMATIK 2020, 2021"],"snippet":"… This includes news streams, social media messages, human-curated knowledge such as OpenStreetMap and Wikipedia, opinionated data sources such as blog posts from certain platforms, or blind web scale data collections such as common crawl …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/34809/C21-1.pdf?sequence=1&isAllowed=y"]} {"year":"2021","title":"Computational filling of curatorial gaps in a fine arts exhibition","authors":["A Flexer"],"snippet":"… Please note that we translate all keywords from German to English for this paper. We use the German fasttext5 word em- bedding, which has been trained on about 3 million words from the Wikipediaand 19 million words …","url":["https://computationalcreativity.net/iccc21/wp-content/uploads/2021/09/ICCC_2021_paper_75reduced.pdf"]} {"year":"2021","title":"Computational methods to understand the association between emojis and emotions","authors":["AAM Shoeb - 2021"],"snippet":"Page 1. © 2021 Abu Awal Md Shoeb ALL RIGHTS RESERVED Page 2. COMPUTATIONAL METHODS TO UNDERSTAND THE ASSOCIATION BETWEEN EMOJIS AND EMOTIONS By ABU AWAL MD SHOEB A dissertation submitted to the School of Graduate Studies …","url":["https://rucore.libraries.rutgers.edu/rutgers-lib/65975/PDF/1/"]} {"year":"2021","title":"Computer Science Review","authors":["JK Tripathy, SC Sethuraman, MV Cruz, V Vijayakumar - 2021"],"snippet":"abstract The amount of data and computing power has drastically increased over the last decade, which leads to the development of several new fronts in the field of Natural Language Processing (NLP). 
In addition to that, the entanglement of …","url":["https://www.researchgate.net/profile/Mangalraj-Poobalasubramanian/publication/355132427_Comprehensive_analysis_of_embeddings_and_pre-training_in_NLP/links/6164f98e1eb5da761e836888/Comprehensive-analysis-of-embeddings-and-pre-training-in-NLP.pdf"]} {"year":"2021","title":"CoMSum and SIBERT: A Dataset and Neural Model for Query-Based Multi-document Summarization","authors":["S Kulkarni, S Chammas, W Zhu, F Sha, E Ie - International Conference on Document …, 2021"],"snippet":"… We use the cleaned Common Crawl (CC) corpus [32] to source relevant web documents that are diverse and multi-faceted for generating Natural Questions (NQ) (long-form) answers [21]. Figure 1 illustrates the overall procedure …","url":["https://link.springer.com/chapter/10.1007/978-3-030-86331-9_6"]} {"year":"2021","title":"Concept-Based Label Embedding via Dynamic Routing for Hierarchical Text Classification","authors":["X Wang, L Zhao, B Liu, T Chen, F Zhang, D Wang"],"snippet":"… Hyper-parameters are tuned on a validation set by grid search. We take Stanford's publicly available GloVe 300-dimensional embeddings trained on 42 billion tokens from Common Crawl (Pennington et al., 2014) as initialization for word em- beddings …","url":["https://aclanthology.org/2021.acl-long.388.pdf"]} {"year":"2021","title":"Confused by Path: Analysis of Path Confusion Based Attacks","authors":["SA Mirheidari - 2020"],"snippet":"… 93 iii Page 10. Page 11. List of Tables 4.1 Sample Grouped Web pages. . . . . 29 4.2 Narrowing down the Common Crawl to the candidate set used in our analysis (from left to right). . . . 
36 4.3 Vulnerable pages and sites in the candidate set …","url":["https://iris.unitn.it/retrieve/handle/11572/280512/382175/phd_unitn_Seyed%20Ali_Mirheidari.pdf"]} {"year":"2021","title":"ConRPG: Paraphrase Generation using Contexts as Regularizer","authors":["Y Meng, X Ao, Q He, X Sun, Q Han, F Wu, J Li - arXiv preprint arXiv:2109.00363, 2021"],"snippet":"… We implement the above models, ie p(−→ci|ci), p(←−ci|ci), p(ci), p(c>i|c