{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:07:17.156199Z"
},
"title": "ONE: Toward ONE model, ONE algorithm, ONE corpus dedicated to sentiment analysis of Arabic/Arabizi and its dialects",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": "",
"affiliation": {
"laboratory": "SEA Research group",
"institution": "Aston university / Birmingham",
"location": {
"country": "UK"
}
},
"email": "[email protected]"
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": "",
"affiliation": {
"laboratory": "Laboratoire des M\u00e9thodes de Conception des Syst\u00e8mes (LMCS)",
"institution": "Oued-Smar",
"location": {
"postCode": "BP 68M, 16309",
"settlement": "Alger",
"country": "Alg\u00e9rie"
}
},
"email": ""
},
{
"first": "Fodil",
"middle": [],
"last": "Benali",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ala-Eddine",
"middle": [],
"last": "Hachani",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Arabic is the official language of 22 countries, spoken by more than 400 million speakers. Each one of this country uses at least on dialect for daily life conversation. Then, Arabic has at least 22 dialects. Each dialect can be written in Arabic or Arabizi Scripts. The most recent researches focus on constructing a language model and a training corpus for each dialect, in each script. Following this technique means constructing 46 different resources (by including the Modern Standard Arabic, MSA) for handling only one language. In this paper, we extract ONE corpus, and we propose ONE algorithm to automatically construct ONE training corpus using ONE classification model architecture for sentiment analysis MSA and different dialects. After manually reviewing the training corpus, the obtained results outperform all the research literature results for the targeted test corpora.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Arabic is the official language of 22 countries, spoken by more than 400 million speakers. Each one of this country uses at least on dialect for daily life conversation. Then, Arabic has at least 22 dialects. Each dialect can be written in Arabic or Arabizi Scripts. The most recent researches focus on constructing a language model and a training corpus for each dialect, in each script. Following this technique means constructing 46 different resources (by including the Modern Standard Arabic, MSA) for handling only one language. In this paper, we extract ONE corpus, and we propose ONE algorithm to automatically construct ONE training corpus using ONE classification model architecture for sentiment analysis MSA and different dialects. After manually reviewing the training corpus, the obtained results outperform all the research literature results for the targeted test corpora.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "All the survey works in the literature (Habash, 2010; Farghaly and Shaalan, 2009; Harrat et al., 2017) classify Arabic in three main varieties: 1) Classical Arabic (CA), 2) Modern Standard Arabic (MSA) and 3) Dialectal Arabic (Boudad et al., 2017) . Arabic Dialects are another form of Arabic used in daily life communication. Each dialect shares many features with MSA, but they globally differ in some aspects. Arabic and its dialects can be written either in Arabic Script or in Arabizi one. Arabizi is a form of writing Arabic text that relies on Latin letters, numerals and punctuation rather than Arabic letters (Guellil et al., 2019a,b) . For ex-ample, the Arabic sentence:",
"cite_spans": [
{
"start": 39,
"end": 53,
"text": "(Habash, 2010;",
"ref_id": "BIBREF30"
},
{
"start": 54,
"end": 81,
"text": "Farghaly and Shaalan, 2009;",
"ref_id": "BIBREF19"
},
{
"start": 82,
"end": 102,
"text": "Harrat et al., 2017)",
"ref_id": "BIBREF31"
},
{
"start": 226,
"end": 247,
"text": "(Boudad et al., 2017)",
"ref_id": "BIBREF12"
},
{
"start": 618,
"end": 643,
"text": "(Guellil et al., 2019a,b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": ", meaning \"I am happy,\" is written in Arabizi as \"rani fer7ana\". Arabizi is generally used by Arab speakers in social media or chat and SMS applications. Almost all the work on Arabic sentiment analysis focus on constructing new resources (new lexicons(Abdul-Mageed and Diab, 2012; Mataoui et al., 2016; Mohammad et al., 2016a; Gilbert et al., 2018) , new training corpora (Aly and Atiya, 2013; ElSahar and El-Beltagy, 2015; Mourad and Darwish, 2013; Rahab et al., 2019; Alahmary et al., 2019; Al-Twairesh et al., 2017) , new language models (Baly et al., 2020) ) for each dialect. More recently, particular attention has been given to Arabizi as well (Baert et al., 2020) . However, constructing a unique resource for each dialect is time and effort consuming. Moreover, this resource will be exploitable only for the targeted dialect.",
"cite_spans": [
{
"start": 239,
"end": 281,
"text": "(new lexicons(Abdul-Mageed and Diab, 2012;",
"ref_id": null
},
{
"start": 282,
"end": 303,
"text": "Mataoui et al., 2016;",
"ref_id": "BIBREF35"
},
{
"start": 304,
"end": 327,
"text": "Mohammad et al., 2016a;",
"ref_id": "BIBREF38"
},
{
"start": 328,
"end": 349,
"text": "Gilbert et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 373,
"end": 394,
"text": "(Aly and Atiya, 2013;",
"ref_id": "BIBREF8"
},
{
"start": 395,
"end": 424,
"text": "ElSahar and El-Beltagy, 2015;",
"ref_id": "BIBREF18"
},
{
"start": 425,
"end": 450,
"text": "Mourad and Darwish, 2013;",
"ref_id": "BIBREF40"
},
{
"start": 451,
"end": 470,
"text": "Rahab et al., 2019;",
"ref_id": "BIBREF44"
},
{
"start": 471,
"end": 493,
"text": "Alahmary et al., 2019;",
"ref_id": "BIBREF6"
},
{
"start": 494,
"end": 519,
"text": "Al-Twairesh et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 542,
"end": 561,
"text": "(Baly et al., 2020)",
"ref_id": "BIBREF11"
},
{
"start": 652,
"end": 672,
"text": "(Baert et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper proposes a general algorithm constructing a language model from a large corpus and a training corpus automatically to bridge the gap. It also proposes the transliteration of the Arabizi messages into Arabic. This approach was applied to Algerian dialect (a Maghrebi dialect), having a lack of resources. However, the constructed model was used for classifying the sentiment of messages written in MSA, Tunisian dialect or even Egyptian dialect. The results were very encouraging. However, the manual review of a small part of the training corpus constructed automatically leads to outperform all the research literature results for the testing corpora cited above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The aim of the proposed model is to analyse the sentiment of Arabic message (written with both Arabic/Arabizi scripts). In this context, we need to focus on three categories of works: 1) Works on Arabizi transliteration. 2) Works on lexicon-based approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The research works inspiring the proposed work",
"sec_num": "2"
},
{
"text": "3) Works on corpus-based approach. In the following sections, we present the set of strengths/weaknesses of the research works inspiring our proposed approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The research works inspiring the proposed work",
"sec_num": "2"
},
{
"text": "The transliteration approach is firstly inspired by the work presented by van et al. (van der Wees et al., 2016) , where the authors used a table extracted from Wikipedia 1 for the passage from Arabizi to Arabic. We also present a passage table from Arabizi to Arabic. However, we also use a set of passage rules for handling the position of letters and some missed cases in the literature studied approaches. The proposed approach is also inspired by the works presented in (Al-Badrashiny et al., 2014; Darwish, 2013; May et al., 2014; van der Wees et al., 2016) . All these authors generate a set of possible candidates for the transliteration of an Arabizi word into Arabic. The major issue of these approaches is the omission of some candidates because the vowels are not properly handled. Finally, This work is also inspired by the proposed approach in (Darwish, 2014; van der Wees et al., 2016 ) using a language model to determine the best possible candidate for a word in Arabizi. On the other hand, these works assimilate the task of transliteration to a translation task. The major drawback of these approaches is that they depend on a parallel corpus. The used corpus is usually constructed manually.",
"cite_spans": [
{
"start": 85,
"end": 112,
"text": "(van der Wees et al., 2016)",
"ref_id": "BIBREF46"
},
{
"start": 475,
"end": 503,
"text": "(Al-Badrashiny et al., 2014;",
"ref_id": "BIBREF4"
},
{
"start": 504,
"end": 518,
"text": "Darwish, 2013;",
"ref_id": "BIBREF13"
},
{
"start": 519,
"end": 536,
"text": "May et al., 2014;",
"ref_id": "BIBREF36"
},
{
"start": 537,
"end": 563,
"text": "van der Wees et al., 2016)",
"ref_id": "BIBREF46"
},
{
"start": 858,
"end": 873,
"text": "(Darwish, 2014;",
"ref_id": "BIBREF14"
},
{
"start": 874,
"end": 899,
"text": "van der Wees et al., 2016",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The research works inspiring our transliteration approach",
"sec_num": "2.1"
},
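To make the candidate-generation-plus-language-model idea discussed above concrete, here is a minimal Python sketch (not the authors' exact rules): an Arabizi token is expanded into Arabic candidates through a small, purely illustrative mapping table in which Latin vowels may be dropped, and a unigram count model built from a large Arabic corpus selects the best candidate. The mapping entries, the toy counts and the function names are assumptions for illustration only.

```python
# Minimal sketch: Arabizi -> Arabic transliteration by candidate generation
# plus unigram language-model scoring. The mapping table below is illustrative
# only; the real system uses a full passage table and positional rules.
from collections import Counter
from itertools import product

ARABIZI_MAP = {          # each Arabizi character maps to one or more Arabic strings
    "r": ["ر"],
    "a": ["ا", ""],      # Latin vowels may be kept (long vowel) or dropped (short vowel)
    "n": ["ن"],
    "i": ["ي", ""],
    "7": ["ح"],
    "f": ["ف"],
    "e": ["ي", ""],
    "h": ["ه", "ح"],
}

def generate_candidates(arabizi_word):
    """Enumerate every Arabic spelling licensed by the mapping table."""
    options = [ARABIZI_MAP.get(ch, [ch]) for ch in arabizi_word.lower()]
    return {"".join(parts) for parts in product(*options)}

def best_candidate(arabizi_word, unigram_counts):
    """Pick the candidate preferred by a (unigram) language model."""
    candidates = generate_candidates(arabizi_word)
    return max(candidates, key=lambda c: unigram_counts.get(c, 0))

# Toy usage: the unigram model would normally be built from the large Arabic corpus.
unigram_counts = Counter({"راني": 120, "فرحانة": 80, "رني": 3})
print(best_candidate("rani", unigram_counts))  # picks the most frequent candidate
```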
{
"text": "The proposed sentiment lexicon construction approach is firstly inspired by the group of approaches using the automatic translation of an existing English lexicon (Mohammad et al., 2016a; Salameh et al., 2015; Mohammad et al., 2016b; Abdul-Mageed and Diab, 2012; Abdulla et al., 2014) . The majority of these approaches are based on Google translate. However, Google translate deals with MSA only (i.e. Google translate is not adequate for translating dialects). Moreover, the Arabic/English dictionaries are covering MSA and some dialects such as Egyptian and Levantine. Limited resources are dedicated to Maghrebi dialects such as Tunisian, Moroccan or Algerian dialects. Hence, we opt to use Glosbe API 2 , which is an online API offering the translation from/to MSA and almost all dialects. This API is open-source (i.e. no fees are required for using it). The proposed approach is also inspired by the work using a semi-automatic construction (El-Beltagy, 2016) where the authors manually review the constructed lexicon.",
"cite_spans": [
{
"start": 163,
"end": 187,
"text": "(Mohammad et al., 2016a;",
"ref_id": "BIBREF38"
},
{
"start": 188,
"end": 209,
"text": "Salameh et al., 2015;",
"ref_id": "BIBREF45"
},
{
"start": 210,
"end": 233,
"text": "Mohammad et al., 2016b;",
"ref_id": "BIBREF39"
},
{
"start": 234,
"end": 262,
"text": "Abdul-Mageed and Diab, 2012;",
"ref_id": "BIBREF0"
},
{
"start": 263,
"end": 284,
"text": "Abdulla et al., 2014)",
"ref_id": "BIBREF1"
},
{
"start": 948,
"end": 966,
"text": "(El-Beltagy, 2016)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The research works inspiring our lexicon-based approach",
"sec_num": "2.2"
},
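As a rough sketch of the translation-based lexicon construction described above (before the manual review step), the snippet below carries each English entry's polarity over to its dialectal translations; translate_to_algerian is a hypothetical placeholder for the translation service (the paper uses the Glosbe API, whose exact interface is not reproduced here).

```python
# Sketch of building a dialectal sentiment lexicon from an English one.
# `translate_to_algerian` is a hypothetical stand-in for the translation
# service; plug in any word-level translator.
def translate_to_algerian(english_word):
    # Placeholder: should return a list of candidate translations for the word.
    raise NotImplementedError("wire this to a translation service")

def build_lexicon(english_lexicon):
    """english_lexicon: dict mapping English words to 'pos' / 'neg'."""
    algerian_lexicon = {}
    for word, polarity in english_lexicon.items():
        for translation in translate_to_algerian(word):
            # Keep the first polarity seen; conflicting or wrong entries are
            # resolved later during the manual review step (lexicon V2).
            algerian_lexicon.setdefault(translation, polarity)
    return algerian_lexicon
```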
{
"text": "For handling morphological aspects of Arabic dialects, some works relying on stemming tools, dedicated to MSA. For example, the work of Mataoui et al. (Mataoui et al., 2016) used the Khoja stemmer (Khoja and Garside, 1999) for stemming the DALG, which is designed for MSA. In our work, we treat agglutination by employing an algorithm that supports the originality of the studied dialect (DALG), principally related to its prefixes, suffixes, and negative pronouns. The work of Al-Twairesh et al. (Al-Twairesh et al., 2017) also inspires the proposed approach. This work is relying on sentiments words for automatically annotating large corpus in Saudi dialects. However, in contrast to this work, our approach is not only concentrating on sentiment words, but it is also based on a sentiment algorithm for handling opposition, Arabic morphology and negation.",
"cite_spans": [
{
"start": 136,
"end": 173,
"text": "Mataoui et al. (Mataoui et al., 2016)",
"ref_id": "BIBREF35"
},
{
"start": 197,
"end": 222,
"text": "(Khoja and Garside, 1999)",
"ref_id": "BIBREF34"
},
{
"start": 497,
"end": 523,
"text": "(Al-Twairesh et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The research works inspiring our lexicon-based approach",
"sec_num": "2.2"
},
{
"text": "The works that firstly inspire our proposed (Pak and Paroubek, 2010; Hogenboom et al., 2013; Yadav and Pandya, 2017) are not dedicated to Arabic but other languages (English and Dutch) The main idea of these works is to use emoticons for automatically tag a large corpus. Hence, the proposed contribution also exploits the presence of emoticons to determine the sentiment of messages. However, it can be seen that all emoticons are not appropriate for determining sentiment. Hence, our proposed approach also considers emoticons for annotation but not all emoticons, only the emoticons with the strongest sentiment (either positive or negative). Our approach of constructing corpus is also inspired by the work of Gamal et al. (Gamal et al., 2019) that they relied on a sentiment lexicon to automatically annotate a sentiment corpus. However, their algorithm relies only on the positive and negative words count. For these authors, if the number of positives words is higher than or equal twice the number of negatives words than the message is considered as positive, and the same philosophy is applied for the negative messages. In contrast to these authors, we developed more sophisticated algorithms handling Arabic agglutination, opposition and negation. We also consider a set of heuristics, including the number of words.",
"cite_spans": [
{
"start": 44,
"end": 68,
"text": "(Pak and Paroubek, 2010;",
"ref_id": "BIBREF42"
},
{
"start": 69,
"end": 92,
"text": "Hogenboom et al., 2013;",
"ref_id": "BIBREF32"
},
{
"start": 93,
"end": 116,
"text": "Yadav and Pandya, 2017)",
"ref_id": "BIBREF47"
},
{
"start": 727,
"end": 747,
"text": "(Gamal et al., 2019)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The research works inspiring our corpus-based approach",
"sec_num": "2.3"
},
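To make the contrast with the count-only rule of Gamal et al. concrete, the following minimal Python sketch implements the "twice as many" counting rule and a toy negation flip; the word lists are illustrative, and the actual algorithm handling agglutination, prefixes/suffixes and opposition is substantially richer.

```python
# Count-based annotation in the style of Gamal et al. (2019), plus a toy
# negation flip; the lexicons and negation list here are illustrative only.
POSITIVE = {"mlih", "zin"}       # "good", "nice"
NEGATIVE = {"khayb", "kbi7"}     # "bad", "ugly"
NEGATION = {"machi", "ma"}       # negation particles

def annotate_by_counts(tokens):
    """Gamal-style rule: positive if pos >= 2 * neg, negative if neg >= 2 * pos."""
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos >= 2 * neg and pos > 0:
        return "positive"
    if neg >= 2 * pos and neg > 0:
        return "negative"
    return "unknown"

def annotate_with_negation(tokens):
    """Toy refinement: a negation word flips the polarity of the next sentiment word."""
    pos = neg = 0
    negate = False
    for t in tokens:
        if t in NEGATION:
            negate = True
            continue
        if t in POSITIVE:
            pos, neg = (pos, neg + 1) if negate else (pos + 1, neg)
        elif t in NEGATIVE:
            pos, neg = (pos + 1, neg) if negate else (pos, neg + 1)
        negate = False
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "unknown"

print(annotate_by_counts("rani mlih".split()))        # -> positive
print(annotate_with_negation("machi mlih".split()))   # -> negative
```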
{
"text": "Our contribution is also inspired by the work of Medhaffar et al. (Medhaffar et al., 2017) , which is the unique work, to the best of our knowledge, focusing on Arabic and Arabizi at the same time. However, in contrast to this work, we used a more voluminous corpus (which was constructed automatically), and we propose a transliteration step. Finally, our contribution is also inspired by the approach proposed by Duwairi et al. (Duwairi et al., 2016) . Hence, we firstly define and apply a transliteration step. However, in contrast to this work, our contribution is dealing with ambiguities treatment (especially vowels ambiguities), and our corpus sentiment is constructed automatically, so it is more voluminous than the corpora which the authors constructed manually.",
"cite_spans": [
{
"start": 66,
"end": 90,
"text": "(Medhaffar et al., 2017)",
"ref_id": "BIBREF37"
},
{
"start": 415,
"end": 452,
"text": "Duwairi et al. (Duwairi et al., 2016)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The research works inspiring our corpus-based approach",
"sec_num": "2.3"
},
{
"text": "The general algorithm proposed and developed in the context of this work is presented in Algorithm 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed algorithm",
"sec_num": "3.1"
},
{
"text": "It can be seen from Algorithm 1 that the proposed steps are executed in the following order :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed algorithm",
"sec_num": "3.1"
},
{
"text": "1. The first step is to manually construct some resources including the list of the identifiers of some famous Algerian pages, the list of positive/negative emoticons and expressions, the list of prefixes/suffixes and finally the list of negation/ opposition terms. This step is illustrated by the function MANUALRESCON-STRUCTION().",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed algorithm",
"sec_num": "3.1"
},
{
"text": "2. The second step is to automatically extract comments from Facebook pages (using the collected identifiers). This step is illustrated by the function COMMENTSEXTRACTION(Facebook key ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed algorithm",
"sec_num": "3.1"
},
{
"text": "3. The third step is to automatically construct the Algerian sentiment lexicon by relying on an existing English sentiment lexicon. This step is illustrated by the function AUTOMATICARLEXCONSTRUCT(Eng lex ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed algorithm",
"sec_num": "3.1"
},
{
"text": "4. The fourth step is to review the constructed lexicon manually. This step is illustrated by the function MANUALLEXREVIEW(Alg lexV1 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The proposed algorithm",
"sec_num": "3.1"
},
{
"text": "annotate each message from the corpus (extracted from Facebook). This step is illustrated by the function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The fifth step is to automatically",
"sec_num": "5."
},
{
"text": "ANNOTATE(Alg lexV2 , m, L emp , L emn , L exp , L exn , L pr , L s f , L neg , L op , pos, neg).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The fifth step is to automatically",
"sec_num": "5."
},
{
"text": "6. The sixth step is to transliterate each message in the used Arabizi corpus. This step is illustrated by the function ARABIZ-ITRANSLITERATE(CORPUS). For translteration we rely on the same algorithm proposed and used by Guellil et al. (Guellil et al., 2018c (Guellil et al., , 2020a (Guellil et al., , 2018a 7. The last step is to classify the sentiment (written with Arabic script) in both corpora (the initially Arabic one and the transliterated one). This step is illustrated by the function SENTIMENTCLASS(corpus, Senti A lg).",
"cite_spans": [
{
"start": 236,
"end": 258,
"text": "(Guellil et al., 2018c",
"ref_id": "BIBREF25"
},
{
"start": 259,
"end": 283,
"text": "(Guellil et al., , 2020a",
"ref_id": "BIBREF24"
},
{
"start": 284,
"end": 308,
"text": "(Guellil et al., , 2018a",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The fifth step is to automatically",
"sec_num": "5."
},
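Taken together, and read alongside Algorithm 1 below, these seven steps correspond to the following Python-style skeleton; the function names mirror those used in the text, and every body is a placeholder rather than the actual implementation.

```python
# Skeleton of the overall pipeline (Algorithm 1). Each helper is a stub that
# stands in for the corresponding step in the text, not the real implementation.
def manual_res_construction():                raise NotImplementedError  # step 1
def comments_extraction(facebook_key):        raise NotImplementedError  # step 2
def automatic_ar_lex_construct(eng_lex):      raise NotImplementedError  # step 3
def manual_lex_review(alg_lex_v1):            raise NotImplementedError  # step 4
def annotate(lex, message, resources):        raise NotImplementedError  # step 5
def arabizi_transliterate(corpus, ar_corp1):  raise NotImplementedError  # step 6
def sentiment_class(corpus, senti_alg):       raise NotImplementedError  # step 7

def run_pipeline(facebook_key, eng_lex, ar_test_corpora, arabizi_test_corpora):
    # 1. Manual construction of seed resources (emoticons, affixes, negation...).
    resources = manual_res_construction()
    # 2. Extraction of Facebook comments (large Arabic / Arabizi corpora).
    ar_corp1, ar_corp2 = comments_extraction(facebook_key)
    # 3-4. Automatic lexicon construction from the English lexicon, then manual review.
    alg_lex_v2 = manual_lex_review(automatic_ar_lex_construct(eng_lex))
    # 5. Automatic annotation of the Arabic corpus -> Senti_Alg training corpus.
    senti_alg = [(m, annotate(alg_lex_v2, m, resources)) for m in ar_corp2]
    # 6-7. Transliterate the Arabizi test corpora, then classify all test corpora.
    for corpus in ar_test_corpora:
        sentiment_class(corpus, senti_alg)
    for corpus in arabizi_test_corpora:
        sentiment_class(arabizi_transliterate(corpus, ar_corp1), senti_alg)
```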
{
"text": "For classification, we use two kinds of algorithms, shallow and deep. For both classifications, we extract features with word embedding techniques. With shallow classification, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The used models for classification",
"sec_num": "3.2"
},
{
"text": "Alg lexV1 : Algerian Lexicon V1, Alg lexV2 : Algerian lexicon V2, Ar corp1 ,Ar corp2 : Large Arabic corpora, Senti Alg : Automatic annotated Algerian (Arabic) corpus L f : List identifiant of Facebook pages, L emp : List of positive emoticons, L emn : List of negative emoticons, L exp : List of positive expressions, L exn : List of negative expressions, L pr : List of prefixes, L s f : List of suffixes, L neg : List of negation terms, L op : List of opposition terms 3 1: Senti Alg \u2190 \u2205 2: L f , L emp , L emn , L exp , L exn , L pr , L s f , L neg , L op \u2190MANUALRESCONSTRUCTION() 3: Ar corp1 ,Ar corp2 \u2190COMMENTSEXTRACTION(Facebook key ) 4: Alg lexV1 \u2190AUTOMATICARLEXCONSTRUCT(Eng lex ) 5: Alg lexV2 \u2190MANUALLEXREVIEW(Alg lexV1 ) 6: for each m \u2208 Ar corp2 do 7: polarity\u2190 ANNOTATE(Alg lexV2 , m, L emp , L emn , L exp , L exn , L pr , L s f , L neg , L op ) 8:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The used models for classification",
"sec_num": "3.2"
},
{
"text": "add m, polarity in Senti Alg 9: end for 10: for each corpus \u2208 ArTest corp do 11: SENTIMENTCLASS(corpus, Senti Alg ) 12: end for 13: for each corpus \u2208 ArabiziTest corp do 14:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The used models for classification",
"sec_num": "3.2"
},
{
"text": "Corpus tr \u2190 ARABIZITRANSLITERATE(corpus, Ar corp1 ) 15: ADD(ArabiziTrTest corp , Corpus tr ) 16: SENTIMENTCLASSIFICATION(Corpus tr , Senti Alg )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The used models for classification",
"sec_num": "3.2"
},
{
"text": "17: end for we use Word2vec algorithm. While we use fastText for deep classification. For Word2vec, we used a context of 10 words to produce representations for both CBOW and SG of length 300. For classification we use five Algorithms such as: GaussianNB (GNB), LogisticRegression (LR), RandomForset (RF), SGDClassifier (SGD, with loss='log' and penalty='l1') and LinearSVC (LSVC with C='1e1'). For deep learning classification, we first used the model presented by Attia et al. (Attia et al., 2018) with five layers using 300 filters and a width equal to 7. To enrich this model, our approach also uses the CBOW and SG of FastText for calculating the weights of embedding matrix. Also, our approach used other deep learning algorithms, such as LSTM and Bi-LSTM. Table 1 gives more details about the configuration and architecture of the layers of our models on the training corpus. For all the models, we use an epoch equal to 100 with an early stopping parameter. This parameter is used for stopping the iteration in the absence of improvements (for handling overfitting). This parameter al-lows stopping the models after 20 epochs (on average). Adam optimiser is used in all the deep learning experiments.",
"cite_spans": [
{
"start": 479,
"end": 499,
"text": "(Attia et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 763,
"end": 770,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "The used models for classification",
"sec_num": "3.2"
},
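As a simplified illustration of the shallow setting described above, the sketch below trains a 300-dimensional skip-gram Word2vec model with a 10-word context window, averages word vectors per message, and fits the five shallow classifiers. It assumes gensim >= 4.0 and a recent scikit-learn (where loss='log_loss' replaces the older loss='log' named in the text); the deep (CNN/LSTM/Bi-LSTM) side and the fastText embedding matrix are omitted.

```python
# Simplified shallow pipeline: Word2vec features + the five shallow classifiers.
# Assumes gensim >= 4.0 (vector_size) and a recent scikit-learn release.
import numpy as np
from gensim.models import Word2Vec
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.metrics import f1_score

def train_embeddings(tokenized_messages):
    # Skip-gram (sg=1); CBOW would use sg=0. 300 dimensions, 10-word context.
    return Word2Vec(sentences=tokenized_messages, vector_size=300, window=10,
                    sg=1, min_count=1, workers=4)

def message_vector(tokens, w2v):
    # Average the vectors of known words; zero vector if none are known.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

def run_shallow(train_msgs, train_labels, test_msgs, test_labels):
    w2v = train_embeddings(train_msgs + test_msgs)
    X_train = np.array([message_vector(m, w2v) for m in train_msgs])
    X_test = np.array([message_vector(m, w2v) for m in test_msgs])
    classifiers = {
        "GNB": GaussianNB(),
        "LR": LogisticRegression(max_iter=1000),
        "RF": RandomForestClassifier(),
        "SGD": SGDClassifier(loss="log_loss", penalty="l1"),
        "LSVC": LinearSVC(C=10.0),
    }
    for name, clf in classifiers.items():
        clf.fit(X_train, train_labels)
        print(name, f1_score(test_labels, clf.predict(X_test), average="macro"))
```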
{
"text": "For the experiments part, the following dataset were used: 1) A large corpus (Ar corpus2), extracted in November 2017, and containing 15,407,910 messages with 7,926,504 written in Arabic letters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "2) ALG Senti (Guellil et al., 2018b (Guellil et al., , 2020a is an annotated sentiment corpus which was automatically constructed based on AL-GLex V2 (Guellil et al., 2020b ) and on the sentiment algorithm that we proposed and implemented. The annotation process also considers other features such as the sentiment score of the message and the number of positives/negatives words recognised in the lexicon. The final corpus contains 127,004 positive messages and 127,004 negative ones.",
"cite_spans": [
{
"start": 13,
"end": 35,
"text": "(Guellil et al., 2018b",
"ref_id": "BIBREF23"
},
{
"start": 36,
"end": 60,
"text": "(Guellil et al., , 2020a",
"ref_id": "BIBREF24"
},
{
"start": 150,
"end": 172,
"text": "(Guellil et al., 2020b",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "3) TSAC 4 (Medhaffar et al., 2017 ) is a Tunisian sentiment corpus. This corpus is the unique corpus in the research literature, to the best of our knowledge, containing both Arabic and Arabizi. For testing our approach on other corpus presented in the research literature, we propose to transliterate the Arabizi part of TSAC into Arabic, using our transliteration approach. 4) SANA Alg 5 (Rahab et al., 2019) is an Algerian annotated sentiment corpus. This corpus includes 513 messages that were manually annotated. 5) ASTD/QCRI/ARTwitter 6 (Altowayan and Tao, 2016) is an Arabic corpus including 1,589 tweets from astd (Nabil et al., 2015) , 1, 951 tweets from ArTwitter (Abdulla et al., 2013) and 754 from QCRI (Mourad and Darwish, 2013) ",
"cite_spans": [
{
"start": 10,
"end": 33,
"text": "(Medhaffar et al., 2017",
"ref_id": "BIBREF37"
},
{
"start": 390,
"end": 410,
"text": "(Rahab et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 622,
"end": 642,
"text": "(Nabil et al., 2015)",
"ref_id": "BIBREF41"
},
{
"start": 674,
"end": 696,
"text": "(Abdulla et al., 2013)",
"ref_id": null
},
{
"start": 715,
"end": 741,
"text": "(Mourad and Darwish, 2013)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dataset",
"sec_num": "4.1"
},
{
"text": "The aim of this work is to classify an Arabic message into positive/negative automatically. More particularly to use a language model and the resources constructed for one dialect for classifying the sentiments of an another dialect (and MSA). Hence, for validating our approach , we applied it on four corpora annotated manually by natives speakers. Two of these corpora are in Algerian dialect (Senti Alg (Guellil et al., 2018b,a) and Sana Alg (Rahab et al., 2019) ), one of them is in MSA (ASTD QCRI ArTwitter)(Altowayan and Tao, 2016) and the last one in Tunisian dialect (TSAC) (Medhaffar et al., 2017) . Two of these corpora include both Arabic and Arabizi (Senti Alg and TSAC) and the others are dedicated to Arabic script. Our purpose behind the different experiments is not only to validate our approach but to also highlight its adaptability to MSA and other dialects written with both scripts Arabic and Arabizi. For doing so, we apply the following steps:",
"cite_spans": [
{
"start": 407,
"end": 432,
"text": "(Guellil et al., 2018b,a)",
"ref_id": null
},
{
"start": 446,
"end": 466,
"text": "(Rahab et al., 2019)",
"ref_id": "BIBREF44"
},
{
"start": 583,
"end": 607,
"text": "(Medhaffar et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.2"
},
{
"text": "1. For Senti Alg, we focus on both sides, Arabic and Arabizi. For Arabizi part, we investigate the results using both automatic and manual transliteration.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.2"
},
{
"text": "2. As Sana Alg and ASTD QCRI ARTwitter use only Arabic script, no need for the transliteration process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.2"
},
{
"text": "3. As TSAC Test represents a combination between Arabic and Arabizi messages, for each experiment on TSAC, we use both, the initial test (Initial test) corpus and the test corpus obtained after applying the proposed translietration system (Transliterated test).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.2"
},
{
"text": "The different experiments and the obtained results are presented in the following sections. ). However, the results obtained using Word2vec model combined with shallow classifiers outperform those obtained using fast-Text model combined with deep learning classifiers. The results obtained by using the corpus transliterated manually (Senti_Alg_ test_trmanu) are better than those obtained on the corpus transliterated automatically ((Senti_Alg_test_trauto). However, the improvement between both transliterations is non-consequential (0.9, less than 1 point For F1-score). This small improvement rate highlights the quality of the proposed transliteration system. More details are presented (in the Appendices, section 7) in the Table 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 730,
"end": 737,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Experimental results",
"sec_num": "4.2"
},
{
"text": "For the experiments, we use both versions of the Tunisian corpus. We denote the version in its current state (before transliteration) as TSAC test. We denote the version after proceeding to the transliteration as TSAC Test Tr.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results on Tunisian dialect",
"sec_num": "4.2.2"
},
{
"text": "To compare the sentiment analysis results obtained before and after transliteration step, we divide However, the results obtained using Word2vec model combined with shallow classifiers outperform those obtained using FastText model combined with deep learning classifiers. More details are presented (in the Appendices, Section 7) in the Table 6) 5 Synthesis and corpus validation",
"cite_spans": [],
"ref_spans": [
{
"start": 338,
"end": 346,
"text": "Table 6)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on Tunisian dialect",
"sec_num": "4.2.2"
},
{
"text": "The best results obtained from the different experiments and that we discussed in Section 4.2 are summarised in Table 2. For Algerian dialect, the corpora that we used (i.e. Senti_Alg_test_Arabic, Senti_ Alg_test_trauto and Senti_Alg_test_trmanu) were presented and used in many research papers (Guellil et al., 2017 (Guellil et al., , 2018b Imane et al., 2019) . We based on the issues of each presented research work to improve the results presented in this paper (where the best F1= 87.77% for the Arabic side and F=76.13% for the Arabizi side, after transliteration). The best results obtained on SANA Alg are up to 81.00% (for F1-score). This result outperforms the results presented in the research literature, where the F1-score presented by Rahab et al. .(Rahab et al., 2019 ) was up to 75%. Hence, our approach and corpus lead to an improvement of 6% on this corpus.",
"cite_spans": [
{
"start": 295,
"end": 316,
"text": "(Guellil et al., 2017",
"ref_id": "BIBREF27"
},
{
"start": 317,
"end": 341,
"text": "(Guellil et al., , 2018b",
"ref_id": "BIBREF23"
},
{
"start": 342,
"end": 361,
"text": "Imane et al., 2019)",
"ref_id": "BIBREF33"
},
{
"start": 749,
"end": 782,
"text": "Rahab et al. .(Rahab et al., 2019",
"ref_id": "BIBREF44"
}
],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Table 2.",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Synthesis",
"sec_num": "5.1"
},
{
"text": "For Tunisian dialect, it can be seen that the results obtained by using the corpus transliterated (TSAC_Test_Tr) are relatively better than those obtained on the initial corpus (TSAC_ Test) (without transliteration). Medhaffar et al. (Medhaffar et al., 2017) obtained an F1score up to 78% for TSAC Test corpus. Our best results by using our approach on the corpus (Senti Alg) is up to 75.24% (F1-score). The results are then comparable to the results obtained by the authors (even with a corpus constructed automatically and dedicated to Algerian dialect). However, our transliteration system drastically improves the results. The results are up to 91.59% after transliterating both the training and the testing corpus (by using TSAC train for the training). Hence an improvement of 14% was observed on this corpus. Another interesting observation is that, except for the training corpus, all the approach and corpora used for TSAC corpus are the same that we used for our other experiments. The vast corpus used for training Word2vec and fastText dedicated to Al-gerian dialect. The language model used for extracting the best candidate transliteration was also dedicated to Algerian dialect. Finally, concerning MSA, we opt for using the corpus ASTD/QCRI/ArTwitter (Altowayan and Tao, 2016). The best results obtained by Altowayen et al. (Altowayan and Tao, 2016) are up to 79.62% (for F1-score). It can be seen from Table 6 that the best results that we obtained are up to 80.58% (for F1-score). Moreover, This corpus is dedicated to MSA with a focus on Egyptian dialect (for ASTD). Hence, our approach and corpus, which are dedicated to Algerian dialect, outperforms the results presented for corpora dedicated to MSA and Egyptian dialect.",
"cite_spans": [
{
"start": 217,
"end": 258,
"text": "Medhaffar et al. (Medhaffar et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 1419,
"end": 1427,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthesis",
"sec_num": "5.1"
},
{
"text": "To validate the constructed corpus automatically, we focus on a sample containing 3,048 messages (1,488 positives and 1,560 negatives). Afterwards, we manually review this sample. The messages that are correctly annotated were kept, and the messages which were wrongly annotated were corrected. Our first observation is that, among the 3,048 messages that are manually reviewed, 85.17% (2,596 messages) were correctly annotated. To the best of our knowledge, this corpus is the first manually checked annotated sentiment corpus that handles DALG as well as MSA. For showing the efficiency of the manually reviewed corpus, we present Table 3 . Almost all the results were improved with the corpus, which was reviewed manually. The best F1 on Senti Alg test Arabic is now up to 90.16 (it was up to 87.77 with Senti Alg auto). The best F1 on Senti Alg test trauto is now up to 80.95 (it was up to 75.23 with Senti Alg auto). The best F1 on Senti Alg test trmanu is now up to 83.10 (it was up to 76.13 with Senti Alg auto). The best F1 on ASTD/QCRI/ARTwitter is now up to 81.75 (it was up to 80.58 with Senti Alg auto). The decrease for SANA Alg is insignificant, where the best F1 was up to 81.00, and now it is up to 80.97.",
"cite_spans": [],
"ref_spans": [
{
"start": 633,
"end": 640,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Manual corpus validation",
"sec_num": "5.2"
},
{
"text": "Concerning the experiments on Tunisian corpus (TSAC), It can be seen from ) . These results outperform the results presented in the research literature ((Medhaffar et al., 2017) ), where the bestpresented F1 was up to 78.00. Hence, the manual reviewing of a corpus which was initially constructed automatically outperforms all the results presented in the research literature.",
"cite_spans": [
{
"start": 152,
"end": 177,
"text": "((Medhaffar et al., 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [
{
"start": 74,
"end": 75,
"text": ")",
"ref_id": null
}
],
"eq_spans": [],
"section": "Manual corpus validation",
"sec_num": "5.2"
},
{
"text": "The major contribution in this paper is the new perspectives that it opens:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Automatic training corpus construction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Using one language model trained for one dialect to MSA and either to other dialects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Moreover, using the training corpus of one dialect to others (which is a case of transfer learning).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Stop handling Arabizi as it is. Translitertaion is crucial for improving the results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Moreover, only simple word embedding models were used (word2vec and fastText). It was for showing the efficacy of the approach even with the fastest models. However, in the future, we are planning to improve this approach with the most recent models such as BERT (Devlin et al., 2018) ",
"cite_spans": [
{
"start": 263,
"end": 284,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://en.wikipedia.org/wiki/Arabic chat alphabet",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://en.glosbe.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/fbougares/TSAC 5 http://rahab.e-monsite.com/medias/files/corpus.rar 6 https://github.com/iamaziz/arembeddings/tree/master/datasets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this section, more details about the obtained results on each model, corpus are given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Toward building a large-scale arabic sentiment lexicon",
"authors": [
{
"first": "Muhammad",
"middle": [],
"last": "Abdul-Mageed",
"suffix": ""
},
{
"first": "Mona",
"middle": [],
"last": "Diab",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 6th international global WordNet conference",
"volume": "",
"issue": "",
"pages": "18--22",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Muhammad Abdul-Mageed and Mona Diab. 2012. Toward building a large-scale arabic sentiment lexicon. In Proceedings of the 6th international global WordNet conference, pages 18-22.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Automatic lexicon construction for arabic sentiment analysis",
"authors": [
{
"first": "Nawaf",
"middle": [],
"last": "Abdulla",
"suffix": ""
},
{
"first": "Salwa",
"middle": [],
"last": "Mohammed",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Al-Kabi",
"suffix": ""
}
],
"year": 2014,
"venue": "Future Internet of Things and Cloud",
"volume": "",
"issue": "",
"pages": "547--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nawaf Abdulla, Salwa Mohammed, Mahmoud Al-Ayyoub, Mohammed Al-Kabi, et al. 2014. Au- tomatic lexicon construction for arabic sentiment analysis. In Future Internet of Things and Cloud (Fi- Cloud), 2014 International Conference on, pages 547- 552. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Arabic sentiment analysis: Lexicon-based and corpus-based",
"authors": [],
"year": null,
"venue": "Applied Electrical Engineering and Computing Technologies (AEECT), 2013 IEEE Jordan Conference on",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arabic sentiment analysis: Lexicon-based and corpus-based. In Applied Electrical Engineering and Computing Technologies (AEECT), 2013 IEEE Jordan Conference on, pages 1-6. IEEE.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automatic transliteration of romanized dialectal arabic",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Al-Badrashiny",
"suffix": ""
},
{
"first": "Ramy",
"middle": [],
"last": "Eskander",
"suffix": ""
},
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
},
{
"first": "Owen",
"middle": [],
"last": "Rambow",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "30--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Al-Badrashiny, Ramy Eskander, Nizar Habash, and Owen Rambow. 2014. Automatic transliteration of romanized dialectal arabic. In Proceedings of the Eighteenth Conference on Computa- tional Natural Language Learning, pages 30-38.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Arasentitweet: A corpus for arabic sentiment analysis of saudi tweets",
"authors": [
{
"first": "Nora",
"middle": [],
"last": "Al-Twairesh",
"suffix": ""
},
{
"first": "Hend",
"middle": [],
"last": "Al-Khalifa",
"suffix": ""
},
{
"first": "Abdulmalik",
"middle": [],
"last": "Al-Salman",
"suffix": ""
},
{
"first": "Yousef",
"middle": [],
"last": "Al-Ohali",
"suffix": ""
}
],
"year": 2017,
"venue": "Procedia Computer Science",
"volume": "117",
"issue": "",
"pages": "63--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nora Al-Twairesh, Hend Al-Khalifa, AbdulMalik Al-Salman, and Yousef Al-Ohali. 2017. Arasenti- tweet: A corpus for arabic sentiment analysis of saudi tweets. Procedia Computer Science, 117:63- 72.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Sentiment analysis of saudi dialect using deep learning techniques",
"authors": [
{
"first": "",
"middle": [],
"last": "Rahma M Alahmary",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Hmood",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [
"Z"
],
"last": "Al-Dossari",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Emam",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 International Conference on Electronics, Information, and Communication (ICEIC)",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rahma M Alahmary, Hmood Z Al-Dossari, and Ahmed Z Emam. 2019. Sentiment analysis of saudi dialect using deep learning techniques. In 2019 International Conference on Electronics, Informa- tion, and Communication (ICEIC), pages 1-6. IEEE.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Word embeddings for arabic sentiment analysis",
"authors": [
{
"first": "Lixin",
"middle": [],
"last": "Aziz Altowayan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Tao",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "3820--3825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A Aziz Altowayan and Lixin Tao. 2016. Word embeddings for arabic sentiment analysis. In Big Data (Big Data), 2016 IEEE International Conference on, pages 3820-3825. IEEE.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Labr: A large scale arabic book reviews dataset",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Aly",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Atiya",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "494--498",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Aly and Amir Atiya. 2013. Labr: A large scale arabic book reviews dataset. In Proceed- ings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 494-498.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Multilingual multiclass sentiment classification using convolutional neural networks",
"authors": [
{
"first": "Mohammed",
"middle": [],
"last": "Attia",
"suffix": ""
},
{
"first": "Younes",
"middle": [],
"last": "Samih",
"suffix": ""
},
{
"first": "Ali",
"middle": [],
"last": "El-Kahky",
"suffix": ""
},
{
"first": "Laura",
"middle": [],
"last": "Kallmeyer",
"suffix": ""
}
],
"year": 2018,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammed Attia, Younes Samih, Ali El-Kahky, and Laura Kallmeyer. 2018. Multilingual multi- class sentiment classification using convolutional neural networks. In LREC.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Arabizi language models for sentiment analysis",
"authors": [
{
"first": "Ga\u00e9tan",
"middle": [],
"last": "Baert",
"suffix": ""
},
{
"first": "Souhir",
"middle": [],
"last": "Gahbiche",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Gadek",
"suffix": ""
},
{
"first": "Alexandre",
"middle": [],
"last": "Pauchet",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 28th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "592--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ga\u00e9tan Baert, Souhir Gahbiche, Guillaume Gadek, and Alexandre Pauchet. 2020. Arabizi language models for sentiment analysis. In Proceedings of the 28th International Conference on Computational Linguistics, pages 592-603.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Arabert: Transformer-based model for arabic language understanding",
"authors": [
{
"first": "Fady",
"middle": [],
"last": "Baly",
"suffix": ""
},
{
"first": "Hazem",
"middle": [],
"last": "Hajj",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection",
"volume": "",
"issue": "",
"pages": "9--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fady Baly, Hazem Hajj, et al. 2020. Arabert: Transformer-based model for arabic language un- derstanding. In Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, pages 9-15.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sentiment analysis in arabic: A review of the literature",
"authors": [
{
"first": "Naaima",
"middle": [],
"last": "Boudad",
"suffix": ""
},
{
"first": "Rdouan",
"middle": [],
"last": "Faizi",
"suffix": ""
}
],
"year": 2017,
"venue": "Ain Shams Engineering Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naaima Boudad, Rdouan Faizi, Rachid Oulad Haj Thami, and Raddouane Chiheb. 2017. Sentiment analysis in arabic: A review of the literature. Ain Shams Engineering Journal.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Arabizi detection and conversion to arabic",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1306.6755"
]
},
"num": null,
"urls": [],
"raw_text": "Kareem Darwish. 2013. Arabizi detection and con- version to arabic. arXiv preprint arXiv:1306.6755.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Arabizi detection and conversion to arabic",
"authors": [
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)",
"volume": "",
"issue": "",
"pages": "217--224",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kareem Darwish. 2014. Arabizi detection and con- version to arabic. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Process- ing (ANLP), pages 217-224.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1810.04805"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language un- derstanding. arXiv preprint arXiv:1810.04805.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Sentiment analysis for arabizi text",
"authors": [
{
"first": "M",
"middle": [],
"last": "Rehab",
"suffix": ""
},
{
"first": "Mosab",
"middle": [],
"last": "Duwairi",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Alfaqeh",
"suffix": ""
},
{
"first": "Areen",
"middle": [],
"last": "Wardat",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Alrabadi",
"suffix": ""
}
],
"year": 2016,
"venue": "Information and Communication Systems (ICICS)",
"volume": "",
"issue": "",
"pages": "127--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rehab M Duwairi, Mosab Alfaqeh, Mohammad Wardat, and Areen Alrabadi. 2016. Sentiment analysis for arabizi text. In Information and Com- munication Systems (ICICS), 2016 7th International Conference on, pages 127-132. IEEE.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Nileulex: A phrase and word level sentiment lexicon for egyptian and modern standard arabic",
"authors": [
{
"first": "",
"middle": [],
"last": "Samhaa R El-Beltagy",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samhaa R El-Beltagy. 2016. Nileulex: A phrase and word level sentiment lexicon for egyptian and modern standard arabic. In LREC.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Building large arabic multi-domain resources for sentiment analysis",
"authors": [
{
"first": "Hady",
"middle": [],
"last": "Elsahar",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Samhaa R El-Beltagy",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Intelligent Text Processing and Computational Linguistics",
"volume": "",
"issue": "",
"pages": "23--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hady ElSahar and Samhaa R El-Beltagy. 2015. Building large arabic multi-domain resources for sentiment analysis. In International Conference on Intelligent Text Processing and Computational Lin- guistics, pages 23-34. Springer.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Arabic natural language processing: Challenges and solutions",
"authors": [
{
"first": "Ali",
"middle": [],
"last": "Farghaly",
"suffix": ""
},
{
"first": "Khaled",
"middle": [],
"last": "Shaalan",
"suffix": ""
}
],
"year": 2009,
"venue": "ACM Transactions on Asian Language Information Processing (TALIP)",
"volume": "8",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ali Farghaly and Khaled Shaalan. 2009. Arabic natural language processing: Challenges and so- lutions. ACM Transactions on Asian Language Infor- mation Processing (TALIP), 8(4):14.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Twitter benchmark dataset for arabic sentiment analysis",
"authors": [
{
"first": "Donia",
"middle": [],
"last": "Gamal",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Alfonse",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "El-Sayed",
"suffix": ""
},
{
"first": "Abdel-Badeeh M",
"middle": [],
"last": "El-Horbaty",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salem",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Modern Education and Computer Science",
"volume": "11",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Donia Gamal, Marco Alfonse, El-Sayed M El- Horbaty, and Abdel-Badeeh M Salem. 2019. Twit- ter benchmark dataset for arabic sentiment anal- ysis. International Journal of Modern Education and Computer Science, 11(1):33.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Arsel: A large scale arabic sentiment and emotion lexicon",
"authors": [
{
"first": "Badaroand",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Jundiand",
"middle": [],
"last": "Hussein",
"suffix": ""
},
{
"first": "Hajj",
"middle": [],
"last": "Hazem",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Badaroand Gilbert, Jundiand Hussein, Hajj Hazem, El-Hajj Wassim, and Habash Nizar. 2018. Arsel: A large scale arabic sentiment and emotion lexicon.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Arabizi sentiment analysis based on transliteration and automatic corpus annotation",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Ahsan",
"middle": [],
"last": "Adeel",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Fodil",
"middle": [],
"last": "Benali",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
"volume": "",
"issue": "",
"pages": "335--341",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Ahsan Adeel, Faical Azouaou, Fodil Benali, Ala-eddine Hachani, and Amir Hus- sain. 2018a. Arabizi sentiment analysis based on transliteration and automatic corpus annotation. In Proceedings of the 9th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Me- dia Analysis, pages 335-341.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Sentialg: Automated corpus annotation for algerian sentiment analysis",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Ahsan",
"middle": [],
"last": "Adeel",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Hussain",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1808.05079"
]
},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Ahsan Adeel, Faical Azouaou, and Amir Hussain. 2018b. Sentialg: Automated cor- pus annotation for algerian sentiment analysis. arXiv preprint arXiv:1808.05079.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "The role of transliteration in the process of arabizi translation/sentiment analysis",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Fodil",
"middle": [],
"last": "Benali",
"suffix": ""
},
{
"first": "Ala",
"middle": [
"Eddine"
],
"last": "Hachani",
"suffix": ""
},
{
"first": "Marcelo",
"middle": [],
"last": "Mendoza",
"suffix": ""
}
],
"year": 2020,
"venue": "Recent Advances in NLP: The Case of Arabic Language",
"volume": "",
"issue": "",
"pages": "101--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Faical Azouaou, Fodil Benali, Ala Eddine Hachani, and Marcelo Mendoza. 2020a. The role of transliteration in the process of arabizi translation/sentiment analysis. In Re- cent Advances in NLP: The Case of Arabic Language, pages 101-128. Springer.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Approche hybride pour la translit\u00e9ration de l'arabizi alg\u00e9rien : une\u00e9tude pr\u00e9liminaire",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Fodil",
"middle": [],
"last": "Benali",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Hachani",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Saadane",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference: 25e conf\u00e9rence sur le Traitement Automatique des Langues Naturelles (TALN)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Faical Azouaou, Fodil Benali, ala-eddine Hachani, and Houda Saadane. 2018c. Approche hybride pour la translit\u00e9ration de l'arabizi alg\u00e9rien : une\u00e9tude pr\u00e9liminaire. In Conference: 25e conf\u00e9rence sur le Traite- ment Automatique des Langues Naturelles (TALN), May 2018, Rennes, FranceAt: Rennes, France. https://www.researchgate.net/ publication/326354578_Approche_Hybride_ pour_la_transliteration_de_l%27arabizi_ algerien_une_etude_preliminaire.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Arautosenti: Automatic annotation and new tendencies for sentiment classification of arabic messages",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Chiclana",
"suffix": ""
}
],
"year": 2020,
"venue": "Social Network Analysis and Mining",
"volume": "10",
"issue": "1",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Faical Azouaou, and Francisco Chi- clana. 2020b. Arautosenti: Automatic annotation and new tendencies for sentiment classification of arabic messages. Social Network Analysis and Min- ing, 10(1):1-20.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Une approche fond\u00e9e sur les lexiques d'analyse de sentiments du dialecte alg\u00e9rien",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Sa\u00e2dane",
"suffix": ""
},
{
"first": "Nasredine",
"middle": [],
"last": "Semmar",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Faical Azouaou, Houda Sa\u00e2dane, and Nasredine Semmar. 2017. Une approche fond\u00e9e sur les lexiques d'analyse de sentiments du dialecte alg\u00e9rien.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "English vs arabic sentiment analysis: A survey presenting 100 work studies, resources and tools",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Valitutti",
"suffix": ""
}
],
"year": 2019,
"venue": "2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA)",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Faical Azouaou, and Alessandro Valitutti. 2019a. English vs arabic sentiment anal- ysis: A survey presenting 100 work studies, re- sources and tools. In 2019 IEEE/ACS 16th Interna- tional Conference on Computer Systems and Applica- tions (AICCSA), pages 1-8. IEEE.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Arabic natural language processing: An overview",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Houda",
"middle": [],
"last": "Sa\u00e2dane",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
},
{
"first": "Billel",
"middle": [],
"last": "Gueni",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Nouvel",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Imane Guellil, Houda Sa\u00e2dane, Faical Azouaou, Billel Gueni, and Damien Nouvel. 2019b. Arabic natural language processing: An overview. Jour- nal of King Saud University-Computer and Informa- tion Sciences.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Introduction to arabic natural language processing",
"authors": [
{
"first": "Nizar",
"middle": [
"Y"
],
"last": "Habash",
"suffix": ""
}
],
"year": 2010,
"venue": "Synthesis Lectures on Human Language Technologies",
"volume": "3",
"issue": "1",
"pages": "1--187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Y Habash. 2010. Introduction to arabic natu- ral language processing. Synthesis Lectures on Hu- man Language Technologies, 3(1):1-187.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Maghrebi arabic dialect processing: an overview",
"authors": [
{
"first": "Salima",
"middle": [],
"last": "Harrat",
"suffix": ""
},
{
"first": "Karima",
"middle": [],
"last": "Meftouh",
"suffix": ""
},
{
"first": "Kamel",
"middle": [],
"last": "Sma\u00efli",
"suffix": ""
}
],
"year": 2017,
"venue": "ICNLSSP 2017-International Conference on Natural Language, Signal and Speech Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salima Harrat, Karima Meftouh, and Kamel Sma\u00efli. 2017. Maghrebi arabic dialect processing: an overview. In ICNLSSP 2017-International Con- ference on Natural Language, Signal and Speech Pro- cessing.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Exploiting emoticons in sentiment analysis",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Hogenboom",
"suffix": ""
},
{
"first": "Daniella",
"middle": [],
"last": "Bal",
"suffix": ""
},
{
"first": "Flavius",
"middle": [],
"last": "Frasincar",
"suffix": ""
},
{
"first": "Malissa",
"middle": [],
"last": "Bal",
"suffix": ""
},
{
"first": "Franciska",
"middle": [
"de"
],
"last": "Jong",
"suffix": ""
},
{
"first": "Uzay",
"middle": [],
"last": "Kaymak",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 28th Annual ACM Symposium on Applied Computing",
"volume": "",
"issue": "",
"pages": "703--710",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Hogenboom, Daniella Bal, Flavius Frasincar, Malissa Bal, Franciska de Jong, and Uzay Kaymak. 2013. Exploiting emoticons in sen- timent analysis. In Proceedings of the 28th Annual ACM Symposium on Applied Computing, pages 703- 710. ACM.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A set of parameters for automatically annotating a sentiment arabic corpus",
"authors": [
{
"first": "Imane",
"middle": [],
"last": "Guellil",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
},
{
"first": "Faical",
"middle": [],
"last": "Azouaou",
"suffix": ""
}
],
"year": 2019,
"venue": "International Journal of Web Information Systems",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guellil Imane, Darwish Kareem, and Azouaou Faical. 2019. A set of parameters for automati- cally annotating a sentiment arabic corpus. Inter- national Journal of Web Information Systems.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Stemming arabic text",
"authors": [
{
"first": "Shereen",
"middle": [],
"last": "Khoja",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Garside",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shereen Khoja and Roger Garside. 1999. Stem- ming arabic text. Lancaster, UK, Computing Depart- ment, Lancaster University.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A proposed lexicon-based sentiment analysis approach for the vernacular algerian arabic",
"authors": [
{
"first": "M'hamed",
"middle": [],
"last": "Mataoui",
"suffix": ""
},
{
"first": "Omar",
"middle": [],
"last": "Zelmati",
"suffix": ""
},
{
"first": "Madiha",
"middle": [],
"last": "Boumechache",
"suffix": ""
}
],
"year": 2016,
"venue": "Research in Computing Science",
"volume": "110",
"issue": "",
"pages": "55--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M'hamed Mataoui, Omar Zelmati, and Madiha Boumechache. 2016. A proposed lexicon-based sentiment analysis approach for the vernacular algerian arabic. Research in Computing Science, 110:55-70.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "An arabizi-english social media statistical machine translation system",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Yassine",
"middle": [],
"last": "Benjira",
"suffix": ""
},
{
"first": "Abdessamad",
"middle": [],
"last": "Echihabi",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 11th Conference of the Association for Machine Translation in the Americas",
"volume": "",
"issue": "",
"pages": "329--341",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan May, Yassine Benjira, and Abdessamad Echihabi. 2014. An arabizi-english social media statistical machine translation system. In Proceed- ings of the 11th Conference of the Association for Ma- chine Translation in the Americas, pages 329-341.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Sentiment analysis of tunisian dialects: Linguistic ressources and experiments",
"authors": [
{
"first": "Salima",
"middle": [],
"last": "Medhaffar",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Yannick",
"middle": [],
"last": "Esteve",
"suffix": ""
},
{
"first": "Lamia",
"middle": [],
"last": "Hadrich-Belguith",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Third Arabic Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "55--61",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Salima Medhaffar, Fethi Bougares, Yannick Es- teve, and Lamia Hadrich-Belguith. 2017. Sen- timent analysis of tunisian dialects: Linguistic ressources and experiments. In Proceedings of the Third Arabic Natural Language Processing Workshop, pages 55-61.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Sentiment lexicons for arabic social media",
"authors": [
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2016,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif Mohammad, Mohammad Salameh, and Svet- lana Kiritchenko. 2016a. Sentiment lexicons for arabic social media. In LREC.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "How translation alters sentiment",
"authors": [
{
"first": "Saif",
"middle": [
"M"
],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2016,
"venue": "Journal of Artificial Intelligence Research",
"volume": "55",
"issue": "",
"pages": "95--130",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saif M Mohammad, Mohammad Salameh, and Svetlana Kiritchenko. 2016b. How translation al- ters sentiment. Journal of Artificial Intelligence Re- search, 55:95-130.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Subjectivity and sentiment analysis of modern standard arabic and arabic microblogs",
"authors": [
{
"first": "Ahmed",
"middle": [],
"last": "Mourad",
"suffix": ""
},
{
"first": "Kareem",
"middle": [],
"last": "Darwish",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 4th workshop on computational approaches to subjectivity, sentiment and social media analysis",
"volume": "",
"issue": "",
"pages": "55--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ahmed Mourad and Kareem Darwish. 2013. Sub- jectivity and sentiment analysis of modern stan- dard arabic and arabic microblogs. In Proceed- ings of the 4th workshop on computational approaches to subjectivity, sentiment and social media analysis, pages 55-64.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Astd: Arabic sentiment tweets dataset",
"authors": [
{
"first": "Mahmoud",
"middle": [],
"last": "Nabil",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Aly",
"suffix": ""
},
{
"first": "Amir",
"middle": [],
"last": "Atiya",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2515--2519",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mahmoud Nabil, Mohamed Aly, and Amir Atiya. 2015. Astd: Arabic sentiment tweets dataset. In Proceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 2515- 2519.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Twitter as a corpus for sentiment analysis and opinion mining",
"authors": [
{
"first": "Alexander",
"middle": [],
"last": "Pak",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Paroubek",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander Pak and Patrick Paroubek. 2010. Twit- ter as a corpus for sentiment analysis and opinion mining.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Deep contextualized word representations",
"authors": [
{
"first": "Matthew",
"middle": [
"E"
],
"last": "Peters",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Neumann",
"suffix": ""
},
{
"first": "Mohit",
"middle": [],
"last": "Iyyer",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.05365"
]
},
"num": null,
"urls": [],
"raw_text": "Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contex- tualized word representations. arXiv preprint arXiv:1802.05365.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Sana: Sentiment analysis on newspapers comments in algeria",
"authors": [
{
"first": "Hichem",
"middle": [],
"last": "Rahab",
"suffix": ""
},
{
"first": "Abdelhafid",
"middle": [],
"last": "Zitouni",
"suffix": ""
},
{
"first": "Mahieddine",
"middle": [],
"last": "Djoudi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hichem Rahab, Abdelhafid Zitouni, and Mahied- dine Djoudi. 2019. Sana: Sentiment analysis on newspapers comments in algeria. Journal of King Saud University-Computer and Information Sciences.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Sentiment after translation: A case-study on arabic social media posts",
"authors": [
{
"first": "Mohammad",
"middle": [],
"last": "Salameh",
"suffix": ""
},
{
"first": "Saif",
"middle": [],
"last": "Mohammad",
"suffix": ""
},
{
"first": "Svetlana",
"middle": [],
"last": "Kiritchenko",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 conference of the North American chapter of the association for computational linguistics: Human language technologies",
"volume": "",
"issue": "",
"pages": "767--777",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohammad Salameh, Saif Mohammad, and Svet- lana Kiritchenko. 2015. Sentiment after transla- tion: A case-study on arabic social media posts. In Proceedings of the 2015 conference of the North Amer- ican chapter of the association for computational lin- guistics: Human language technologies, pages 767- 777.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A simple but effective approach to improve arabizi-to-english statistical machine translation",
"authors": [
{
"first": "Marlies",
"middle": [],
"last": "van der Wees",
"suffix": ""
},
{
"first": "Arianna",
"middle": [],
"last": "Bisazza",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)",
"volume": "",
"issue": "",
"pages": "43--50",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marlies van der Wees, Arianna Bisazza, and Christof Monz. 2016. A simple but effective approach to improve arabizi-to-english statistical machine translation. In Proceedings of the 2nd Work- shop on Noisy User-generated Text (WNUT), pages 43-50.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Sentireview: Sentiment analysis based on text and emoticons",
"authors": [
{
"first": "Payal",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Dhatri",
"middle": [],
"last": "Pandya",
"suffix": ""
}
],
"year": 2017,
"venue": "Innovative Mechanisms for Industry Applications (ICIMIA",
"volume": "",
"issue": "",
"pages": "467--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Payal Yadav and Dhatri Pandya. 2017. Sentire- view: Sentiment analysis based on text and emoti- cons. In Innovative Mechanisms for Industry Appli- cations (ICIMIA), 2017 International Conference on, pages 467-472. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Results of ASTD/QCRI/ArTwitter",
"num": null
},
"TABREF0": {
"content": "<table/>",
"text": "Sentiment analysis of Arabic/ Arabizi messages Input: Eng lex : English lexicon, ArTest corp [] : List of Arabic sentiment corpora, ArabiziTest corp [] : List of Arabizi sentiment corpora, ArabiziTrTest corp [] : List of Arabizi transliterated sentiment corpora, Facebook Key : A key for accessing RestFB API Output:",
"num": null,
"type_str": "table",
"html": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"2\">: Deep learning models architecture</td></tr><tr><td>80.99). However, the results obtained using</td><td>classifier (F1= 75.23). For deep learning</td></tr><tr><td>Word2vec model combined with shallow</td><td>classification, the combination of FastText,</td></tr><tr><td>classifiers outperform those obtained using</td><td>CBOW and CNN gives the best results for</td></tr><tr><td>FastText model combined with deep learning</td><td>the corpus Senti_Alg_test_trauto (F1-score=</td></tr><tr><td>classifiers. It can also be seen from this</td><td>69.78</td></tr><tr><td>Table that CBOW model results generally</td><td/></tr><tr><td>outperform the results returned by using the</td><td/></tr><tr><td>SG model. More details are presented (in the</td><td/></tr><tr><td>Appendices, section 7) in the Table 1)</td><td/></tr><tr><td>Results on the Arabizi side of Senti Alg</td><td/></tr><tr><td>(Senti Alg test Arabizi) obtained on the</td><td/></tr><tr><td>Arabizi side of Senti Alg, that we named</td><td/></tr><tr><td>Senti_Alg_test_Arabizi. However, as our</td><td/></tr><tr><td>language model and training corpus is in Ara-</td><td/></tr><tr><td>bic script, the corpus Senti_Alg_test_Arabizi</td><td/></tr><tr><td>was firstly transliterated. For showing the</td><td/></tr><tr><td>efficiency of our transliteration system, we</td><td/></tr><tr><td>transliterate this corpus in both ways, au-</td><td/></tr><tr><td>tomatically (for obtaining Senti_Alg_test_</td><td/></tr><tr><td>trauto) and manually (for obtaining Senti_</td><td/></tr><tr><td>Alg_test_trmanu). The best results for the</td><td/></tr><tr><td>corpus Senti_Alg_test_trauto were obtained</td><td/></tr><tr><td>using SG of Word2vec combined with SGD</td><td/></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
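The passage above pairs Word2vec/FastText embeddings (CBOW or SG) with shallow classifiers such as SGD, or with deep classifiers such as CNN/LSTM. The following is a minimal, hedged sketch of that embedding-plus-SGD pipeline, not the authors' released code: the use of gensim and scikit-learn, the toy tokenised messages, and every hyper-parameter value are assumptions made purely for illustration.

```python
# Illustrative sketch only: Word2vec (CBOW or SG) message embeddings averaged
# per message, then a shallow SGD classifier, as discussed in the table above.
# Corpus contents and hyper-parameters are placeholders, not the paper's.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import f1_score

# Toy tokenised messages with sentiment labels (1 = positive, 0 = negative).
train_msgs = [["jamil", "bezaf"], ["makanch", "mlih"], ["habit", "film"], ["kareh", "had"]]
train_labels = [1, 0, 1, 0]

# sg=0 -> CBOW, sg=1 -> Skip-Gram; vector_size and window are illustrative values.
w2v = Word2Vec(sentences=train_msgs, vector_size=50, window=3, min_count=1, sg=0)

def message_vector(tokens, model):
    """Average the embeddings of the in-vocabulary tokens of one message."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X_train = np.vstack([message_vector(m, w2v) for m in train_msgs])
clf = SGDClassifier(loss="hinge", random_state=0).fit(X_train, train_labels)

# Evaluate on (toy) held-out messages with macro F1, the metric reported in the tables.
test_msgs = [["mlih", "bezaf"], ["kareh", "film"]]
test_labels = [1, 0]
X_test = np.vstack([message_vector(m, w2v) for m in test_msgs])
print(f1_score(test_labels, clf.predict(X_test), average="macro"))
```

Setting sg=1 would select the Skip-Gram variant, and replacing SGDClassifier with an MLP, CNN or LSTM over the same embeddings would correspond to the deep-learning rows of the reported tables.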
"TABREF3": {
"content": "<table><tr><td>For</td></tr></table>",
"text": "Results",
"num": null,
"type_str": "table",
"html": null
},
"TABREF4": {
"content": "<table><tr><td>deep learning classification, the combination</td></tr><tr><td>of FastText, CBOW and LSTM gives the best</td></tr><tr><td>results (F1-score= 63.29). However, the results</td></tr><tr><td>obtained using Word2vec model combined</td></tr><tr><td>with shallow classifiers outperform those ob-</td></tr><tr><td>tained using FastText model combined with</td></tr><tr><td>deep learning classifiers. The CBOW model</td></tr><tr><td>results generally outperform the results re-</td></tr><tr><td>turned by using the SG model. More details</td></tr><tr><td>are presented (in the Appendices, section 7) in</td></tr><tr><td>the</td></tr></table>",
"text": "Synthesis of the best obtained results",
"num": null,
"type_str": "table",
"html": null
},
"TABREF5": {
"content": "<table><tr><td>into two parts: the first one</td></tr><tr><td>illustrates the sentiment classification results</td></tr><tr><td>obtained on TSAC test and the second one,</td></tr><tr><td>the results obtained on TSAC Test Tr. For the</td></tr><tr><td>experiments done on both corpora, it can be</td></tr><tr><td>seen that the best results were obtained using</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF6": {
"content": "<table><tr><td>the</td></tr></table>",
"text": "",
"num": null,
"type_str": "table",
"html": null
},
"TABREF7": {
"content": "<table><tr><td>matically, F1 was up to 73.69 (for TSAC Test)</td></tr><tr><td>and up to 75.24 (for TSAC Test TR). By us-</td></tr><tr><td>ing the manually reviewed corpus, F1 is up</td></tr><tr><td>to 75.61 (for TSAC Test) and up to 80.69</td></tr><tr><td>(for TSAC Tr</td></tr></table>",
"text": "Synthesis of the best-obtained results on the manually reviewed corpus",
"num": null,
"type_str": "table",
"html": null
},
"TABREF8": {
"content": "<table><tr><td colspan=\"5\">or ELMO(Peters ML Algo Senti Alg test trauto Senti Alg test trmanu et al., 2018). Type ML Algo Arabic P R F1 GNB 93.50 74.80 83.11 LR 82.09 88.00 84.94 CBOW RF 85.07 75.20 79.83 SGD 85.28 90.40 87.77 LSVC 82.71 88.00 85.27 GNB 90.34 74.80 81.84 LR 85.10 86.80 85.94 SG RF 85.59 76.00 80.51 SGD 84.62 88.00 86.27 LSVC 85.32 86.00 85.66 CNN 80.03 80.00 79.99 CBOW MLP 81.04 81.00 80.99 LSTM 79.65 79.60 79.59 Bi-LSTM 79.92 79.60 79.54 CNN 78.44 78.20 78.15 MLP 79.63 79.60 79.59 SG LSTM 79.04 79.00 78.99 Bi-LSTM 76.84 76.80 76.79 Results on the Arabic side of Senti Alg (Senti Alg test Arabic) Model Word2vec FastText Figure 1: Model Type P R F1 P R F1 GNB 82.18 57.20 67.45 85.47 58.80 69.67 LR 69.81 74.00 71.84 75.00 76.80 75.89 CBOW RF 73.06 64.00 68.23 72.20 64.40 Model Type ML Algo SANA Alg P R F1 GNB 81.17 80.83 81.00 LR 76.23 70.83 73.43 CBOW RF 71.54 77.50 74.40 SGD 80.28 72.92 76.42 LSVC 74.44 69.17 71.71 Word2vec GNB 62.37 96.67 75.82 LR 79.09 72.50 75.65 SG RF 72.05 76.25 74.09 SGD 82.74 67.92 74.60 LSVC 78.80 71.25 74.84 CNN 62.26 62.30 62.28 CBOW MLP 59.65 60.00 59.64 LSTM 63.42 63.22 63.29 Bi-LSTM 62.08 62.07 60.29 FastText CNN 60.76 61.15 60.22 MLP 57.57 57.93 57.62 SG LSTM 60.37 60.23 60.29 Bi-LSTM 59.28 59.77 58.87 Figure 3: Results on SANA Alg Model Type ML Algo TSAC Test TSAC Test Tr P R F1 P R F1 GNB Type ML Algo TSAC Test TSAC Test Tr P R F1 P R F1 GNB 78.65 Model Type ML Algo Arabic P R F1 GNB 76.11 85.61 80.58 LR 71.93 75.31 73.58 CBOW RF 69.79 68.00 69.28 80.31 Model SGD 77.65 73.45 75.49 68.08 SGD 69.10 79.60 73.98 73.71 74.00 LSVC 70.90 74.66 72.73 73.85 LSVC 70.04 72.00 71.01 75.70 76.00 75.85 Word2vec</td></tr><tr><td colspan=\"2\">Word2vec</td><td/><td/></tr><tr><td/><td/><td/><td>GNB</td><td>66.33 92.59 77.29</td></tr><tr><td/><td/><td>GNB</td><td colspan=\"2\">79.50 63.60 70.67 85.41 63.20 LR 71.67 77.64 74.54</td><td>72.64</td></tr><tr><td/><td>SG CBOW</td><td>LR RF SG SGD LSVC CNN MLP CBOW</td><td colspan=\"2\">68.40 73.60 70.91 73.08 76.00 72.29 66.80 69.44 76.47 67.60 RF 71.09 67.12 69.05 69.49 82.00 75.23 75.10 77.20 SGD 71.12 82.58 76.42 69.08 72.40 70.70 72.76 74.80 LSVC 71.40 77.08 74.13 69.85 69.80 69.78 73.65 73.60 CNN 64.24 64.11 64.03 67.64 67.60 67.58 71.81 71.80 MLP 62.65 62.65 62.65</td><td>74.51 71.76 76.13 73.77 73.58 71.80</td></tr><tr><td/><td/><td>LSTM</td><td colspan=\"2\">68.69 68.60 68.56 70.01 70.00 LSTM 61.40 61.09 60.81</td><td>70.00</td></tr><tr><td/><td/><td colspan=\"3\">Bi-LSTM 68.95 68.80 68.74 71.93 71.40 Bi-LSTM 62.97 62.88 62.81</td><td>71.22</td></tr><tr><td>FastText</td><td>FastText</td><td/><td/></tr><tr><td/><td>SG</td><td colspan=\"3\">CNN MLP LSTM Bi-LSTM 68.84 68.80 68.78 70.60 70.60 68.52 68.20 68.06 73.29 72.60 68.25 68.20 68.18 71.37 71.20 CNN 63.30 63.27 63.26 69.42 69.40 69.39 72.60 72.60 MLP 60.83 60.81 60.78 SG LSTM 60.58 60.43 60.30 Bi-LSTM 60.48 60.41 60.34</td><td>72.40 71.14 72.60 70.60</td></tr><tr><td colspan=\"5\">Figure 2: Results on the Arabizi side of Senti Alg (Senti Alg test Arabizi) after translitera-</td></tr><tr><td>tion</td><td/><td/><td/></tr></table>",
"text": "36.71 50.38 81.25 65.76 72.69 LR 61.38 88.82 72.60 70.06 77.35 73.53 CBOW RF 58.75 78.82 67.32 70.01 64.41 67.10 SGD 61.81 89.29 73.05 71.72 77.88 74.68 LSVC 61.38 87.88 72.28 70.27 76.76 73.38 Word2vec GNB 60.92 89.76 72.58 71.51 76.76 74.04 LR 75.45 41.76 53.77 72.37 76.41 74.33 SG RF 59.15 77.59 67.12 67.28 60.35 63.63 SGD 62.02 90.76 73.69 71.44 79.47 75.24 LSVC 75.09 38.82 51.18 72.48 75.76 74.09 CNN 59.69 56.62 52.88 63.84 63.32 62.98 CBOW MLP 58.86 55.97 52.06 62.65 62.50 62.39 LSTM 59.12 55.79 51.36 63.01 62.21 61.61 Bi-LSTM 57.36 55.29 51.93 62.08 61.91 61.78 FastText CNN 57.70 55.79 52.88 61.45 61.38 61.33 MLP 55.87 54.21 50.71 62.21 62.15 62.10 SG LSTM 56.33 42.4 50.11 61.09 60.85 60.65 Bi-LSTM 57.25 54.88 50.87 61.36 61.15 60.96 Figure 4: Results on TSAC Test by using Senti Alg as training 32.29 45.79 82.39 57.24 67.55 LR 65.08 89.47 75.35 84.76 84.76 84.76 CBOW RF 62.51 83.76 71.59 87.13 78.82 82.77 SGD 64.34 92.12 75.76 82.55 89.06 85.68 LSVC 85.70 42.65 56.95 84.65 87.29 85.95 Word2vec GNB 76.39 31.59 44.69 82.70 56.53 67.16 LR 65.66 89.88 75.89 87.26 87.41 87.33 SG RF 83.45 33.82 48.14 87.53 78.88 82.98 SGD 65.46 89.65 75.67 83.76 91.65 87.53 LSVC 88.10 43.53 58.27 86.64 88.12 87.37 CNN 75.53 66.50 63.25 89.94 89.65 89.63 CBOW MLP 75.29 67.21 64.36 90.81 90.76 90.76 LSTM 75.71 67.41 64.55 91.52 91.41 91.41 Bi-LSTM 77.53 67.44 64.16 91.66 91.21 91.18 FastText CNN 75.85 66.85 63.69 91.58 91.47 91.46 MLP 76.13 67.50 64.57 91.65 91.59 91.59 SG LSTM 75.78 66.91 63.80 90.85 90.71 90.70 Bi-LSTM 77.10 65.59 61.50 91.39 91.03 91.01Figure 5: Results on TSAC Test using TSAC train Tr",
"num": null,
"type_str": "table",
"html": null
}
}
}
}