{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:33:36.007127Z" }, "title": "OdiEnCorp 2.0: Odia-English Parallel Corpus for Machine Translation", "authors": [ { "first": "Shantipriya", "middle": [], "last": "Parida", "suffix": "", "affiliation": { "laboratory": "", "institution": "Idiap Research Institute", "location": { "settlement": "Martigny", "country": "Switzerland" } }, "email": "shantipriya.parida@idiap.ch" }, { "first": "Satya", "middle": [], "last": "Ranjan Dash", "suffix": "", "affiliation": { "laboratory": "", "institution": "KIIT University", "location": { "settlement": "Bhubaneswar", "country": "India" } }, "email": "" }, { "first": "Ond\u0159ej", "middle": [], "last": "Bojar", "suffix": "", "affiliation": { "laboratory": "", "institution": "Charles University", "location": { "settlement": "Prague", "country": "Czech Republic" } }, "email": "bojar@ufal.mff.cuni.cz" }, { "first": "Petr", "middle": [], "last": "Motl\u00ed\u010dek", "suffix": "", "affiliation": { "laboratory": "", "institution": "Idiap Research Institute", "location": { "settlement": "Martigny", "country": "Switzerland" } }, "email": "petr.motlicek@idiap.ch" }, { "first": "Priyanka", "middle": [], "last": "Pattnaik", "suffix": "", "affiliation": { "laboratory": "", "institution": "KIIT University", "location": { "settlement": "Bhubaneswar", "country": "India" } }, "email": "priyankapattanaik2013@gmail.com" }, { "first": "Debasish", "middle": [], "last": "Kumar Mallick", "suffix": "", "affiliation": { "laboratory": "", "institution": "KIIT University", "location": { "settlement": "Bhubaneswar", "country": "India" } }, "email": "mdebasishkumar@gmail.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "The preparation of parallel corpora is a challenging task, particularly for languages that suffer from under-representation in the digital world. 
In a multilingual country like India, the need for such parallel corpora is pressing for several low-resource languages. In this work, we provide an extended English-Odia parallel corpus, OdiEnCorp 2.0, aiming particularly at Neural Machine Translation (NMT) systems which will help translate English\u2194Odia. OdiEnCorp 2.0 includes existing English-Odia corpora and we extended the collection by several other methods of data acquisition: parallel data scraping from many websites, including Odia Wikipedia, but also optical character recognition (OCR) to extract parallel data from scanned images. Our OCR-based data extraction approach for building a parallel corpus is suitable for other low-resource languages that lack online content. The resulting OdiEnCorp 2.0 contains 98,302 sentences and 1.69 million English and 1.47 million Odia tokens. To the best of our knowledge, OdiEnCorp 2.0 is the largest Odia-English parallel corpus covering different domains and available freely for non-commercial and research purposes.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "The preparation of parallel corpora is a challenging task, particularly for languages that suffer from under-representation in the digital world. In a multilingual country like India, the need for such parallel corpora is pressing for several low-resource languages. In this work, we provide an extended English-Odia parallel corpus, OdiEnCorp 2.0, aiming particularly at Neural Machine Translation (NMT) systems which will help translate English\u2194Odia. OdiEnCorp 2.0 includes existing English-Odia corpora and we extended the collection by several other methods of data acquisition: parallel data scraping from many websites, including Odia Wikipedia, but also optical character recognition (OCR) to extract parallel data from scanned images.
Our OCR-based data extraction approach for building a parallel corpus is suitable for other low-resource languages that lack online content. The resulting OdiEnCorp 2.0 contains 98,302 sentences and 1.69 million English and 1.47 million Odia tokens. To the best of our knowledge, OdiEnCorp 2.0 is the largest Odia-English parallel corpus covering different domains and available freely for non-commercial and research purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Odia (also called Oriya) is an Indian language belonging to the Indo-Aryan branch of the Indo-European language family. It is the predominant language of the Indian state of Odisha. Odia is one of the 22 official languages and 14 regional languages of India. Odia is the sixth Indian language to be designated a Classical Language in India based on having a long literary history and not having borrowed extensively from other languages. 1 Odia is written in the Odia script, which is a Brahmic script. The origins of Odia can be traced to the 10th century. In the 16th and 17th centuries, as in the case of other Indian languages, Odia too underwent changes due to the influence of Sanskrit. 2 Odia is today spoken by some 50 million people. 3 It is heavily influenced by the Dravidian languages as well as by Arabic, Persian, and English. Odia's inflectional morphology is rich, with a three-tier tense system. The prototypical word order is subject-object-verb (SOV). In today's digital world, there has long been a demand for English\u2194Odia machine translation systems, a demand which could not be met due to the lack of Odia resources, particularly a parallel corpus.
Parallel corpora are of great importance in language studies, teaching, and many natural language processing applications such as machine translation, cross-language information retrieval, word sense disambiguation, bilingual terminology extraction, as well as the induction of tools across languages. 1 https://infogalactic.com/info/Odia_language 2 https://www.indianmirror.com/languages/odiya-language.html 3 https://www.britannica.com/topic/Oriya-language The Odia language is not available in many machine translation systems. Several researchers explored these goals, developing Odia resources and prototype machine translation systems, but these are neither available online nor benefiting users (Das et al., 2018; Balabantaray and Sahoo, 2013; Rautaray et al., 2019). We have analysed the available English-Odia parallel corpora (OdiEnCorp 1.0, PMIndia) and their performance (BLEU score) for machine translation (Parida et al., 2020; Haddow and Kirefu, 2020). OdiEnCorp 1.0 contains Odia-English parallel and monolingual data. The statistics of OdiEnCorp 1.0 are shown in Table 1. In OdiEnCorp 1.0, the parallel sentences are mostly derived from the English-Odia parallel Bible, and the size of the parallel corpus (29K) is not sufficient for neural machine translation (NMT), as documented by the baseline results (Parida et al., 2020) as well as attempts at improving them using NMT techniques such as transfer learning (Kocmi and Bojar, 2019). The recently released PMIndia corpus (Haddow and Kirefu, 2020) contains 38K English-Odia parallel sentences, but it is mostly collected from the Prime Minister of India's official portal 4 and contains text about government policies in 13 official languages of India. 
These points motivated us to build OdiEnCorp 2.0 with more data, covering various domains suitable for various tasks of language processing, but particularly for building an English\u2194Odia machine translation system which will be useful for the research community as well as general users for non-commercial purposes. ", "cite_spans": [ { "start": 732, "end": 733, "text": "3", "ref_id": null }, { "start": 1875, "end": 1893, "text": "(Das et al., 2018;", "ref_id": "BIBREF6" }, { "start": 1894, "end": 1923, "text": "Balabantaray and Sahoo, 2013;", "ref_id": "BIBREF5" }, { "start": 1924, "end": 1946, "text": "Rautaray et al., 2019)", "ref_id": "BIBREF18" }, { "start": 2094, "end": 2115, "text": "(Parida et al., 2020;", "ref_id": "BIBREF15" }, { "start": 2116, "end": 2140, "text": "Haddow and Kirefu, 2020)", "ref_id": "BIBREF9" }, { "start": 2497, "end": 2518, "text": "(Parida et al., 2020)", "ref_id": "BIBREF15" }, { "start": 2604, "end": 2627, "text": "(Kocmi and Bojar, 2019)", "ref_id": "BIBREF13" }, { "start": 2667, "end": 2692, "text": "(Haddow and Kirefu, 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 2255, "end": 2262, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "As there is a very limited number of online Odia resources available, we have explored several possible ways to collect Odia-English parallel data. Although these methods need a considerable amount of manual processing, we opted for them to achieve the largest possible data size. In sum, we used these sources:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2." }, { "text": "\u2022 Data extracted using OCR,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2." }, { "text": "\u2022 Data extracted from Odia Wikipedia,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2."
}, { "text": "\u2022 Data extracted from other online resources,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2." }, { "text": "\u2022 Data reused from existing corpora.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Sources", "sec_num": "2." }, { "text": "The overall process of building OdiEnCorp 2.0 is shown in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 53, "end": 59, "text": "Figure", "ref_id": null } ], "eq_spans": [], "section": "Data Sources", "sec_num": "2." }, { "text": "Many books are translated into more than one language and could serve as a reliable source of parallel sentences, but they are unfortunately not digitized (Bakliwal et al., 2016; Premjith et al., 2016) . OCR technology has improved substantially, which has allowed for large-scale digitization of textual resources such as books, old newspapers, and ancient hand-written documents (Dhondt et al., 2017) . That said, it should be kept in mind that there are often mistakes in the scanned texts, as the OCR system occasionally misrecognizes letters or falsely identifies text regions, leading to misspellings and linguistic errors in the output text (Afli et al., 2016) . The Odia language has a rich literary heritage and many books are available in printed form. We explored books having either English and Odia parallel text together or books available in both versions (English and Odia). We used study, translation, grammar, literature, and motivational books for this purpose, obtaining the source images either from the web or by scanning the books ourselves. We start with the image containing the Odia-language text represented in the RGB color space. For the Odia text recognition, we use the \"Tesseract OCR engine\" (Smith, 2007) with several improvements in the pre-processing phase. First, we move away from the traditional method, which converts RGB to grayscale by taking the simple average of the three channels. 
We convert the RGB image into a grayscale image by applying the luminosity method, which takes a weighted average of the three channels to account for human perception (Joshi, 2019) :", "cite_spans": [ { "start": 165, "end": 188, "text": "(Bakliwal et al., 2016;", "ref_id": "BIBREF4" }, { "start": 189, "end": 211, "text": "Premjith et al., 2016)", "ref_id": "BIBREF17" }, { "start": 387, "end": 408, "text": "(Dhondt et al., 2017)", "ref_id": "BIBREF7" }, { "start": 650, "end": 669, "text": "(Afli et al., 2016)", "ref_id": "BIBREF0" }, { "start": 1232, "end": 1245, "text": "(Smith, 2007)", "ref_id": "BIBREF20" }, { "start": 1606, "end": 1619, "text": "(Joshi, 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "Grayscale = 0.299R + 0.587G + 0.114B (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "where R is the amount of red, G green and B blue color in a pixel.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "To change the image further to black and white only, \"Tesseract\" uses the traditional binarization algorithm called \"Otsu\". We instead use the \"Niblack and Sauvola threshold algorithm\", which we found to give better results. The advantage of the Niblack algorithm is that it slides a rectangular window through the image (Smith, 2007) . The center pixel threshold T is derived from the mean m and standard deviation s inside the window.", "cite_spans": [ { "start": 320, "end": 333, "text": "(Smith, 2007)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "OCR-Based Text Extraction", "sec_num": "2.1."
}, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T = m + k \u2022 s,", "eq_num": "(2)" } ], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "where k is a constant set to 0.8. \"Niblack\" can create noise in some areas of the image, so we further improve it by including the \"Sauvola\" algorithm. Thus the modified formula is:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T = m \u2022 (1 \u2212 k \u2022 (1 \u2212 s/R)),", "eq_num": "(3)" } ], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "where R is the dynamic range of the standard deviation, a constant set to 128. This formula does not work well for all document images, so the normalized formula we implemented is given in Equation (4). However, some black pixels vanish during these processes, which may lead to erroneous character recognition, so we use Dilation (Gaikwad and Mahender, 2016) to re-join areas which got accidentally disconnected; see Figure 3 for an illustration.", "cite_spans": [ { "start": 305, "end": 333, "text": "(Gaikwad and Mahender, 2016)", "ref_id": "BIBREF8" } ], "ref_spans": [ { "start": 389, "end": 397, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "T = m \u2212 k \u2022 (1 \u2212 s/R) \u2022 (m \u2212 M),", "eq_num": "(4)" } ], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "Because dilation sometimes produces too many black pixels, we further apply \"Erosion\" (Alginahi, 2010) as illustrated in Figure 4 . 
Finally, Figure 2 illustrates a sample of the scanned image containing parallel Odia-English data and the extracted text.", "cite_spans": [ { "start": 86, "end": 102, "text": "(Alginahi, 2010)", "ref_id": "BIBREF1" } ], "ref_spans": [ { "start": 121, "end": 129, "text": "Figure 4", "ref_id": "FIGREF4" }, { "start": 141, "end": 149, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "OCR-Based Text Extraction", "sec_num": "2.1." }, { "text": "The Odia Wikipedia started in 2002 and serves as a good source for Odia-English parallel data. The following steps were performed to obtain parallel sentences from Odia Wikipedia, with more details provided in the sections below: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Odia Wikipedia", "sec_num": "2.2." }, { "text": "Finding potential parallel texts in a collection of web documents is a challenging task, see e.g. (Antonova and Misyurev, 2011; K\u00fadela et al., 2017; Schwenk, 2018; Artetxe and Schwenk, 2019). We explored the web and prepared a list of websites from which Odia-English parallel data could potentially be collected. The websites were then crawled with a simple Python script. We found that the government portal of each district of Odisha (e.g. Nayagarh district 5 ) contains general information about the district in both English and Odia versions.", "cite_spans": [ { "start": 98, "end": 127, "text": "(Antonova and Misyurev, 2011;", "ref_id": "BIBREF2" }, { "start": 128, "end": 148, "text": "K\u00fadela et al., 2017;", "ref_id": "BIBREF14" }, { "start": 149, "end": 163, "text": "Schwenk, 2018;", "ref_id": "BIBREF19" }, { "start": 164, "end": 190, "text": "Artetxe and Schwenk, 2019)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Additional Online Resources", "sec_num": "2.3." }, { "text": "Analyzing the extracted text, we found a few cases where the English text was repeated on both sides of the website. 
We have aligned the extracted text manually to obtain the parallel text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Additional Online Resources", "sec_num": "2.3." }, { "text": "We also extracted parallel data from the Odia digital library \"Odia Virtual Academy\", 6 an Odisha government-initiated portal that stores the treasures of Odia language and literature for seamless access by Odia people across the globe. The web page provides trilingual books (a tribal dictionary 7 containing common words and their translations in English and Odia), and we extracted the English-Odia sentence pairs from it.", "cite_spans": [ { "start": 86, "end": 87, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Additional Online Resources", "sec_num": "2.3." }, { "text": "Finally, we included parallel data from OdiEnCorp 1.0 and PMIndia (Parida et al., 2020; Haddow and Kirefu, 2020) . Both corpora contain pre-processed English-Odia parallel sentences. The statistics of these corpora are available in Table 2 .", "cite_spans": [ { "start": 66, "end": 87, "text": "(Parida et al., 2020;", "ref_id": "BIBREF15" }, { "start": 88, "end": 112, "text": "Haddow and Kirefu, 2020)", "ref_id": "BIBREF9" } ], "ref_spans": [ { "start": 232, "end": 239, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Reusing Available Corpora", "sec_num": "2.4." }, { "text": "The data collected from different sources were processed to achieve a unified format.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Processing", "sec_num": "3." }, { "text": "When utilizing online resources, we used a Python script to scrape plain text from HTML pages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extraction of Plain Text", "sec_num": "3.1." }, { "text": "After analyzing the raw data extracted using the OCR-based approach, we found a few errors such as unnecessary characters and symbols, missing words, etc. 
One of the reasons for poor OCR performance was the fact that some images were taken using mobile phones. In later processing, we always used a proper scanner. We opted for manual correction of such entries by volunteers whose mother tongue is Odia. Although this task is time-consuming and tedious, the result should be of considerably better quality and much more suitable for machine translation and other NLP tasks. Four volunteers worked part-time (2-3 hours daily) for four months on scanning books, extracting data from the scanned images using OCR techniques, collecting data from online as well as offline sources, and post-editing all the data collected from different sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Manual Processing", "sec_num": "3.2." }, { "text": "All sources that come in paragraphs (e.g. Wikipedia articles or books) had to be segmented into sentences. We considered the full stop (.) as the end-of-sentence marker for English and the Odia Danda or Purnaviram (|) as the end-of-sentence marker for Odia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Segmentation", "sec_num": "3.3." }, { "text": "For some sources, the alignment between English and Odia sentences was straightforward. Sources like Odia Wikipedia posed a bigger challenge, because the texts in the two languages are often created or edited independently of each other. To achieve the best possible parallel corpus, we relied on manual sentence alignment. In this process, we had to truncate or remove a few sentences in either of the languages in order to reach exactly 1-1 aligned English-Odia sentence pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sentence Alignment", "sec_num": "3.4." }, { "text": "The resulting corpus OdiEnCorp 2.0 covers a wide variety of domains, especially compared to similar corpora. 
Our corpus covers the Bible, literature, government policies, daily usage, learning materials, and the general domain (Wikipedia). ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Coverage", "sec_num": "3.5." }, { "text": "The composition of OdiEnCorp 2.0 with statistics for individual sources is provided in Table 2. The release designates which parts of the corpus should be used for training, which for development, and which for final testing. This division of OdiEnCorp 2.0 respects the dev and test sets of OdiEnCorp 1.0, so that models trained on the v.2.0 training set can be directly tested on the older v.1.0 dev and test sets.", "cite_spans": [], "ref_spans": [ { "start": 87, "end": 94, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Final Data Sizes", "sec_num": "4." }, { "text": "For future reference, we provide a baseline experiment with neural machine translation using OdiEnCorp 2.0 data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baseline Neural Machine Translation", "sec_num": "5." }, { "text": "For the purpose of NMT training, we removed duplicated sentence pairs and shuffled the segments. The training, dev and test set sizes after this processing are shown in Table 3.", "cite_spans": [], "ref_spans": [ { "start": 168, "end": 175, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Dataset Description", "sec_num": "5.1." }, { "text": "We used the Transformer model (Vaswani et al., 2018) as implemented in OpenNMT-py (Klein et al., 2017). 8 Subword units were constructed using the word pieces algorithm (Johnson et al., 2017). Tokenization is handled automatically as part of the pre-processing pipeline of word pieces. We generated a vocabulary of 32k sub-word types jointly for both the source and target languages, sharing it between the encoder and decoder. 
To train the model, we used a single GPU and followed the standard \"Noam\" learning rate decay, 9 see (Vaswani et al., 2017) or (Popel and Bojar, 2018) for more details. Our starting learning rate was 0.2 and we used 8000 warm-up steps. The learning curves are shown in ", "cite_spans": [ { "start": 30, "end": 52, "text": "(Vaswani et al., 2018)", "ref_id": "BIBREF22" }, { "start": 82, "end": 102, "text": "(Klein et al., 2017)", "ref_id": "BIBREF12" }, { "start": 170, "end": 192, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF10" }, { "start": 533, "end": 555, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF21" }, { "start": 559, "end": 582, "text": "(Popel and Bojar, 2018)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Neural Machine Translation Setup", "sec_num": "5.2." }, { "text": "We use sacreBLEU 10,11 for estimating translation quality. Based on the Dev 2.0 best score, we select the model at iteration 40k for EN\u2192OD and at 30k for OD\u2192EN to obtain the final test set scores. Table 4 reports the performance on the Dev and Test sets of OdiEnCorp 2.0. Table 5 uses the Dev and Test sets belonging to OdiEnCorp 1.0. The results in Table 5 thus allow us to observe the gains compared to the scores reported in Parida et al. (2020) .", "cite_spans": [ { "start": 428, "end": 448, "text": "Parida et al. (2020)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 197, "end": 204, "text": "Table 4", "ref_id": "TABREF6" }, { "start": 272, "end": 279, "text": "Table 5", "ref_id": null }, { "start": 350, "end": 357, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "Results", "sec_num": "5.3." }, { "text": "OdiEnCorp 2.0 is available for research and noncommercial use under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, CC-BY-NC-SA 12 at:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability", "sec_num": "6." 
}, { "text": "http://hdl.handle.net/11234/1-3211", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Availability", "sec_num": "6." }, { "text": "We presented OdiEnCorp 2.0, an updated version of the Odia-English parallel corpus aimed at linguistic research and applications in natural language processing, primarily machine translation. The corpus will be used for low-resource machine translation shared tasks. The first such task is the WAT 2020 13 Indic shared task on Odia\u2194English machine translation. Our plans for the future include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7." }, { "text": "\u2022 Extending OdiEnCorp 2.0 with more parallel data, again by finding various new sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7." }, { "text": "\u2022 Building an English\u2194Odia translation system utilizing the developed OdiEnCorp 2.0 corpus and other techniques (back translation, domain adaptation) and releasing it to users for non-commercial purposes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7." }, { "text": "Task Dev 1.0 Test 1.0 OdiEnCorp 1.0 EN-OD 4.3 4.1 OdiEnCorp 2.0 EN-OD 4.9 4.0 OdiEnCorp 1.0 OD-EN 9.4 8.6 OdiEnCorp 2.0 OD-EN 12.0 9.3 Table 5 : Scores on Dev and Test sets of OdiEnCorp 1.0 for the baseline NMT models trained on OdiEnCorp 1.0 vs. 
OdiEnCorp 2.0.", "cite_spans": [], "ref_spans": [ { "start": 135, "end": 142, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "sacreBLEU Training Corpus", "sec_num": null }, { "text": "\u2022 Promoting the corpus in other reputed machine translation campaigns focusing on low-resource languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "sacreBLEU Training Corpus", "sec_num": null }, { "text": "https://www.pmindia.gov.in/en/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "2. Clean the text by removing references, URLs, instructions, or any unnecessary content. 3. Segment articles into sentences, relying on the English/Odia full stop mark. 4. Align sentences between Odia and English.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://nayagarh.nic.in 6 https://ova.gov.in/en/ 7 https://ova.gov.in/de/odisha-tribal-dictionary-and-language/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The work was supported by an innovation project (under an Inno-Suisse grant) oriented to improve the automatic speech recognition and natural language understanding technologies for German (Title: SM2: Extracting Semantic Meaning from Spoken Material, funding application no. 29814.1 IP-ICT). In part, it was also supported by the EU H2020 project \"Real-time network, text, and speaker analytics for combating organized crime\" (ROXANNE, grant agreement: 833635) and by the grant 18-24210S of the Czech Science Foundation. This work has been using language resources and tools stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (projects LM2015071 and OP VVV VI CZ.02.1.01/0.0/0.0/16 013/0001781).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "8."
} ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Ocr error correction using statistical machine translation", "authors": [ { "first": "H", "middle": [], "last": "Afli", "suffix": "" }, { "first": "L", "middle": [], "last": "Barrault", "suffix": "" }, { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2016, "venue": "Int. J. Comput. Linguistics Appl", "volume": "7", "issue": "1", "pages": "175--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Afli, H., Barrault, L., and Schwenk, H. (2016). Ocr error correction using statistical machine translation. Int. J. Comput. Linguistics Appl., 7(1):175-191.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Preprocessing techniques in character recognition", "authors": [ { "first": "Y", "middle": [], "last": "Alginahi", "suffix": "" } ], "year": 2010, "venue": "Character recognition", "volume": "1", "issue": "", "pages": "1--19", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alginahi, Y. (2010). Preprocessing techniques in character recognition. Character recognition, 1:1-19.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Building a webbased parallel corpus and filtering out machine-translated text", "authors": [ { "first": "A", "middle": [], "last": "Antonova", "suffix": "" }, { "first": "A", "middle": [], "last": "Misyurev", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web", "volume": "", "issue": "", "pages": "136--144", "other_ids": {}, "num": null, "urls": [], "raw_text": "Antonova, A. and Misyurev, A. (2011). Building a web- based parallel corpus and filtering out machine-translated text. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 136-144. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond", "authors": [ { "first": "M", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artetxe, M. and Schwenk, H. (2019). Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610, Mar.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Align me: A framework to generate parallel corpus using ocrs and bilingual dictionaries", "authors": [ { "first": "P", "middle": [], "last": "Bakliwal", "suffix": "" }, { "first": "V", "middle": [], "last": "Devadath", "suffix": "" }, { "first": "C", "middle": [], "last": "Jawahar", "suffix": "" } ], "year": 2016, "venue": "Proc. of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)", "volume": "", "issue": "", "pages": "183--187", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bakliwal, P., Devadath, V., and Jawahar, C. (2016). Align me: A framework to generate parallel corpus using ocrs and bilingual dictionaries. In Proc. 
of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016), pages 183-187.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "An experiment to create parallel corpora for odia", "authors": [ { "first": "R", "middle": [], "last": "Balabantaray", "suffix": "" }, { "first": "D", "middle": [], "last": "Sahoo", "suffix": "" } ], "year": 2013, "venue": "International Journal of Computer Applications", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Balabantaray, R. and Sahoo, D. (2013). An experiment to create parallel corpora for odia. International Journal of Computer Applications, 67(19).", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "A constructive machine translation system for english to odia translation", "authors": [ { "first": "A", "middle": [ "K" ], "last": "Das", "suffix": "" }, { "first": "M", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "A", "middle": [ "K" ], "last": "Dash", "suffix": "" }, { "first": "C", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "H", "middle": [], "last": "Das", "suffix": "" } ], "year": 2018, "venue": "2018 International Conference on Communication and Signal Processing (ICCSP)", "volume": "", "issue": "", "pages": "854--0857", "other_ids": {}, "num": null, "urls": [], "raw_text": "Das, A. K., Pradhan, M., Dash, A. K., Pradhan, C., and Das, H. (2018). A constructive machine translation system for english to odia translation. In 2018 International Conference on Communication and Signal Processing (ICCSP), pages 0854-0857. IEEE.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Generating a training corpus for ocr post-correction using encoder-decoder model", "authors": [ { "first": "E", "middle": [], "last": "Dhondt", "suffix": "" }, { "first": "C", "middle": [], "last": "Grouin", "suffix": "" }, { "first": "B", "middle": [], "last": "Grau", "suffix": "" } ], "year": 2017, "venue": "Proc.
of IJCNLP", "volume": "1", "issue": "", "pages": "1006--1014", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dhondt, E., Grouin, C., and Grau, B. (2017). Generating a training corpus for OCR post-correction using encoder-decoder model. In Proc. of IJCNLP (Volume 1: Long Papers), pages 1006-1014.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "A review paper on text summarization", "authors": [ { "first": "D", "middle": [ "K" ], "last": "Gaikwad", "suffix": "" }, { "first": "C", "middle": [ "N" ], "last": "Mahender", "suffix": "" } ], "year": 2016, "venue": "International Journal of Advanced Research in Computer and Communication Engineering", "volume": "5", "issue": "3", "pages": "154--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gaikwad, D. K. and Mahender, C. N. (2016). A review paper on text summarization. International Journal of Advanced Research in Computer and Communication Engineering, 5(3):154-160.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "PMIndia - a collection of parallel corpora of languages of India", "authors": [ { "first": "B", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "F", "middle": [], "last": "Kirefu", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2001.09907" ] }, "num": null, "urls": [], "raw_text": "Haddow, B. and Kirefu, F. (2020). PMIndia - a collection of parallel corpora of languages of India.
arXiv preprint arXiv:2001.09907.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [ { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "M", "middle": [], "last": "Schuster", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "M", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "N", "middle": [], "last": "Thorat", "suffix": "" }, { "first": "F", "middle": [], "last": "Vi\u00e9gas", "suffix": "" }, { "first": "M", "middle": [], "last": "Wattenberg", "suffix": "" }, { "first": "G", "middle": [], "last": "Corrado", "suffix": "" }, { "first": "M", "middle": [], "last": "Hughes", "suffix": "" }, { "first": "J", "middle": [], "last": "Dean", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Johnson, M., Schuster, M., Le, Q. V., Krikun, M., Wu, Y., Chen, Z., Thorat, N., Vi\u00e9gas, F., Wattenberg, M., Corrado, G., Hughes, M., and Dean, J. (2017). Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Text image extraction and summarization", "authors": [ { "first": "N", "middle": [], "last": "Joshi", "suffix": "" } ], "year": 2019, "venue": "Asian Journal For Convergence In Technology (AJCT)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Joshi, N. (2019). Text image extraction and summarization.
Asian Journal For Convergence In Technology (AJCT).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "G", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Y", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Y", "middle": [], "last": "Deng", "suffix": "" }, { "first": "J", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "A", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proc. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Klein, G., Kim, Y., Deng, Y., Senellart, J., and Rush, A. M. (2017). OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Transfer learning across languages from someone else's NMT model", "authors": [ { "first": "T", "middle": [], "last": "Kocmi", "suffix": "" }, { "first": "O", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1909.10955" ] }, "num": null, "urls": [], "raw_text": "Kocmi, T. and Bojar, O. (2019). Transfer learning across languages from someone else's NMT model. arXiv preprint arXiv:1909.10955.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Extracting parallel paragraphs from Common Crawl", "authors": [ { "first": "J", "middle": [], "last": "K\u00fadela", "suffix": "" }, { "first": "I", "middle": [], "last": "Holubov\u00e1", "suffix": "" }, { "first": "O", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2017, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "", "issue": "107", "pages": "36--59", "other_ids": {}, "num": null, "urls": [], "raw_text": "K\u00fadela, J., Holubov\u00e1, I., and Bojar, O. (2017). Extracting parallel paragraphs from Common Crawl.
The Prague Bulletin of Mathematical Linguistics, (107):36-59.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "OdiEnCorp: Odia-English and Odia-Only Corpus for Machine Translation", "authors": [ { "first": "S", "middle": [], "last": "Parida", "suffix": "" }, { "first": "O", "middle": [], "last": "Bojar", "suffix": "" }, { "first": "S", "middle": [ "R" ], "last": "Dash", "suffix": "" } ], "year": 2020, "venue": "Smart Intelligent Computing and Applications", "volume": "", "issue": "", "pages": "495--504", "other_ids": {}, "num": null, "urls": [], "raw_text": "Parida, S., Bojar, O., and Dash, S. R. (2020). OdiEnCorp: Odia-English and Odia-Only Corpus for Machine Translation. In Smart Intelligent Computing and Applications, pages 495-504. Springer.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Training tips for the transformer model", "authors": [ { "first": "M", "middle": [], "last": "Popel", "suffix": "" }, { "first": "O", "middle": [], "last": "Bojar", "suffix": "" } ], "year": 2018, "venue": "The Prague Bulletin of Mathematical Linguistics", "volume": "110", "issue": "1", "pages": "43--70", "other_ids": {}, "num": null, "urls": [], "raw_text": "Popel, M. and Bojar, O. (2018). Training tips for the transformer model. The Prague Bulletin of Mathematical Linguistics, 110(1):43-70.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A fast and efficient framework for creating parallel corpus", "authors": [ { "first": "B", "middle": [], "last": "Premjith", "suffix": "" }, { "first": "S", "middle": [ "S" ], "last": "Kumar", "suffix": "" }, { "first": "R", "middle": [], "last": "Shyam", "suffix": "" }, { "first": "M", "middle": [ "A" ], "last": "Kumar", "suffix": "" }, { "first": "K", "middle": [], "last": "Soman", "suffix": "" } ], "year": 2016, "venue": "Indian J. Sci. Technol", "volume": "9", "issue": "", "pages": "1--7", "other_ids": {}, "num": null, "urls": [], "raw_text": "Premjith, B., Kumar, S. S., Shyam, R., Kumar, M.
A., and Soman, K. (2016). A fast and efficient framework for creating parallel corpus. Indian J. Sci. Technol, 9:1-7.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "A shallow parser-based Hindi to Odia machine translation system", "authors": [ { "first": "J", "middle": [], "last": "Rautaray", "suffix": "" }, { "first": "A", "middle": [], "last": "Hota", "suffix": "" }, { "first": "S", "middle": [ "S" ], "last": "Gochhayat", "suffix": "" } ], "year": 2019, "venue": "Computational Intelligence in Data Mining", "volume": "", "issue": "", "pages": "51--62", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rautaray, J., Hota, A., and Gochhayat, S. S. (2019). A shallow parser-based Hindi to Odia machine translation system. In Computational Intelligence in Data Mining, pages 51-62. Springer.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Filtering and mining parallel data in a joint multilingual space", "authors": [ { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" } ], "year": 2018, "venue": "Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "228--234", "other_ids": {}, "num": null, "urls": [], "raw_text": "Schwenk, H. (2018). Filtering and mining parallel data in a joint multilingual space. In Proc. of ACL (Volume 2: Short Papers), pages 228-234. Association for Computational Linguistics, July.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "An overview of the Tesseract OCR engine", "authors": [ { "first": "R", "middle": [], "last": "Smith", "suffix": "" } ], "year": 2007, "venue": "Ninth International Conference on Document Analysis and Recognition", "volume": "2", "issue": "", "pages": "629--633", "other_ids": {}, "num": null, "urls": [], "raw_text": "Smith, R. (2007). An overview of the Tesseract OCR engine. In Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), volume 2, pages 629-633.
IEEE.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Attention is all you need", "authors": [ { "first": "A", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "N", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "J", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "L", "middle": [], "last": "Jones", "suffix": "" }, { "first": "A", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "I", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Tensor2tensor for neural machine translation", "authors": [ { "first": "A", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "S", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "E", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "F", "middle": [], "last": "Chollet", "suffix": "" }, { "first": "A", "middle": [], "last": "Gomez", "suffix": "" }, { "first": "S", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "L", "middle": [], "last": "Jones", "suffix": "" }, { "first": "\u0141", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "N", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "N", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "R", "middle": [], "last": "Sepassi", "suffix": "" }, { "first": "N", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "J", "middle": [], "last": "Uszkoreit",
"suffix": "" } ], "year": 2018, "venue": "Proc. of AMTA", "volume": "1", "issue": "", "pages": "193--199", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vaswani, A., Bengio, S., Brevdo, E., Chollet, F., Gomez, A., Gouws, S., Jones, L., Kaiser, \u0141., Kalchbrenner, N., Parmar, N., Sepassi, R., Shazeer, N., and Uszkoreit, J. (2018). Tensor2tensor for neural machine translation. In Proc. of AMTA (Volume 1: Research Papers), pages 193-199.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Block diagram of the corpus building process. The parallel data were collected from various sources (online/offline) and processed using both automatic and manual processing to build the final corpus OdiEnCorp 2.0.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "(a) Sample scanned image of parallel (English-Odia) data. (b) Extracted parallel data.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "An illustration of the scanned image containing parallel English-Odia data and extracted data.", "num": null, "type_str": "figure" }, "FIGREF3": { "uris": null, "text": "Figure 3: Dilation", "num": null, "type_str": "figure" }, "FIGREF4": { "uris": null, "text": "Erosion, where R is the maximum standard deviation of all the windows and M is the gray level of the current image.", "num": null, "type_str": "figure" }, "FIGREF5": { "uris": null, "text": "Learning curve (EN\u2192OD)", "num": null, "type_str": "figure" }, "FIGREF6": { "uris": null, "text": "Figure 6; footnotes: 8 http://opennmt.net/OpenNMT-py/quickstart.html, 9 https://nvidia.github.io/OpenSeq2Seq/html/api-docs/optimizers.html", "num": null, "type_str": "figure" }, "TABREF1": { "text": "Statistics of OdiEnCorp 1.0.", "type_str": "table", "content": "", "num": null, "html": null }, "TABREF3": { "text": "OdiEnCorp 2.0 parallel corpus details. Training, dev and test sets together.", "type_str": "table", "content": "
", "num": null, "html": null }, "TABREF5": { "text": "OdiEnCorp 2.0 processed for NMT experiments.", "type_str": "table", "content": "
Learning curve residue: sacreBLEU (y-axis, 4.5 to 5.5) over training steps (x-axis, 1 to 4, x10^4) for Dev 2.0 and Test 2.0.
", "num": null, "html": null }, "TABREF6": { "text": "Results for baseline NMT on Dev and Test sets for OdiEnCorp 2.0.", "type_str": "table", "content": "", "num": null, "html": null } } } }