{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T04:33:42.127171Z" }, "title": "Multilingual Neural Machine Translation involving Indian Languages", "authors": [ { "first": "Pulkit", "middle": [], "last": "Madaan", "suffix": "", "affiliation": { "laboratory": "", "institution": "UQAM Delhi India", "location": { "settlement": "Montreal", "country": "Canada" } }, "email": "" }, { "first": "Fatiha", "middle": [], "last": "Sadat", "suffix": "", "affiliation": { "laboratory": "", "institution": "UQAM Delhi India", "location": { "settlement": "Montreal", "country": "Canada" } }, "email": "sadat.fatiha@uqam.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural Machine Translations (NMT) models are capable of translating a single bilingual pair and require a new model for each new language pair. Multilingual Neural Machine Translation models are capable of translating multiple language pairs, even pairs which it hasn't seen before in training. Availability of parallel sentences is a known problem in machine translation. Multilingual NMT model leverages information from all the languages to improve itself and performs better. We propose a data augmentation technique that further improves this model profoundly. The technique helps achieve a jump of more than 15 points in BLEU score from the Multilingual NMT Model. A BLEU score of 36.2 was achieved for Sindhi-English translation, which is higher than any score on the leaderboard of the LoResMT SharedTask at MT Summit 2019, which provided the data for the experiments.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Neural Machine Translations (NMT) models are capable of translating a single bilingual pair and require a new model for each new language pair. Multilingual Neural Machine Translation models are capable of translating multiple language pairs, even pairs which it hasn't seen before in training. Availability of parallel sentences is a known problem in machine translation. Multilingual NMT model leverages information from all the languages to improve itself and performs better. We propose a data augmentation technique that further improves this model profoundly. The technique helps achieve a jump of more than 15 points in BLEU score from the Multilingual NMT Model. A BLEU score of 36.2 was achieved for Sindhi-English translation, which is higher than any score on the leaderboard of the LoResMT SharedTask at MT Summit 2019, which provided the data for the experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "A lot of the models for end-to-end NMT are trained for single language pairs. Google's Multilingual NMT (Johnson et al., 2017 ) is a single model capable of translating to and from many languages. The model is fed a token identifying a target language uniquely along with the source language sentence. This allows the model to translate between pairs for which the model hasn't seen parallel data, essentially zero-shot translations. The model is also able to improve upon individual translation qualities too by the help of other languages. NMT suffers from the lack of data. And as Arivazhagan et al.(2019b) and Koehn et al.(2017) too recognize, lack of data makes NMT a non-trivial challenge for low-resource languages. Multilingual NMT is a step towrds solving this problem which leverages data from other language pairs and does an implicit transfer learning. 
We propose to improve this quality further with a data augmentation technique that was able to roughly double the BLEU scores in our experiments. The technique is simple and can work with any model. We show that artificially increasing the amount of training data, in ways as simple as swapping the source and target sentences or using the same sentence as both source and target, can improve the BLEU scores significantly. We also show that, since all language pairs share the same encoder and the same decoder, the model can leverage data from high-resource language pairs, in a form of transfer learning, to learn better translations for low-resource pairs: using Hindi-English data in training improved the BLEU scores for {Bhojpuri, Sindhi, Magahi}<>English. The structure of the present paper is as follows: Section 2 presents the state of the art. Section 3 presents our proposed methodology. Section 4 describes the corpora used in this research. In Section 5, we put forward our experiments and evaluations, perform an ablative analysis and compare our system's performance with Google's multilingual NMT (Johnson et al., 2017). Section 6 compares our results with the other methods that participated in the LoResMT Shared Task (Karakanta et al., 2019) at the MT Summit 2019. Finally, in Section 7, we state our conclusions and perspectives for future research.", "cite_spans": [ { "start": 104, "end": 125, "text": "(Johnson et al., 2017", "ref_id": null }, { "start": 584, "end": 609, "text": "Arivazhagan et al.(2019b)", "ref_id": "BIBREF3" }, { "start": 614, "end": 632, "text": "Koehn et al.(2017)", "ref_id": null }, { "start": 2051, "end": 2073, "text": "(Johnson et al., 2017)", "ref_id": null }, { "start": 2172, "end": 2196, "text": "(Karakanta et al., 2019)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "Significant progress has been made in end-to-end NMT (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015) and some work has been done to adapt it to a multilingual setting. However, before the multilingual approach of Johnson et al. (2017), none of these approaches had a single model capable of dealing with multiple language pairs in a many-to-many setting. Dong et al. (2015) use different decoders and attention layers for different target languages. Firat et al. (2016) use a shared attention layer but an encoder per source language and a decoder per target language. Lee et al. (2017) share a single model across all pairs, but it can only be used for a single target language. The model proposed by Johnson et al. (2017) is a single model for the many-to-many task and can also operate in a zero-shot setting, where it translates sentences between pairs whose parallel data was not seen during training. Arivazhagan et al. (2019a) also propose a model for zero-shot translation that improves upon Google's multilingual NMT model (Johnson et al., 2017) and achieves results on par with pivoting. They use English as the pivot language and feed the target-language token to the decoder instead of the encoder. To make the encoder more independent of the source language, they maximise the similarity between all sentence embeddings and the embeddings of their English parallel sentences while minimizing the translation cross-entropy loss; they use a discriminator and train the encoder adversarially for this similarity maximisation.
Artetxe et al.(2018) and Yang et al.(2018) also train the encoder adversarially to learn a shared latent space. There has been a lot of work done to improve NMT models using data augmentation. Sennrich et al (2016a) proposed automatic back-translation to augment the dataset. But, as mentioned in SwitchOut (Wang et al., 2018) faces challenges in initial models. Fadaee et al. 2017propose Figure 1 : An example of different augments. Here the low resource pair of languages is English-Hindi, and the high resource pair language set is English-French a data augmentation technique where they synthesise new data by replacing a common word in the source sentence with a rare word and the corresponding word in the target sentence with its translation. And to maintain the syntactic validity of the sentence, they use an LSTM language model. Zhu et al. 2019propose a method in which they obtain parallel sentences from multilingual websites. They scrape the websites to get monolingual data on which they learn word embeddings. These embeddings are used to induce a bilingual lexicon and then use a trained model to identify parallel sentences. Ours is a much simpler way, which does not require an additional model, is end-to-ed trainable and is still at par with some Statistical Machine Translation methods submitted at the SharedTask.", "cite_spans": [ { "start": 53, "end": 71, "text": "(Cho et al., 2014;", "ref_id": "BIBREF6" }, { "start": 72, "end": 94, "text": "Sutskeveret al., 2014;", "ref_id": null }, { "start": 95, "end": 117, "text": "Bahdanau et al., 2015)", "ref_id": "BIBREF5" }, { "start": 227, "end": 247, "text": "Johnson et al., 2017", "ref_id": null }, { "start": 368, "end": 385, "text": "Dong et al.(2015)", "ref_id": null }, { "start": 462, "end": 480, "text": "Firat et al.(2016)", "ref_id": "BIBREF10" }, { "start": 580, "end": 596, "text": "Lee et al.(2017)", "ref_id": "BIBREF17" }, { "start": 737, "end": 757, "text": "Johnson et al.(2017)", "ref_id": null }, { "start": 957, "end": 982, "text": "Arivazhagan et al.(2019a)", "ref_id": null }, { "start": 1081, "end": 1103, "text": "(Johnson et al., 2017)", "ref_id": null }, { "start": 1575, "end": 1595, "text": "Artetxe et al.(2018)", "ref_id": null }, { "start": 1600, "end": 1617, "text": "Yang et al.(2018)", "ref_id": "BIBREF24" }, { "start": 1882, "end": 1901, "text": "(Wang et al., 2018)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 1964, "end": 1972, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Related Work", "sec_num": "2." }, { "text": "The technique we propose is simple consists of four components named Forward, Backward, Self and High. Forward augmentation is the given data itself. Backward augmentation is generated by switching the source and target label in the Forward Data, so the source sentence becomes the target sentence and vice versa in parallel sentence pair. Self augmentation is generated by using only the required language from the parallel sentences and cloning them as their own target sentences, so the source and target sentence are the same. An example of the augmentations is shown in Figure 1 . We know that translation models improve with increase in data and since we also have the same encoder for every language, we can use a language pair that is similar to the language pairs of the task and is a high resource pair to further improve the encoder in encoding source independent embeddings, for transfer learning through the Multilingual architecture of Johnson et al.(2017) . 
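The following is a minimal, hedged sketch (our own illustration; the function name, the data layout and the "<2xx>" token format are assumptions, not the paper's code) of how the Forward, Backward and Self augmentations can be built from a list of parallel sentence pairs; the High augmentation simply applies the same three operations to a high-resource pair such as Hindi-English.

```python
# Illustrative sketch of the proposed augmentations (assumed data layout:
# a list of (source, target) sentence pairs for one language pair).
# Forward keeps the pair as given, Backward swaps source and target, and
# Self clones each sentence as its own target. A target-language token is
# prepended to every source side, as in the multilingual setup.
def augment(pairs, src_lang, tgt_lang):
    examples = []
    for src, tgt in pairs:
        examples.append((f"<2{tgt_lang}> {src}", tgt))   # Forward
        examples.append((f"<2{src_lang}> {tgt}", src))   # Backward
        examples.append((f"<2{src_lang}> {src}", src))   # Self (source side)
        examples.append((f"<2{tgt_lang}> {tgt}", tgt))   # Self (target side)
    return examples

# Toy placeholders; the Shared Task and IITB corpora would be loaded here.
sindhi_english_pairs = [("<Sindhi sentence>", "<its English translation>")]
hindi_english_pairs = [("<Hindi sentence>", "<its English translation>")]

# Low-resource pair plus, for the High augmentation, the same call on a
# high-resource pair:
train = augment(sindhi_english_pairs, "sd", "en") + augment(hindi_english_pairs, "hi", "en")
```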
So we propose Multilingual+, which uses the above-mentioned three augmentations (Forward, Backward, Self) along with a fourth, High augmentation: High augmentation adds a high-resource language pair, such as Hindi-English parallel data, in its Forward, Backward and Self forms. This helps in improving the translation models of the low-resource pairs {Bhojpuri, Sindhi, Magahi}<>English. Data for pairs 1-3 were made available by the Shared Task at MT Summit 2019, while data for pair 4 was obtained from the IIT Bombay English-Hindi Corpus (Kunchukuttan et al., 2018). The Train-Val-Test splits were used as given by the respective data providers.", "cite_spans": [ { "start": 950, "end": 970, "text": "Johnson et al.(2017)", "ref_id": null }, { "start": 1506, "end": 1533, "text": "(Kunchukuttan et al., 2018)", "ref_id": null } ], "ref_spans": [ { "start": 575, "end": 583, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "The Proposed Methodology", "sec_num": "3." }, { "text": "We performed experiments with the Multilingual+ model and show, through an ablative analysis, how adding each of the proposed augmentations improves performance. After augmentation, a target-language token is prepended to every source sentence. Joint Byte-Pair Encoding is learnt for subword segmentation (Sennrich et al., 2016b) to address the problem of rare words: the Byte-Pair Encoding was learnt over the training data and used to segment subwords in both the training and the test data, and a joint dictionary was learnt over all the languages. This is the only pre-processing we do besides the augmentation. The basic architecture is the same as in Johnson et al. (2017): a single encoder and a single decoder shared across all the languages. The Adam optimizer (Kingma and Ba, 2015) was used, with initial beta values of 0.9 and 0.98, along with label smoothing and dropout (0.3). The following are the augmentations included in Multilingual+:", "cite_spans": [ { "start": 305, "end": 329, "text": "(Sennrich et al., 2016b)", "ref_id": "BIBREF21" }, { "start": 659, "end": 679, "text": "Johnson et al.(2017)", "ref_id": null }, { "start": 747, "end": 768, "text": "(Kingma and Ba, 2015)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "5."
}, { "text": "Sindhi-to-English, Bhojpuri-to-English, Magahi-to-English", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "\u2022 Backward English-to-Sindhi, English-to-Bhojpuri, English-to-Magahi", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "\u2022 Self Sindhi-to-Sindhi, Bhojpuri-to-Bhojpuri, Magahi-to-Magahi, English-to-English", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "\u2022 High Hindi-to-English, English-to-Hindi, Hindi-to-Hindi \u2022 Base This is the standard model as used in (Johnson et al., 2017) , hence it uses only Forward and forms our baseline.", "cite_spans": [ { "start": 103, "end": 125, "text": "(Johnson et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "\u2022 Base + Back We add Backward augmentation to the baseline model", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "\u2022 Base + Back + Self We add Self & Backward augmentation to the baseline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "\u2022 Multilingual+ This uses all the augmentations:High along with Forward, Backward & Self.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "Parameters and training procedures are set as in Johnson et al.(2017) . PyTorch Sequence-to-Sequence library, fairseq (Ott et al., 2019) , was used to run the experiments. Table 1 shows that Multilingual+ consistently outperforms the others. The table also confirms that the more augmentations you add to the Multilingual NMT model (Johnson et al., 2017) , the more it improves. Adding Backward, then Self and then a new language pair improved the results at each level. All the BLEU scores reported, except star ( * ) marked, are calculated using SacreBLEU (Post, 2018) on the development set provided.", "cite_spans": [ { "start": 49, "end": 69, "text": "Johnson et al.(2017)", "ref_id": null }, { "start": 118, "end": 136, "text": "(Ott et al., 2019)", "ref_id": "BIBREF18" }, { "start": 333, "end": 355, "text": "(Johnson et al., 2017)", "ref_id": null }, { "start": 559, "end": 571, "text": "(Post, 2018)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 172, "end": 180, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "\u2022 Forward", "sec_num": null }, { "text": "We compared our results with other models submitted at the LoResMT Shared Task at the MT Summit 2019. The submission to the Shared Task followed a naming convention to distinguish between different types of corpora used, which we will follow too. The different types of corpora and their abbreviations are as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "6." }, { "text": "\u2022 Only the provided parallel corpora [-a] \u2022 Only the provided parallel and monolingual corpora [-b] Using these abbreviations the methods were named in the following manner\"", "cite_spans": [ { "start": 37, "end": 41, "text": "[-a]", "ref_id": null }, { "start": 95, "end": 99, "text": "[-b]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "6." 
}, { "text": "---", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Comparisons", "sec_num": "6." }, { "text": "Our Team Code was L19T3 and we submitted Base (as Method 1), Base+Back (as Method 2) and Base+Back+Self (as Method 3) all under -a category. Multilingual+ was developed later. Table 2 shows the top 3 performers in different translation directions along with Multilingual+. Method3-b from team L19T2 is a Phrase Based Statistical Machine Translation model. While their Method2-a is an NMT model that uses a sequence-to-sequence approach along with self-attention. pbmt-a model from team L19T5 is again a Phrase Based Statistical Machine Translation model. While their xform-a model is an NMT model. Both of the NMT models of the other teams train a different model for different language pairs, one for each, while ours is a one for all model. Multilingual+ is the best performer in Sin-to-Eng and Mag-to-Eng task, second best performer in Eng-to-Sin and Bho-to-Eng tasks. These results show the superiority of our simple approach. Our data augmentation technique is comparable or better than the best of the methods on the leaderboard of the SharedTask. In Eng-to-Sin task L19T2-Eng2Sin-Method3-b scores the best while the second best is Multinlingual+. This could be because the former is a Statistical Machine Translation Model. Though, it surpasses the L19T2's NMT model. For Bho-to-Eng it is able to surpass pbmt-a of team L19T5 it still lags behind their NMT model. This can be explained as we have more data for Sindhi than Bhojpuri and though we were able to improve the performance by augmenting data, it still remains behind statistical machine translation approach of L19T2. The success of our simple approach can be attributed to its conjunction with Multilingual NMT. Multilingual NMT is able to use data of all langugaes to improve them all together, and by even further increasing this data, we improve the model greatly.", "cite_spans": [], "ref_spans": [ { "start": 176, "end": 183, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Comparisons", "sec_num": "6." }, { "text": "We have presented a simple data augmentation technique coupled with a multilingual transformer that gives a jump of 15 points in BLEU score without any new data and 20 points in BLEU score if a rich resource language pair is introduced, over a standard multilingual transformer. It performs at par or better than best models submitted at the Shared Task. This demonstrates that a multilingual transformer is sensitive to the amount of data used and a simple augmentation technique like ours can provide a significant boost in BLEU scores. Back-translation (Sennrich et al., 2016a) can be coupled with our approach to experiment and analyse the effectiveness of this amalgam.", "cite_spans": [ { "start": 556, "end": 580, "text": "(Sennrich et al., 2016a)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusion and Future Work", "sec_num": "7." 
} ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Sin-to-Eng Eng-to-Sin 1 L19T2-Sin2Eng-Method3-b 31.32 L19T2-Eng2Sin-Method3-b 37.58 2 Base+Back+Self 30.77 L19T2-Eng2Sin-Method2-a 25", "authors": [], "year": null, "venue": "", "volume": "17", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sin-to-Eng Eng-to-Sin 1 L19T2-Sin2Eng-Method3-b 31.32 L19T2-Eng2Sin-Method3-b 37.58 2 Base+Back+Self 30.77 L19T2-Eng2Sin-Method2-a 25.17 3 L19T5-sin2eng-xform-a", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Rank Bho-to-Eng Mag-to-Eng 1 L19T2-Bho2Eng-Method3-b 17.03 L19T2-Mag2Eng-Method3-b 9", "authors": [], "year": null, "venue": "", "volume": "71", "issue": "", "pages": "19--24", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rank Bho-to-Eng Mag-to-Eng 1 L19T2-Bho2Eng-Method3-b 17.03 L19T2-Mag2Eng-Method3-b 9.71 2 L19T5-bho2eng-xform-a", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Table 2: Top 3 performers in LoResMT Shared Task in different translation directions along with Multilingual+ 8", "authors": [ { "first": "N", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "A", "middle": [], "last": "Firat", "suffix": "" }, { "first": "O", "middle": [], "last": "Aharoni", "suffix": "" }, { "first": "R", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "M", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "W", "middle": [], "last": "", "suffix": "" } ], "year": 2019, "venue": "Bibliographical References Arivazhagan", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Table 2: Top 3 performers in LoResMT Shared Task in different translation directions along with Multilingual+ 8. Bibliographical References Arivazhagan, N., Bapna, A., Firat, O., Aharoni, R., John- son, M., and Macherey, W. (2019a). The missing in- gredient in zero-shot neural machine translation. CoRR, abs/1903.07091.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Massively multilingual neural machine translation in the wild: Findings and challenges", "authors": [ { "first": "N", "middle": [], "last": "Arivazhagan", "suffix": "" }, { "first": "A", "middle": [], "last": "Bapna", "suffix": "" }, { "first": "O", "middle": [], "last": "Firat", "suffix": "" }, { "first": "D", "middle": [], "last": "Lepikhin", "suffix": "" }, { "first": "M", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "M", "middle": [], "last": "Krikun", "suffix": "" }, { "first": "M", "middle": [ "X" ], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [], "last": "Cao", "suffix": "" }, { "first": "G", "middle": [], "last": "Foster", "suffix": "" }, { "first": "C", "middle": [], "last": "Cherry", "suffix": "" }, { "first": "W", "middle": [], "last": "Macherey", "suffix": "" }, { "first": "Z", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Y", "middle": [], "last": "Wu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arivazhagan, N., Bapna, A., Firat, O., Lepikhin, D., John- son, M., Krikun, M., Chen, M. X., Cao, Y., Foster, G., Cherry, C., Macherey, W., Chen, Z., and Wu, Y. (2019b). Massively multilingual neural machine translation in the wild: Findings and challenges. 
CoRR, abs/1907.05019.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Unsupervised neural machine translation", "authors": [ { "first": "M", "middle": [], "last": "Artetxe", "suffix": "" }, { "first": "G", "middle": [], "last": "Labaka", "suffix": "" }, { "first": "E", "middle": [], "last": "Agirre", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Artetxe, M., Labaka, G., Agirre, E., and Cho, K. (2017). Unsupervised neural machine translation. CoRR, abs/1710.11041.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Neural machine translation by jointly learning to align and translate", "authors": [ { "first": "D", "middle": [], "last": "Bahdanau", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bahdanau, D., Cho, K., and Bengio, Y. (2015). Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation", "authors": [ { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "B", "middle": [], "last": "Van Merrienboer", "suffix": "" }, { "first": "", "middle": [], "last": "G\u00fcl\u00e7ehre", "suffix": "" }, { "first": "F", "middle": [], "last": "Bougares", "suffix": "" }, { "first": "H", "middle": [], "last": "Schwenk", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2014, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Cho, K., van Merrienboer, B., G\u00fcl\u00e7ehre, \u00c7., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using rnn encoder-decoder for statistical machine translation. ArXiv, abs/1406.1078.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Multi-task learning for multiple language translation", "authors": [], "year": null, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Multi-task learning for multiple language translation. In ACL.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Data augmentation for low-resource neural machine translation", "authors": [ { "first": "M", "middle": [], "last": "Fadaee", "suffix": "" }, { "first": "A", "middle": [], "last": "Bisazza", "suffix": "" }, { "first": "C", "middle": [], "last": "Monz", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "2", "issue": "", "pages": "567--573", "other_ids": {}, "num": null, "urls": [], "raw_text": "Fadaee, M., Bisazza, A., and Monz, C. (2017). Data augmentation for low-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 567-573, Vancouver, Canada, July. 
Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Multi-way, multilingual neural machine translation with a shared attention mechanism", "authors": [ { "first": "O", "middle": [], "last": "Firat", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Y", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2016, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Firat, O., Cho, K., and Bengio, Y. (2016). Multi-way, multilingual neural machine translation with a shared attention mechanism. ArXiv, abs/1601.01073.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Google's multilingual neural machine translation system: Enabling zero-shot translation", "authors": [], "year": null, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "339--351", "other_ids": {}, "num": null, "urls": [], "raw_text": "Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "European Association for Machine Translation", "authors": [ { "first": "Alina", "middle": [], "last": "Karakanta", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alina Karakanta, et al., editors. (2019). Proceedings of the 2nd Workshop on Technologies for MT of Low Resource Languages, Dublin, Ireland, August. European Association for Machine Translation.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "D", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "J", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "3rd International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The IIT Bombay English-Hindi parallel corpus. Language Resources and Evaluation Conference", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "The IIT Bombay English-Hindi parallel corpus. Language Resources and Evaluation Conference, 10.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Fully character-level neural machine translation without explicit segmentation", "authors": [ { "first": "J", "middle": [ "D" ], "last": "Lee", "suffix": "" }, { "first": "K", "middle": [], "last": "Cho", "suffix": "" }, { "first": "T", "middle": [], "last": "Hofmann", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "365--378", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee, J. D., Cho, K., and Hofmann, T. (2017). Fully character-level neural machine translation without explicit segmentation. 
Transactions of the Association for Computational Linguistics, 5:365-378.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "fairseq: A fast, extensible toolkit for sequence modeling", "authors": [ { "first": "M", "middle": [], "last": "Ott", "suffix": "" }, { "first": "S", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "A", "middle": [], "last": "Baevski", "suffix": "" }, { "first": "A", "middle": [], "last": "Fan", "suffix": "" }, { "first": "S", "middle": [], "last": "Gross", "suffix": "" }, { "first": "N", "middle": [], "last": "Ng", "suffix": "" }, { "first": "D", "middle": [], "last": "Grangier", "suffix": "" }, { "first": "M", "middle": [], "last": "Auli", "suffix": "" } ], "year": 2019, "venue": "Proceedings of NAACL-HLT 2019: Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., Grangier, D., and Auli, M. (2019). fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "A call for clarity in reporting BLEU scores", "authors": [ { "first": "M", "middle": [], "last": "Post", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Third Conference on Machine Translation: Research Papers", "volume": "", "issue": "", "pages": "186--191", "other_ids": {}, "num": null, "urls": [], "raw_text": "Post, M. (2018). A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium, October. Association for Computational Linguistics.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "R", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "B", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "A", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "ArXiv", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sennrich, R., Haddow, B., and Birch, A. (2016a). Improving neural machine translation models with monolingual data. ArXiv, abs/1511.06709.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "R", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "B", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "A", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1715--1725", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sennrich, R., Haddow, B., and Birch, A. (2016b). Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany, August. 
Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Sequence to sequence learning with neural networks", "authors": [ { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "O", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" } ], "year": 2014, "venue": "NIPS", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In NIPS.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "SwitchOut: an efficient data augmentation algorithm for neural machine translation", "authors": [ { "first": "X", "middle": [], "last": "Wang", "suffix": "" }, { "first": "H", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Z", "middle": [], "last": "Dai", "suffix": "" }, { "first": "G", "middle": [], "last": "Neubig", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "856--861", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wang, X., Pham, H., Dai, Z., and Neubig, G. (2018). SwitchOut: an efficient data augmentation algorithm for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 856-861, Brussels, Belgium, October-November. Association for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Unsupervised neural machine translation with weight sharing", "authors": [ { "first": "Z", "middle": [], "last": "Yang", "suffix": "" }, { "first": "W", "middle": [], "last": "Chen", "suffix": "" }, { "first": "F", "middle": [], "last": "Wang", "suffix": "" }, { "first": "B", "middle": [], "last": "Xu", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yang, Z., Chen, W., Wang, F., and Xu, B. (2018). Unsupervised neural machine translation with weight sharing. CoRR, abs/1804.09057.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Parallel data from four different language pairs are used in the experiments. Following are the language pairs along with the number of parallel sentences of each pair: 1.014) 2.710) 3.999) 4.561,840)", "uris": null }, "TABREF0": { "text": "Sin-to-Eng Eng-to-Sin Bho-to-Eng Eng-to-Bho Mag-to-Eng Eng-to-Mag. Table 1: BLEU scores of different language pairs and directions in the different experiments. Results on test data evaluated by the Shared Task at MT Summit 2019 committee.", "num": null, "type_str": "table", "html": null, "content": "
Method               Sin-to-Eng  Eng-to-Sin  Bho-to-Eng  Eng-to-Bho  Mag-to-Eng  Eng-to-Mag
Base                 15.74*      -           6.11*       -           2.46*       -
Base + Back          18.09*      11.38*      5.01*       0.2         2.55*       0.2
Base + Back + Self   30.77*      18.98*      7.38*       0.6         4.61*       1.2
Multilingual+ (\u2020)    36.2        28.8        15.6        3.7         13.3        3.5
\u2020 Not submitted for the SharedTask
" } } } }