valhalla committed on
Commit
ff10323
1 Parent(s): e9ada2e
README.md ADDED
@@ -0,0 +1,67 @@
---
language: multilingual
tags:
- mbart-50
---

# mBART-50

mBART-50 is a multilingual Sequence-to-Sequence model pre-trained using the "Multilingual Denoising Pretraining" objective. It was introduced in the paper [Multilingual Translation with Extensible Multilingual Pretraining and Finetuning](https://arxiv.org/abs/2008.00401).

## Model description

mBART-50 is a multilingual Sequence-to-Sequence model. It was introduced to show that multilingual translation models can be created through multilingual fine-tuning.
Instead of fine-tuning on one direction, a pre-trained model is fine-tuned on many directions simultaneously. mBART-50 is created by extending the original mBART model with an extra 25 languages, to support multilingual machine translation across 50 languages. The pre-training objective is explained below.

**Multilingual Denoising Pretraining**: The model incorporates N languages by concatenating data:
`D = {D1, ..., DN}`, where each `Di` is a collection of monolingual documents in language `i`. The source documents are noised using two schemes:
first, randomly shuffling the order of the original sentences, and second, a novel in-filling scheme
in which spans of text are replaced with a single mask token. The model is then tasked with reconstructing the original text.
35% of each instance's words are masked by randomly sampling span lengths from a Poisson distribution `(λ = 3.5)`.
The decoder input is the original text offset by one position. A language id symbol `LID` is used as the initial token to predict the sentence.

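As an illustration of this objective, the snippet below is a minimal, word-level sketch of the two noising schemes (sentence shuffling and Poisson span in-filling). It is not the original fairseq implementation; the function name, the toy sentences, and the use of NumPy's Poisson sampler are assumptions made for the example.

```python
import random
import numpy as np

def noise_document(sentences, mask_token="<mask>", mask_ratio=0.35, poisson_lambda=3.5):
    """Toy corruption for denoising pretraining: shuffle sentences, then mask spans."""
    # Scheme 1: randomly shuffle the order of the original sentences.
    shuffled = sentences[:]
    random.shuffle(shuffled)
    words = " ".join(shuffled).split()

    # Scheme 2: in-filling -- replace spans of words with a SINGLE mask token,
    # drawing span lengths from a Poisson distribution (lambda = 3.5),
    # until roughly `mask_ratio` (35%) of the words have been masked.
    budget = int(len(words) * mask_ratio)
    while budget > 0 and len(words) > 1:
        span = max(1, int(np.random.poisson(poisson_lambda)))
        span = min(span, budget, len(words) - 1)
        start = random.randrange(0, len(words) - span + 1)
        words[start:start + span] = [mask_token]  # the whole span collapses to one token
        budget -= span
    return " ".join(words)

# The model is trained to reconstruct the clean text from the noised text; the decoder
# input is the clean text shifted by one position, starting with the language id token `LID`.
clean = ["The cat sat on the mat.", "It was warm and sunny.", "Birds sang outside."]
print(noise_document(clean))  # e.g. "Birds sang outside. The cat <mask> mat. It was <mask>"
```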

## Intended uses & limitations

`mbart-large-50` is a pre-trained model and is primarily intended to be fine-tuned on translation tasks. It can also be fine-tuned on other multilingual sequence-to-sequence tasks.
See the [model hub](https://huggingface.co/models?filter=mbart-50) to look for fine-tuned versions.

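For example, a translation-fine-tuned checkpoint from the hub can be used for inference as sketched below. The checkpoint name `facebook/mbart-large-50-many-to-many-mmt` is given only as an example of such a fine-tuned version; the base `mbart-large-50` model itself is not trained to translate.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

# Load a translation-fine-tuned checkpoint (example name; see the model hub for others).
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt", src_lang="en_XX")

inputs = tokenizer("UN Chief Says There Is No Military Solution in Syria", return_tensors="pt")

# Force the decoder to start with the Romanian language id token so the output is Romanian.
generated_tokens = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ro_RO"])
print(tokenizer.batch_decode(generated_tokens, skip_special_tokens=True))
```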

## Training

As the model is multilingual, it expects sequences in a specific format: a special language id token is used as a prefix in both the source and target text. The text format is `[lang_code] X [eos]`, where `X` is the source or target text and `lang_code` is the source language code for source text and the target language code for target text. `bos` is never used. Once the examples are prepared in this format, the model can be trained as any other sequence-to-sequence model.

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")

src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"

# Tokenize the source text; the tokenizer prepends the `en_XX` id token and appends `eos`.
model_inputs = tokenizer(src_text, return_tensors="pt")

# Tokenize the target text in target mode so it is prefixed with the `ro_RO` id token instead.
with tokenizer.as_target_tokenizer():
    labels = tokenizer(tgt_text, return_tensors="pt").input_ids

model(**model_inputs, labels=labels)  # forward pass
```
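
To check that the prepared examples match the `[lang_code] X [eos]` format described above, the ids can be converted back to tokens. This is just a sanity check on the snippet above; the exact subword tokens depend on the SentencePiece model.

```python
# The encoder input should start with "en_XX" and end with "</s>",
# and the labels should start with "ro_RO" and end with "</s>".
print(tokenizer.convert_ids_to_tokens(model_inputs["input_ids"][0].tolist()))
print(tokenizer.convert_ids_to_tokens(labels[0].tolist()))
```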

## Languages covered

Arabic (ar_AR), Czech (cs_CZ), German (de_DE), English (en_XX), Spanish (es_XX), Estonian (et_EE), Finnish (fi_FI), French (fr_XX), Gujarati (gu_IN), Hindi (hi_IN), Italian (it_IT), Japanese (ja_XX), Kazakh (kk_KZ), Korean (ko_KR), Lithuanian (lt_LT), Latvian (lv_LV), Burmese (my_MM), Nepali (ne_NP), Dutch (nl_XX), Romanian (ro_RO), Russian (ru_RU), Sinhala (si_LK), Turkish (tr_TR), Vietnamese (vi_VN), Chinese (zh_CN), Afrikaans (af_ZA), Azerbaijani (az_AZ), Bengali (bn_IN), Persian (fa_IR), Hebrew (he_IL), Croatian (hr_HR), Indonesian (id_ID), Georgian (ka_GE), Khmer (km_KH), Macedonian (mk_MK), Malayalam (ml_IN), Mongolian (mn_MN), Marathi (mr_IN), Polish (pl_PL), Pashto (ps_AF), Portuguese (pt_XX), Swedish (sv_SE), Swahili (sw_KE), Tamil (ta_IN), Telugu (te_IN), Thai (th_TH), Tagalog (tl_XX), Ukrainian (uk_UA), Urdu (ur_PK), Xhosa (xh_ZA), Galician (gl_ES), Slovene (sl_SI)

## BibTeX entry and citation info

```
@article{tang2020multilingual,
      title={Multilingual Translation with Extensible Multilingual Pretraining and Finetuning},
      author={Yuqing Tang and Chau Tran and Xian Li and Peng-Jen Chen and Naman Goyal and Vishrav Chaudhary and Jiatao Gu and Angela Fan},
      year={2020},
      eprint={2008.00401},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
config.json ADDED
@@ -0,0 +1,56 @@
{
  "_name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large",
  "_num_labels": 3,
  "activation_dropout": 0.0,
  "activation_function": "gelu",
  "add_bias_logits": false,
  "add_final_layer_norm": true,
  "architectures": [
    "MBartForConditionalGeneration"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 0,
  "classif_dropout": 0.0,
  "classifier_dropout": 0.0,
  "d_model": 1024,
  "decoder_attention_heads": 16,
  "decoder_ffn_dim": 4096,
  "decoder_layerdrop": 0.0,
  "decoder_layers": 12,
  "decoder_start_token_id": 2,
  "dropout": 0.1,
  "early_stopping": true,
  "encoder_attention_heads": 16,
  "encoder_ffn_dim": 4096,
  "encoder_layerdrop": 0.0,
  "encoder_layers": 12,
  "eos_token_id": 2,
  "forced_eos_token_id": 2,
  "gradient_checkpointing": false,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2"
  },
  "init_std": 0.02,
  "is_encoder_decoder": true,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2
  },
  "max_length": 200,
  "max_position_embeddings": 1024,
  "model_type": "mbart",
  "normalize_before": true,
  "normalize_embedding": true,
  "num_beams": 5,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 1,
  "scale_embedding": true,
  "static_position_embeddings": false,
  "transformers_version": "4.4.0.dev0",
  "use_cache": true,
  "vocab_size": 250054
}
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ee44fb4c607895f34b49c65091e62ba0b15b2cac5eac0c75f2c6c9dbce3cb10e
size 2444714899
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
size 5069051
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": "<mask>", "additional_special_tokens": ["ar_AR", "cs_CZ", "de_DE", "en_XX", "es_XX", "et_EE", "fi_FI", "fr_XX", "gu_IN", "hi_IN", "it_IT", "ja_XX", "kk_KZ", "ko_KR", "lt_LT", "lv_LV", "my_MM", "ne_NP", "nl_XX", "ro_RO", "ru_RU", "si_LK", "tr_TR", "vi_VN", "zh_CN", "af_ZA", "az_AZ", "bn_IN", "fa_IR", "he_IL", "hr_HR", "id_ID", "ka_GE", "km_KH", "mk_MK", "ml_IN", "mn_MN", "mr_IN", "pl_PL", "ps_AF", "pt_XX", "sv_SE", "sw_KE", "ta_IN", "te_IN", "th_TH", "tl_XX", "uk_UA", "ur_PK", "xh_ZA", "gl_ES", "sl_SI"]}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"src_lang": null, "tgt_lang": null, "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": "<s>", "tokenizer_file": null, "model_max_length": 1024, "name_or_path": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large", "special_tokens_map_file": "/home/suraj/projects/mbart-50/hf_models/mbart-50-large/special_tokens_map.json"}