akthangdz committed on
Commit
b65a577
1 Parent(s): dabde58

Upload 8 files

README.md ADDED
@@ -0,0 +1,99 @@
---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---

# Massively Multilingual Speech (MMS): Vietnamese Text-to-Speech

This repository contains the **Vietnamese (vie)** language text-to-speech (TTS) model checkpoint.

This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.

## Model Details

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) comprised of a posterior encoder, decoder, and conditional prior.

A set of spectrogram-based acoustic features is predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows it to
synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform (see the seeding example at the end of the Usage section).

For the MMS project, a separate VITS checkpoint is trained for each language.

## Usage

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:

```
pip install --upgrade transformers accelerate
```

Then, run inference with the following code snippet:

```python
from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-vie")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-vie")

text = "some example text in the Vietnamese language"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs).waveform
```

The resulting waveform can be saved as a `.wav` file:

```python
import scipy.io.wavfile

# `output` has shape (1, num_samples); squeeze it to a 1-D array before writing.
scipy.io.wavfile.write("synthesized_speech.wav", rate=model.config.sampling_rate, data=output.squeeze().cpu().numpy())
```

Or displayed in a Jupyter Notebook / Google Colab:

```python
from IPython.display import Audio

Audio(output.squeeze().cpu().numpy(), rate=model.config.sampling_rate)
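```

Because the duration predictor is stochastic, repeated runs on the same text produce slightly different audio; fixing the random seed before generation makes the output reproducible. Below is a minimal sketch, assuming the same checkpoint as above (the Vietnamese sample sentence and the seed value 555 are arbitrary choices; `set_seed` is the seeding helper exported by 🤗 Transformers):

```python
from transformers import VitsModel, AutoTokenizer, set_seed
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-vie")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-vie")

# Hypothetical example input; any Vietnamese text is handled the same way.
inputs = tokenizer("xin chào việt nam", return_tensors="pt")

set_seed(555)  # seed the RNGs driving the stochastic duration predictor and noise
with torch.no_grad():
    waveform_a = model(**inputs).waveform

set_seed(555)  # re-seeding with the same value reproduces the identical waveform
with torch.no_grad():
    waveform_b = model(**inputs).waveform

assert torch.allclose(waveform_a, waveform_b)
```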

## BibTeX citation

This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:

```
@article{pratap2023mms,
    title={Scaling Speech Technology to 1,000+ Languages},
    author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
    journal={arXiv},
    year={2023}
}
```

## License

The model is licensed as **CC-BY-NC 4.0**.
config.json ADDED
@@ -0,0 +1,82 @@
{
  "activation_dropout": 0.1,
  "architectures": ["VitsModel"],
  "attention_dropout": 0.1,
  "depth_separable_channels": 2,
  "depth_separable_num_layers": 3,
  "duration_predictor_dropout": 0.5,
  "duration_predictor_filter_channels": 256,
  "duration_predictor_flow_bins": 10,
  "duration_predictor_kernel_size": 3,
  "duration_predictor_num_flows": 4,
  "duration_predictor_tail_bound": 5.0,
  "ffn_dim": 768,
  "ffn_kernel_size": 3,
  "flow_size": 192,
  "hidden_act": "relu",
  "hidden_dropout": 0.1,
  "hidden_size": 192,
  "initializer_range": 0.02,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.1,
  "leaky_relu_slope": 0.1,
  "model_type": "vits",
  "noise_scale": 0.667,
  "noise_scale_duration": 0.8,
  "num_attention_heads": 2,
  "num_hidden_layers": 6,
  "num_speakers": 1,
  "posterior_encoder_num_wavenet_layers": 16,
  "prior_encoder_num_flows": 4,
  "prior_encoder_num_wavenet_layers": 4,
  "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]],
  "resblock_kernel_sizes": [3, 7, 11],
  "sampling_rate": 16000,
  "speaker_embedding_size": 0,
  "speaking_rate": 1.0,
  "spectrogram_bins": 513,
  "torch_dtype": "float32",
  "transformers_version": "4.33.0.dev0",
  "upsample_initial_channel": 512,
  "upsample_kernel_sizes": [16, 16, 4, 4],
  "upsample_rates": [8, 8, 2, 2],
  "use_bias": true,
  "use_stochastic_duration_prediction": true,
  "vocab_size": 95,
  "wavenet_dilation_rate": 1,
  "wavenet_dropout": 0.0,
  "wavenet_kernel_size": 5,
  "window_size": 4
}
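
For reference, the hyperparameters above can be read back programmatically once the checkpoint is downloaded. This is a small sketch, assuming 🤗 Transformers ≥ 4.33 (the attribute names mirror the keys of the `VitsConfig` shown above):

```python
from transformers import VitsConfig

# Loads the config.json above directly from the Hub checkpoint.
config = VitsConfig.from_pretrained("facebook/mms-tts-vie")

print(config.sampling_rate)  # 16000 — sample rate of the generated waveform
print(config.vocab_size)     # 95 — matches the character vocabulary in vocab.json
print(config.num_speakers)   # 1 — single-speaker checkpoint (speaker_embedding_size is 0)
```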
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:55ded90c3e57dc2814fa2cdfe3f9e7a5c28e1223b06c0a260a4495b080762ffd
size 145271288
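
The entry above is a Git LFS pointer: the weights themselves are fetched by LFS, and the `oid` records their SHA-256. A small sketch for verifying a locally downloaded copy against that hash (the local file path is an assumption):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large checkpoints need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# oid recorded in the pointer file above.
expected = "55ded90c3e57dc2814fa2cdfe3f9e7a5c28e1223b06c0a260a4495b080762ffd"
print(sha256_of("model.safetensors") == expected)  # assumed local path
```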
preprocessor_config.json ADDED
@@ -0,0 +1,12 @@
{
  "add_blank": true,
  "clean_up_tokenization_spaces": true,
  "is_uroman": false,
  "language": "vie",
  "model_max_length": 1000000000000000019884624838656,
  "normalize": true,
  "pad_token": "ụ",
  "phonemize": false,
  "tokenizer_class": "VitsTokenizer",
  "unk_token": "<unk>"
}
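
These flags configure the `VitsTokenizer`: with `normalize` the input is lower-cased and restricted to characters present in `vocab.json`, and with `add_blank` a blank token (the pad character "ụ", id 0) is interleaved between characters before they reach the model. A small sketch to inspect the effect (the sample string is arbitrary, and the exact normalization details may vary with the Transformers version):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-vie")

# Encode a short string and look at the resulting character-level ids.
encoding = tokenizer("Xin chào")
print(encoding.input_ids)

# Decoding the ids back reveals the normalized, blank-interleaved character sequence.
print(tokenizer.decode(encoding.input_ids))
```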
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:aab7d240fb0b6c83474a15affcb70194742af8dbbf79083deb6684e162ff0cb5
size 145432498
special_tokens_map.json ADDED
@@ -0,0 +1,4 @@
{
  "pad_token": "ụ",
  "unk_token": "<unk>"
}
vocab.json ADDED
@@ -0,0 +1,97 @@
{
  " ": 84,
  "'": 44,
  "-": 94,
  "2": 52,
  "_": 17,
  "a": 29,
  "b": 88,
  "c": 13,
  "d": 63,
  "e": 54,
  "g": 21,
  "h": 85,
  "i": 30,
  "k": 79,
  "l": 82,
  "m": 68,
  "n": 90,
  "o": 31,
  "p": 78,
  "q": 47,
  "r": 92,
  "s": 2,
  "t": 80,
  "u": 8,
  "v": 14,
  "x": 1,
  "y": 75,
  "à": 35,
  "á": 77,
  "â": 12,
  "ã": 51,
  "è": 3,
  "é": 58,
  "ê": 91,
  "ì": 4,
  "í": 74,
  "ò": 45,
  "ó": 56,
  "ô": 28,
  "õ": 25,
  "ù": 38,
  "ú": 76,
  "ý": 37,
  "ă": 89,
  "đ": 55,
  "ĩ": 23,
  "ũ": 70,
  "ơ": 7,
  "ư": 9,
  "ạ": 22,
  "ả": 24,
  "ấ": 81,
  "ầ": 57,
  "ẩ": 49,
  "ẫ": 67,
  "ậ": 87,
  "ắ": 65,
  "ằ": 10,
  "ẳ": 27,
  "ẵ": 42,
  "ặ": 5,
  "ẹ": 72,
  "ẻ": 20,
  "ẽ": 66,
  "ế": 60,
  "ề": 40,
  "ể": 69,
  "ễ": 41,
  "ệ": 15,
  "ỉ": 71,
  "ị": 53,
  "ọ": 48,
  "ỏ": 43,
  "ố": 46,
  "ồ": 16,
  "ổ": 34,
  "ỗ": 73,
  "ộ": 19,
  "ớ": 59,
  "ờ": 36,
  "ở": 83,
  "ỡ": 26,
  "ợ": 93,
  "ụ": 0,
  "ủ": 61,
  "ứ": 6,
  "ừ": 32,
  "ử": 62,
  "ữ": 64,
  "ự": 50,
  "ỳ": 11,
  "ỵ": 18,
  "ỷ": 86,
  "ỹ": 33,
  "–": 39
}
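
The vocabulary is character-level and covers the Vietnamese alphabet with its diacritics (plus a few punctuation symbols); characters outside this set cannot be synthesized directly. A small self-contained sketch for checking coverage of an input string against the file above (the sample text is an arbitrary example):

```python
import json

# Check that every character of an input string appears in vocab.json.
with open("vocab.json", encoding="utf-8") as f:
    vocab = json.load(f)

text = "xin chào việt nam"
missing = sorted({ch for ch in text.lower() if ch not in vocab})
print(missing if missing else "all characters covered")
```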