---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
language:
- es
---

# Massively Multilingual Speech (MMS): Spanish Text-to-Speech

This repository contains the **Spanish (spa)** language text-to-speech (TTS) model checkpoint.

This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to provide speech technology across a diverse range of languages. You can find more details about the supported languages and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html), and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards.

## Model Details

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational autoencoder (VAE) comprising a posterior encoder, a decoder, and a conditional prior.

A set of spectrogram-based acoustic features is predicted by the flow-based module, which is formed of a Transformer-based text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers, much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows it to synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training. To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During inference, the text encodings are up-sampled based on the output of the duration predictor, and then mapped into the waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor, the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform twice.
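
Concretely, reproducibility comes from fixing PyTorch's RNG state before each generation. A minimal sketch of the seeding pattern, using plain `torch.manual_seed` (here `torch.randn` stands in for the model's internal stochastic sampling; `transformers.set_seed` achieves the same thing):

```python
import torch

torch.manual_seed(555)   # fix the RNG state before the first run
first = torch.randn(4)   # stands in for the stochastic sampling in VITS

torch.manual_seed(555)   # same seed restores the same RNG state
second = torch.randn(4)

print(torch.equal(first, second))  # True: identical seeds give identical samples
```

Re-seed immediately before each forward pass; seeding once at import time only makes the first generation reproducible.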

For the MMS project, a separate VITS checkpoint is trained for each language.
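
Judging from this repository's ID (`facebook/mms-tts-spa`) and the Hub search linked above, the per-language checkpoints follow the naming pattern `facebook/mms-tts-<iso>`, where `<iso>` is the language's ISO 639-3 code. A small illustrative helper (the function is ours, not part of any library):

```python
def mms_tts_checkpoint(iso_639_3: str) -> str:
    """Build the Hub model ID for an MMS-TTS language checkpoint."""
    return f"facebook/mms-tts-{iso_639_3}"

print(mms_tts_checkpoint("spa"))  # facebook/mms-tts-spa, the checkpoint in this repo
```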

## Usage

MMS-TTS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint, first install the latest version of the library:

```bash
pip install --upgrade transformers accelerate
```

Then, run inference with the following code snippet:

```python
from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("facebook/mms-tts-spa")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-spa")

text = "Hola, este es un ejemplo de texto en español."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs).waveform
```

The resulting waveform can be saved as a `.wav` file:

```python
import scipy.io.wavfile

# drop the batch dimension and convert to NumPy before writing
scipy.io.wavfile.write(
    "speech.wav",
    rate=model.config.sampling_rate,
    data=output.squeeze().cpu().numpy(),
)
```

Or displayed in a Jupyter Notebook / Google Colab:

```python
from IPython.display import Audio

Audio(output.squeeze().cpu().numpy(), rate=model.config.sampling_rate)
```

## BibTeX citation

This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:

```bibtex
@article{pratap2023mms,
  title={Scaling Speech Technology to 1,000+ Languages},
  author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
  journal={arXiv},
  year={2023}
}
```

## License

The model is licensed as **CC-BY-NC 4.0**.