---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---
# Massively Multilingual Speech (MMS): Text-to-Speech Models
This repository contains the **Korean (kor)** text-to-speech (TTS) model checkpoint.
This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, which aims to
provide speech technology for a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage
Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html).
## Usage
Using this checkpoint from Hugging Face Transformers:
```python
import torch
from transformers import VitsModel, VitsMmsTokenizer

# Load the Korean MMS TTS checkpoint and its tokenizer
model = VitsModel.from_pretrained("Matthijs/mms-tts-kor")
tokenizer = VitsMmsTokenizer.from_pretrained("Matthijs/mms-tts-kor")

# The input text must already be romanized (see the note below)
text = "some example text in the Korean language"
inputs = tokenizer(text, return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    output = model(**inputs)

# Play the generated waveform in a notebook (16 kHz sampling rate)
from IPython.display import Audio
Audio(output.audio[0], rate=16000)
```
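Outside a notebook, you can write the generated audio to disk instead of playing it inline. The sketch below assumes `scipy` is installed and that `output.audio` holds the waveform tensor as in the snippet above; the output filename is illustrative.

```python
import scipy.io.wavfile

# Convert the waveform tensor to a NumPy array and write a 16 kHz WAV file
waveform = output.audio[0].numpy()
scipy.io.wavfile.write("mms_tts_kor.wav", rate=16000, data=waveform)
```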
Note: For this checkpoint, the input text must first be romanized (converted to the Latin alphabet) using the [uroman](https://github.com/isi-nlp/uroman) tool, as sketched below.
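One way to romanize the text is to call the uroman Perl script directly. This is a minimal sketch, assuming Perl is available and the uroman repository has been cloned locally; the `uroman_dir` path and the `romanize` helper are illustrative, not part of this model's API.

```python
import subprocess

def romanize(text: str, uroman_dir: str = "./uroman") -> str:
    """Romanize text with the uroman Perl script (path is an assumption)."""
    result = subprocess.run(
        ["perl", f"{uroman_dir}/bin/uroman.pl", "-l", "kor"],
        input=text,
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout.strip()

# Romanize Korean text before passing it to the tokenizer
text = romanize("안녕하세요")
inputs = tokenizer(text, return_tensors="pt")
```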
## Model credits
This model was developed by Vineel Pratap et al. and is licensed as **CC-BY-NC 4.0**.
```bibtex
@article{pratap2023mms,
    title={Scaling Speech Technology to 1,000+ Languages},
    author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
    journal={arXiv},
    year={2023}
}
```