---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
- config_name: en
  data_files: en.json
  default: true
- config_name: en-xl
  data_files: en-xl.json
- config_name: ca
  data_files: ca.json
- config_name: de
  data_files: de.json
- config_name: es
  data_files: es.json
- config_name: el
  data_files: el.json
- config_name: fa
  data_files: fa.json
- config_name: fi
  data_files: fi.json
- config_name: fr
  data_files: fr.json
- config_name: it
  data_files: it.json
- config_name: pl
  data_files: pl.json
- config_name: pt
  data_files: pt.json
- config_name: ru
  data_files: ru.json
- config_name: sv
  data_files: sv.json
- config_name: uk
  data_files: uk.json
- config_name: zh
  data_files: zh.json
language:
- en
- ca
- de
- es
- el
- fa
- fi
- fr
- it
- pl
- pt
- ru
- sv
- uk
- zh
---

# Multilingual Phonemes 10K Alpha

### By [mrfakename](https://twitter.com/realmrfakename)

This dataset contains approximately 10,000 text-phoneme pairs for each supported language. It is intended only for training **open source** StyleTTS 2-related models.

## Languages

We support 15 languages, for a total of around 150,000 text-phoneme pairs. This count excludes the English-XL dataset, which adds 100K additional phonemized pairs.

* English (en)
* English XL (en-xl): 100K phonemized pairs, English-only
* Catalan (ca)
* German (de)
* Spanish (es)
* Greek (el)
* Persian (fa)
* Finnish (fi)
* French (fr)
* Italian (it)
* Polish (pl)
* Portuguese (pt)
* Russian (ru)
* Swedish (sv)
* Ukrainian (uk)
* Chinese (zh)

## License + Credits

Source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0. This dataset is likewise licensed under CC-BY-SA 3.0.

## Processing

We used the following process to preprocess the dataset:

1. Download data from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by language, selecting only the first Parquet file and naming it with the language code
2. Process it using the [Data Preprocessing Scripts (StyleTTS 2 Community members only)](https://huggingface.co/styletts2-community/data-preprocessing-scripts), modifying the code to work with each language
3. Script: Clean the text
4. Script: Remove ultra-short phrases
5. Script: Phonemize (an illustrative sketch appears at the end of this card)
6. Script: Save JSON
7. Upload the dataset

## Note

East Asian languages are experimental and in beta. We do not distinguish between Traditional and Simplified Chinese; the dataset consists mainly of Simplified Chinese. We recommend converting characters to Simplified Chinese during inference using a library such as `hanziconv` or `chinese-converter`, as sketched below.
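As a minimal sketch of that recommendation, here is one way to normalize input to Simplified Chinese with `hanziconv` before inference (`chinese-converter` provides a similar helper):

```python
from hanziconv import HanziConv

# Input text that may contain Traditional Chinese characters.
text = "漢語是聯合國的官方語言之一。"

# Normalize everything to Simplified Chinese before inference.
simplified = HanziConv.toSimplified(text)
print(simplified)  # 汉语是联合国的官方语言之一。
```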
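The preprocessing scripts themselves are available to StyleTTS 2 Community members only, so the snippet below is only a sketch of the phonemization step (step 5 above). It assumes the `phonemizer` package with the espeak backend, a common choice for StyleTTS 2 pipelines; the actual scripts may differ.

```python
from phonemizer import phonemize

# Hypothetical sentences after cleaning and length filtering (steps 3-4).
texts = ["Hello, world.", "This is a longer example sentence."]

# Requires the espeak-ng engine to be installed on the system.
phonemes = phonemize(
    texts,
    language="en-us",  # swap in the target language for other configs
    backend="espeak",
    strip=True,
    preserve_punctuation=True,
)
print(list(zip(texts, phonemes)))
```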
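To load a single language, pass its config name (listed above) to `datasets.load_dataset`. The repo id below is an assumption based on this card's location; adjust it if the dataset lives elsewhere.

```python
from datasets import load_dataset

# Repo id assumed from this card; each config name matches a language code.
ds = load_dataset("styletts2-community/multilingual-phonemes-10k-alpha", "zh")
print(ds)
print(ds["train"][0])  # assumes a "train" split of text-phoneme pairs
```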