---
license: cc-by-4.0
dataset_info:
  features:
  - name: text
    dtype: string
  - name: file
    dtype: string
  - name: audio
    sequence: float64
  - name: sampling_rate
    dtype: int64
  - name: duration
    dtype: float64
  splits:
  - name: train
    num_bytes: 12889189484
    num_examples: 9500
  - name: test
    num_bytes: 283646282
    num_examples: 205
  download_size: 3201049372
  dataset_size: 13172835766
task_categories:
- text-to-speech
- text-to-audio
language:
- ar
pretty_name: ClArTTS
size_categories:
- 1K<n<10K
multilinguality: monolingual
---
## Dataset Summary
We present ClArTTS, a speech corpus for Classical Arabic Text-to-Speech, to support the development of end-to-end TTS systems for Arabic. The speech is extracted from a LibriVox audiobook, then processed, segmented, and manually transcribed and annotated. The final ClArTTS corpus contains about 12 hours of speech from a single male speaker, sampled at 40100 Hz.
## Dataset Description
- **Homepage:** [ClArTTS](http://www.clartts.com/)
- **Paper:** [ClArTTS: An Open-Source Classical Arabic Text-to-Speech Corpus](https://www.isca-archive.org/interspeech_2023/kulkarni23_interspeech.pdf)
## Dataset Structure
A typical data point comprises the name of the audio file (`file`), its transcription (`text`), and the audio waveform as an array of floats (`audio`), together with the sampling rate (`sampling_rate`) and the audio duration (`duration`). A minimal loading example follows the structure printout below.
```
DatasetDict({
train: Dataset({
features: ['text', 'file', 'audio', 'sampling_rate', 'duration'],
num_rows: 9500
})
test: Dataset({
features: ['text', 'file', 'audio', 'sampling_rate', 'duration'],
num_rows: 205
})
})
```
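For illustration, here is a minimal sketch of loading the corpus with the 🤗 Datasets library and inspecting one example. The repository id `herwoww/ClArTTS` is an assumption; substitute the actual Hub path if it differs.

```python
from datasets import load_dataset

# Repository id below is an assumption; replace it with the actual Hub path if needed.
ds = load_dataset("herwoww/ClArTTS")

sample = ds["train"][0]
print(sample["file"])           # audio file name
print(sample["text"])           # Classical Arabic transcription
print(sample["sampling_rate"])  # e.g. 40100
print(sample["duration"])       # clip duration
print(len(sample["audio"]))     # raw waveform stored as a sequence of floats
```

Because the waveform is stored as a plain float sequence rather than an `Audio` feature, it can be converted to a NumPy array (e.g. `np.asarray(sample["audio"])`) before being fed to a TTS training or vocoder pipeline.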
## Citation Information
```
@inproceedings{kulkarni2023clartts,
author={Ajinkya Kulkarni and Atharva Kulkarni and Sara Abedalmon'em Mohammad Shatnawi and Hanan Aldarmaki},
title={ClArTTS: An Open-Source Classical Arabic Text-to-Speech Corpus},
year={2023},
booktitle={Proc. INTERSPEECH 2023},
pages={5511--5515},
doi={10.21437/Interspeech.2023-2224}
}
```