multilinguality: monolingual
---

## Dataset Description
- **Homepage:** [ClArTTS](http://www.clartts.com/)
- **Paper:** [ClArTTS: An Open-Source Classical Arabic Text-to-Speech Corpus](https://www.isca-archive.org/interspeech_2023/kulkarni23_interspeech.pdf)

### Dataset Summary

We present a speech corpus for Classical Arabic Text-to-Speech (ClArTTS) to support the development of end-to-end TTS systems for Arabic. The speech is extracted from a LibriVox audiobook, which is then processed, segmented, and manually transcribed and annotated. The final ClArTTS corpus contains about 12 hours of speech from a single male speaker, sampled at 40,100 Hz.
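
For pipelines that expect a standard sampling rate, the audio can be resampled on the fly when the corpus is loaded with the `datasets` library. A minimal sketch, assuming the corpus is hosted on the Hugging Face Hub under the placeholder repo ID `MBZUAI/ClArTTS`:

```python
from datasets import load_dataset, Audio

# Placeholder repo ID; adjust to the actual Hub location of the corpus.
clartts = load_dataset("MBZUAI/ClArTTS")

# Resample the audio column from its native 40,100 Hz to 16 kHz on the fly.
clartts = clartts.cast_column("audio", Audio(sampling_rate=16_000))
```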

## Dataset Structure

### Data Instances

A typical data point comprises the name of the audio file (`file`), its transcription (`text`), and the audio as an array (`audio`), together with some additional information such as the sampling rate and the audio duration.

```
DatasetDict({
    ...
})
```
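
As a rough illustration of these fields, one example can be inspected as follows; the repo ID is again a placeholder, and field names other than `file`, `text`, and `audio` may differ from what is shown here:

```python
from datasets import load_dataset

clartts = load_dataset("MBZUAI/ClArTTS")  # placeholder repo ID

sample = clartts["train"][0]
print(sample["file"])                # name of the audio file
print(sample["text"])                # transcription
audio = sample["audio"]              # decoded by the Audio feature
print(audio["sampling_rate"])        # native sampling rate of the recording
print(len(audio["array"]) / audio["sampling_rate"])  # duration in seconds
```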

### Data Splits

The data is divided into two splits:

- train: 9,500 audio samples
- test: 205 audio samples

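
These counts can be checked after loading; a small sketch using the same placeholder repo ID as above:

```python
from datasets import load_dataset

clartts = load_dataset("MBZUAI/ClArTTS")  # placeholder repo ID
print({split: len(ds) for split, ds in clartts.items()})
# Per the card: 9,500 train examples and 205 test examples.
```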
### Citation Information