---
dataset_info:
  features:
  - name: audio
    struct:
    - name: bytes
      dtype: binary
    - name: path
      dtype: string
  - name: duration
    dtype: float64
  - name: text
    dtype: string
  - name: reciter
    dtype: string
  splits:
  - name: train
    num_bytes: 2315694478.08891
    num_examples: 4000
  - name: test
    num_bytes: 868385429.2833413
    num_examples: 1500
  download_size: 3081675303
  dataset_size: 3184079907.3722515
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- automatic-speech-recognition
language:
- ar
tags:
- quran
- ASR
- Islam
- tarteel
- verses
- arabic
- religion
size_categories:
- 1K<n<10K
pretty_name: Quran_data_everyayah
---

## Dataset Details

This dataset is a shuffled subset of the tarteel-ai/everyayah dataset. It is a collection of Quranic verses recited by different reciters, together with their diacritized transcriptions.

## NOTE

Make sure you call `dataset.cast_column("audio", Audio(sampling_rate=16_000))` on the dataset to decode the `audio` column from raw bytes into the structure shown in Data Instances.
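
As a quick illustration, here is a minimal loading sketch using the `datasets` library; the repo id below is a placeholder, not this dataset's actual Hub id.

```python
from datasets import load_dataset, Audio

# Placeholder repo id: replace with this card's actual Hub id.
dataset = load_dataset("username/Quran_data_everyayah")

# Decode the raw audio bytes into {"path", "array", "sampling_rate"} dicts,
# resampled to 16 kHz as recommended above.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))

print(dataset["train"][0]["audio"]["sampling_rate"])  # 16000
```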

### Dataset Description

This subset was created because the original tarteel-ai/everyayah dataset is very large, which can cause problems during download and requires a lot of storage, whether on a local machine or a cloud service. The subset is still large enough to train models for tasks such as ASR and reciter classification.

- **Curated by:** abo_salah, Tarteel AI
- **Language(s) (NLP):** Arabic

### Dataset Sources

- **Repository:** [tarteel-ai/everyayah](https://huggingface.co/datasets/tarteel-ai/everyayah)

## Data Instances

A typical data point comprises the audio (`audio`) and its transcription (`text`). The `duration` field gives the length of the audio in seconds, and `reciter` identifies the reciter.

An example from the dataset is:

    {
      'audio': {
        'path': None,
        'array': array([ 0.        ,  0.        ,  0.        , ..., -0.00057983,
               -0.00085449, -0.00061035]),
        'sampling_rate': 16000
      },
      'duration': 6.478375,
      'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
      'reciter': 'abdulsamad'
    }

## Data Fields

- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column: `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]` (see the access sketch after this list).
- `text`: The transcription of the audio file.
- `duration`: The duration of the audio file in seconds.
- `reciter`: The reciter of the verses.
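
The sketch below shows the recommended access order; the repo id is again a placeholder, and the split name follows the configuration above.

```python
from datasets import load_dataset, Audio

# Placeholder repo id: replace with this card's actual Hub id.
train_ds = load_dataset("username/Quran_data_everyayah", split="train")
train_ds = train_ds.cast_column("audio", Audio(sampling_rate=16_000))

# Index first, then column: only this one example's audio is decoded.
sample = train_ds[0]
audio = sample["audio"]  # {"path": ..., "array": ..., "sampling_rate": 16000}
print(sample["text"], sample["reciter"], sample["duration"])
print(len(audio["array"]), audio["sampling_rate"])

# Avoid train_ds["audio"][0]: it decodes and resamples every audio file in the
# split before returning the first element, which is far slower.
```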

## Data Splits

| Train | Test |
|:-----:|:----:|
| 4000  | 1500 |
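
A small sketch (placeholder repo id) to confirm that the loaded splits match the sizes reported above:

```python
from datasets import load_dataset

# Placeholder repo id: replace with this card's actual Hub id.
ds = load_dataset("username/Quran_data_everyayah")

for split_name, split in ds.items():
    print(split_name, split.num_rows)
# Expected, per the table above:
#   train 4000
#   test 1500
```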