---
dataset_info:
  features:
  - name: audio
    struct:
    - name: bytes
      dtype: binary
    - name: path
      dtype: string
  - name: duration
    dtype: float64
  - name: text
    dtype: string
  - name: reciter
    dtype: string
  splits:
  - name: train
    num_bytes: 2315694478.08891
    num_examples: 4000
  - name: test
    num_bytes: 868385429.2833413
    num_examples: 1500
  download_size: 3081675303
  dataset_size: 3184079907.3722515
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
task_categories:
- automatic-speech-recognition
language:
- ar
tags:
- quran
- ASR
- Islam
- tarteel
- verses
- arabic
- religion
size_categories:
- 1K<n<10K
---

## Dataset Description

- **Homepage:** https://huggingface.co/datasets/tarteel-ai/everyayah
- **Repository:** [tarteel-ai/everyayah](https://huggingface.co/datasets/tarteel-ai/everyayah)

## Data Instances

A typical data point comprises the audio file (`audio`) and its transcription (`text`). The duration is given in seconds, and the reciter is given in `reciter`.

An example from the dataset is:

```python
{
    'audio': {
        'path': None,
        'array': array([ 0.        ,  0.        ,  0.        , ...,
                        -0.00057983, -0.00085449, -0.00061035]),
        'sampling_rate': 16000
    },
    'duration': 6.478375,
    'text': 'بِسْمِ اللَّهِ الرَّحْمَنِ الرَّحِيمِ',
    'reciter': 'abdulsamad'
}
```

## Data Fields

- `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, i.e. `dataset[0]["audio"]` should always be preferred over `dataset["audio"][0]`.
- `text`: The transcription of the audio file.
- `duration`: The duration of the audio file, in seconds.
- `reciter`: The reciter of the verses.

## Data Splits

| Train | Test |
|------:|-----:|
| 4000  | 1500 |
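
## Usage

The snippet below is a minimal loading sketch, not part of the original card: it assumes the standard `datasets` library and uses a placeholder Hub repository id. It illustrates the point made under Data Fields, namely that querying the sample index before the `"audio"` column keeps decoding cheap.

```python
from datasets import Audio, load_dataset

# Hypothetical repository id -- substitute this dataset's actual Hub id.
REPO_ID = "your-username/quran-asr"

# Load the train split (4000 examples) as declared in the card's configs.
ds = load_dataset(REPO_ID, split="train")

# If the `audio` column is stored as a raw struct (bytes/path), cast it to the
# Audio feature so that indexing returns a decoded array at 16 kHz.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# Query the sample index first, then the "audio" column:
# ds[0]["audio"] decodes a single file, whereas ds["audio"][0] decodes the whole column.
sample = ds[0]
audio = sample["audio"]

print(audio["sampling_rate"])                        # 16000
print(len(audio["array"]) / audio["sampling_rate"])  # ~ sample["duration"] seconds
print(sample["text"], sample["reciter"])
```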