---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
multilinguality:
- multilingual
language:
- sn
- ln
license: cc-by-4.0
---

# Dataset Card for the Image, Text, and Voice Dataset

## Dataset Description

### Dataset Summary

Each example in this dataset consists of a JPEG image, a corresponding WAV audio file describing the image, and, when available, a transcription of the audio. The Shona subset contains a total of 574.16 hours of audio, of which 100 hours are transcribed and the remaining 474.16 hours are not. The Lingala subset contains 517.13 hours of audio, of which 100.98 hours are transcribed and 416.15 hours are not.

### Languages

```
Shona, Lingala
```

## How to use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive with a single call to the `load_dataset` function. To download a configuration, specify the language code ("sn" for Shona, "ln" for Lingala):

```python
from datasets import load_dataset

data = load_dataset("DigitalUmuganda/AfriVoice", "sn")
```

## Dataset Structure

### Data Instances

```python
{'creator': 'digital_umuganda',
 'project_name': 'shona_data_collection',
 'speaker_id': '2Eud8lyLlsMcciYhmlkwVRtBwi82',
 'audio_path': '/root/.cache/huggingface/datasets/downloads/extracted/9347eb035e3ae38aaf793efa152ba1c93a4336471afce2bbd00ac8c0f67e9066/small_data/audio/I7L1YJVKIRL4.wav',
 'image_path': '/root/.cache/huggingface/datasets/downloads/extracted/9347eb035e3ae38aaf793efa152ba1c93a4336471afce2bbd00ac8c0f67e9066/small_data/image/I7L1YJVKIRL4.jpeg',
 'transcription': 'Varume vaviri vari kukandirana bhora. Varume ava vakapfeka zvipika zvine ruvara rutema neruchena. Zvikabudura zvine ruvara rutema. Bhora ravanokandirana rine ruvara rweyero neruchena nerwebhuruu. Vari kutambira munhandare ine ivhu. Kumashure kwavo kwakagara vanhu.',
 'locale': 'sn_ZW',
 'gender': 'Female',
 'age': ' ',
 'year': '2023'}
```

### Data Fields

`creator` (`string`): The entity that collected the recording (`digital_umuganda` in the example above)

`project_name` (`string`): Name of the data-collection project

`speaker_id` (`string`): An id for the speaker who made the recording

`audio_path` (`string`): The path to the audio file

`image_path` (`string`): The path to the image file

`transcription` (`string`): The transcription of the spoken image description, when available

`locale` (`string`): The locale of the speaker

`gender` (`string`): The gender of the speaker

`age` (`string`): The age of the speaker

`year` (`string`): Year of recording

### Data Splits

The data is not yet split; to access it, you must specify the `train` split. The dataset will be split into train, dev, and test sets in the future.
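As a minimal sketch of accessing the data through the single `train` split, using only the configuration name and field names shown above:

```python
from datasets import load_dataset

# Load the Shona configuration; all examples currently live in the "train" split.
data = load_dataset("DigitalUmuganda/AfriVoice", "sn")

# Inspect the first example.
sample = data["train"][0]
print(sample["transcription"])                     # spoken image description (may be empty if untranscribed)
print(sample["audio_path"], sample["image_path"])  # local paths to the WAV and JPEG files
```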