To use certain source datasets, you must also complete the access forms on the specific dataset pages:
- Common Voice: https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0
Dataset Card for Polish ASR BIGOS corpora
Dataset Summary
The BIGOS (Benchmark Intended Grouping of Open Speech) corpora aim to simplify access to and use of publicly available ASR speech datasets for Polish.
Supported Tasks and Leaderboards
BIGOS V2 applications:
- Evaluation of 10 commercial and 15 freely available systems for Polish - paper
- Interactive Polish ASR leaderboard
- The open PolEval Polish ASR challenge, which uses the BIGOS V2 and PELCRA for BIGOS datasets.
Note: BIGOS V1 was used to evaluate 3 commercial and 5 freely available systems (paper).
Languages
Polish
Dataset Structure
The dataset consists of audio recordings in WAV format with corresponding metadata.
The audio and metadata can be used in a raw format (TSV) or via the Hugging Face datasets library.
References for the test split will only become available after the completion of the 2024 PolEval challenge.
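Since the metadata also ships as raw TSV, it can be parsed with Python's standard csv module. A minimal sketch; the file path and column names below are illustrative assumptions, so check the release for the actual layout:

```python
import csv

def read_metadata(tsv_path):
    """Parse a BIGOS-style metadata TSV into a list of dicts.

    The exact file name and columns are hypothetical; consult the
    release for the real schema.
    """
    with open(tsv_path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f, delimiter="\t"))

# Hypothetical usage:
# rows = read_metadata("metadata.tsv")
# print(rows[0]["audioname"], rows[0]["ref_orig"])
```

Using `csv.DictReader` keeps each row keyed by the header names, which matches the field list in the next section.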
Data Instances
The train set consists of 82 025 samples, the dev set of 14 254 samples, and the test set of 14 993 samples.
Data Fields
Available fields:
- audioname - file identifier
- split - test, validation or train split
- dataset - source dataset identifier
- ref_orig - original transcription of the audio file
- audio - HF dataset object with binary representation of the audio file
- samplingrate_orig - sampling rate of the original recording
- sampling_rate - sampling rate of the recording in the release
- audio_duration_samples - duration of the recording in samples
- audio_duration_seconds - duration of the recording in seconds
- audiopath_bigos - relative filepath to the audio file extracted from the tar.gz archive
- audiopath_local - absolute filepath to the audio file extracted with the build script
- speaker_gender - gender (sex) of the speaker extracted from the source metadata (N/A if not available)
- speaker_age - age group of the speaker (in Common Voice format) extracted from the source (N/A if not available)
- utt_length_words - length of the utterance in words
- utt_length_chars - length of the utterance in characters
- speech_rate_words - ratio of words to recording duration
- speech_rate_chars - ratio of characters to recording duration
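The duration and rate fields are simple derivations of the base fields. A sketch of how they relate, using the field names above (the example values are hypothetical):

```python
def derived_fields(row):
    """Recompute the derived fields from the base ones.

    Mirrors the definitions in the field list: duration in seconds is
    samples / sampling rate, and the speech rates are words (or
    characters) per second of audio.
    """
    seconds = row["audio_duration_samples"] / row["sampling_rate"]
    words = len(row["ref_orig"].split())
    chars = len(row["ref_orig"])
    return {
        "audio_duration_seconds": seconds,
        "utt_length_words": words,
        "utt_length_chars": chars,
        "speech_rate_words": words / seconds,
        "speech_rate_chars": chars / seconds,
    }

# Hypothetical 2-second clip at 16 kHz:
# derived_fields({"audio_duration_samples": 32000,
#                 "sampling_rate": 16000,
#                 "ref_orig": "dzień dobry"})
```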
Data Splits
The train split contains recordings intended for training. The validation split contains recordings for validation during the training procedure. The test split contains recordings intended for evaluation only; references for the test split are not available until the completion of the 2024 PolEval challenge.
Subset | train | validation | test |
---|---|---|---|
fair-mls-20 | 25 042 | 511 | 519 |
google-fleurs-22 | 2 841 | 338 | 758 |
mailabs-corpus_librivox-19 | 11 834 | 1 527 | 1 501 |
mozilla-common_voice_15-23 | 19 119 | 8 895 | 8 896 |
pjatk-clarin_studio-15 | 10 999 | 1 407 | 1 404 |
pjatk-clarin_mobile-15 | 2 861 | 242 | 392 |
polyai-minds14-21 | 462 | 47 | 53 |
pwr-maleset-unk | 3 783 | 478 | 477 |
pwr-shortwords-unk | 761 | 86 | 92 |
pwr-viu-unk | 2 146 | 290 | 267 |
pwr-azon_read-20 | 1 820 | 382 | 586 |
pwr-azon_spont-20 | 357 | 51 | 48 |
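Because each row carries its source in the dataset field, a single subset from the table above can be selected with a plain filter. A minimal sketch over rows already loaded as dicts (the Hugging Face datasets library's .filter method can apply the same predicate):

```python
def select_subset(rows, subset_id):
    """Keep only rows originating from one source dataset,
    e.g. subset_id="pwr-azon_read-20" (IDs as in the table above)."""
    return [r for r in rows if r["dataset"] == subset_id]

# Hypothetical usage:
# fleurs_only = select_subset(rows, "google-fleurs-22")
```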
Dataset Creation
Curation Rationale
The Polish ASR Speech Data Catalog was used to identify suitable datasets that could be repurposed and included in the BIGOS corpora.
The following mandatory criteria were considered:
- Dataset must be downloadable.
- The license must allow for free, noncommercial use.
- Transcriptions must be available and align with the recordings.
- The sampling rate of audio recordings must be at least 8 kHz.
- Audio must be encoded with at least 16 bits per sample.
Recordings which either lacked transcriptions or were too short to be useful for training or evaluation were removed during curation.
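The audio-quality criteria above can be checked mechanically. A minimal sketch using Python's standard wave module, with the thresholds taken from the list above (this is an illustration, not the actual curation script):

```python
import wave

def meets_audio_criteria(wav_path):
    """Return True if a WAV file has a sampling rate of at least
    8 kHz and at least 16 bits (2 bytes) per sample."""
    with wave.open(wav_path, "rb") as w:
        return w.getframerate() >= 8000 and w.getsampwidth() >= 2
```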
Source Data
Twelve datasets meeting these criteria were chosen as sources for the BIGOS corpora.
- The Common Voice dataset version 15 (mozilla-common_voice_15-23)
- The Multilingual LibriSpeech (MLS) dataset (fair-mls-20)
- The Clarin Studio Corpus (pjatk-clarin_studio-15)
- The Clarin Mobile Corpus (pjatk-clarin_mobile-15)
- The Jerzy Sas PWR datasets from Politechnika Wrocławska (pwr-viu-unk, pwr-shortwords-unk, pwr-maleset-unk). More info here
- The Munich AI Labs Speech corpus (mailabs-corpus_librivox-19)
- The AZON Read and Spontaneous Speech Corpora (pwr-azon_spont-20, pwr-azon_read-20) More info here
- The Google FLEURS dataset (google-fleurs-22)
- The PolyAI minds14 dataset (polyai-minds14-21)
Initial Data Collection and Normalization
Source text and audio files were extracted and encoded in a unified format.
Dataset-specific transcription norms are preserved, including punctuation and casing.
If an original dataset did not provide train, dev, and test splits, the splits were generated pseudorandomly during curation.
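For sources without official splits, pseudorandom splitting with a fixed seed keeps the curation reproducible. A sketch of the idea; the seed and the 80/10/10 ratios are illustrative assumptions, not the actual curation parameters:

```python
import random

def make_splits(ids, seed=42, dev_frac=0.1, test_frac=0.1):
    """Deterministically shuffle recording IDs and cut them into
    train/validation/test. Seed and ratios are illustrative."""
    ids = sorted(ids)                  # fix the starting order
    random.Random(seed).shuffle(ids)   # seeded, hence reproducible
    n = len(ids)
    n_dev, n_test = int(n * dev_frac), int(n * test_frac)
    return {
        "test": ids[:n_test],
        "validation": ids[n_test:n_test + n_dev],
        "train": ids[n_test + n_dev:],
    }
```

Because the shuffle is seeded and the input order is fixed by sorting, rerunning the curation yields identical splits.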
Who are the source language producers?
- Clarin corpora - Polish-Japanese Academy of Information Technology
- Common Voice - Mozilla Foundation
- Multilingual LibriSpeech - Facebook AI Research
- Jerzy Sas and AZON datasets - Politechnika Wrocławska
- FLEURS - Google
- Minds14 - PolyAI London
Please refer to the BIGOS V1 paper for more details.
If you use BIGOS, please cite the data curator as well as the original authors:
@misc {amu_cai_pl_asr_bigos_v2,
author = { {Michał Junczyk} },
title = { pl-asr-bigos-v2 (Revision 37cc976) },
year = 2024,
url = { https://huggingface.co/datasets/amu-cai/pl-asr-bigos-v2 },
doi = { 10.57967/hf/2353 },
publisher = { Hugging Face }
}
@inproceedings{Ardila2020,
abstract = {The Common Voice corpus is a massively-multilingual collection of transcribed speech intended for speech technology research and development. Common Voice is designed for Automatic Speech Recognition purposes but can be useful in other domains (e.g. language identification). To achieve scale and sustainability, the Common Voice project employs crowdsourcing for both data collection and data validation. The most recent release includes 29 languages, and as of November 2019 there are a total of 38 languages collecting data. Over 50,000 individuals have participated so far, resulting in 2,500 hours of collected audio. To our knowledge this is the largest audio corpus in the public domain for speech recognition, both in terms of number of hours and number of languages. As an example use case for Common Voice, we present speech recognition experiments using Mozilla's DeepSpeech Speech-to-Text toolkit. By applying transfer learning from a source English model, we find an average Character Error Rate improvement of 5.99 ± 5.48 for twelve target languages (German, French, Italian, Turkish, Catalan, Slovenian, Welsh, Irish, Breton, Tatar, Chuvash, and Kabyle). For most of these languages, these are the first ever published results on end-to-end Automatic Speech Recognition.},
author = {Rosana Ardila and Megan Branson and Kelly Davis and Michael Kohler and Josh Meyer and Michael Henretty and Reuben Morais and Lindsay Saunders and Francis Tyers and Gregor Weber},
city = {Marseille, France},
editor = {Nicoletta Calzolari and Frédéric Béchet and Philippe Blache and Khalid Choukri and Christopher Cieri and Thierry Declerck and Sara Goggi and Hitoshi Isahara and Bente Maegaard and Joseph Mariani and Hélène Mazo and Asuncion Moreno and Jan Odijk and Stelios Piperidis},
isbn = {979-10-95546-34-4},
booktitle = {Proceedings of the Twelfth Language Resources and Evaluation Conference},
month = {5},
pages = {4218-4222},
publisher = {European Language Resources Association},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
url = {https://aclanthology.org/2020.lrec-1.520},
year = {2020},
}
@article{Pratap2020,
abstract = {This paper introduces Multilingual LibriSpeech (MLS) dataset, a large multilingual corpus suitable for speech research. The dataset is derived from read audiobooks from LibriVox and consists of 8 languages, including about 44.5K hours of English and a total of about 6K hours for other languages. Additionally, we provide Language Models (LM) and baseline Automatic Speech Recognition (ASR) models and for all the languages in our dataset. We believe such a large transcribed dataset will open new avenues in ASR and Text-To-Speech (TTS) research. The dataset will be made freely available for anyone at http://www.openslr.org.},
author = {Vineel Pratap and Qiantong Xu and Anuroop Sriram and Gabriel Synnaeve and Ronan Collobert},
doi = {10.21437/Interspeech.2020-2826},
keywords = {Index Terms,multilingual,speech recognition},
month = {12},
title = {MLS: A Large-Scale Multilingual Dataset for Speech Research},
url = {http://arxiv.org/abs/2012.03411 http://dx.doi.org/10.21437/Interspeech.2020-2826},
year = {2020},
}
@article{Conneau2022,
abstract = {We introduce FLEURS, the Few-shot Learning Evaluation of Universal Representations of Speech benchmark. FLEURS is an n-way parallel speech dataset in 102 languages built on top of the machine translation FLoRes-101 benchmark, with approximately 12 hours of speech supervision per language. FLEURS can be used for a variety of speech tasks, including Automatic Speech Recognition (ASR), Speech Language Identification (Speech LangID), Translation and Retrieval. In this paper, we provide baselines for the tasks based on multilingual pre-trained models like mSLAM. The goal of FLEURS is to enable speech technology in more languages and catalyze research in low-resource speech understanding.},
author = {Alexis Conneau and Min Ma and Simran Khanuja and Yu Zhang and Vera Axelrod and Siddharth Dalmia and Jason Riesa and Clara Rivera and Ankur Bapna},
month = {5},
title = {FLEURS: Few-shot Learning Evaluation of Universal Representations of Speech},
year = {2022},
}
@misc{Korzinek2016,
author = {Danijel Koržinek and Krzysztof Marasek and Łukasz Brocki},
city = {Aix-en-Provence},
month = {10},
title = {Polish Read Speech Corpus for Speech Tools and Services},
url = {http://clarin-pl.eu},
year = {2016},
}
@article{Gerz2021,
abstract = {We present a systematic study on multilingual and cross-lingual intent detection from spoken data. The study leverages a new resource put forth in this work, termed MInDS-14, a first training and evaluation resource for the intent detection task with spoken data. It covers 14 intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties. Our key results indicate that combining machine translation models with state-of-the-art multilingual sentence encoders (e.g., LaBSE) can yield strong intent detectors in the majority of target languages covered in MInDS-14, and offer comparative analyses across different axes: e.g., zero-shot versus few-shot learning, translation direction, and impact of speech recognition. We see this work as an important step towards more inclusive development and evaluation of multilingual intent detectors from spoken data, in a much wider spectrum of languages compared to prior work.},
author = {Daniela Gerz and Pei-Hao Su and Razvan Kusztos and Avishek Mondal and Michał Lis and Eshan Singhal and Nikola Mrkšić and Tsung-Hsien Wen and Ivan Vulić},
month = {4},
title = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
year = {2021},
}
Annotations
Annotation process
The current release contains the original transcriptions. Manual transcription of subsets and the release of a diagnostic dataset are planned for subsequent releases.
Who are the annotators?
Depends on the source dataset.
Personal and Sensitive Information
This corpus does not contain PII or sensitive information. All speaker IDs are anonymized.
Considerations for Using the Data
Social Impact of Dataset
To be updated.
Discussion of Biases
To be updated.
Other Known Limitations
The initial release contains only a subset of the recordings from the original datasets.
Additional Information
Dataset Curators
Original authors of the source datasets - please refer to source-data for details.
Michał Junczyk ([email protected]) - curator of BIGOS corpora.
Licensing Information
The BIGOS corpora are available under the Creative Commons Attribution-ShareAlike 4.0 license.
The original datasets used for the curation of BIGOS have specific terms of use that must be understood and agreed to before use. The license types and the datasets to which they apply are listed below:
- Creative Commons Zero (CC0), which applies to Common Voice
- Creative Commons Attribution-ShareAlike 4.0, which applies to Clarin Cyfry and the AZON acoustic speech resources corpus
- Creative Commons Attribution 3.0, which applies to the CLARIN Mobile database, the CLARIN Studio database, the PELCRA Spelling and Numbers Voice Database, and the FLEURS dataset
- Creative Commons Attribution 4.0, which applies to Multilingual LibriSpeech and PolyAI Minds14
- Proprietary license of the Munich AI Labs dataset
- Public Domain Mark, which applies to the PWR datasets
Citation Information
Please cite using the BibTeX entries provided above.
Contributions
Thanks to @goodmike31 for adding this dataset.