---
tags:
- audio
configs:
- config_name: test
  data_files: "data/test.parquet"
- config_name: small_test
  data_files: "data/small_test.parquet"
---
|
### About the dataset

This is a dataset of multi-speaker speech with noise.

Each sample is at most 30 seconds long.
|
|
|
### Loading script

```python
>>> from datasets import load_dataset
>>> data_files = {"test": "data/<your_subset>.parquet"}
>>> data = load_dataset("Zarakun/speakers_ua_test", data_files=data_files)
>>> data
DatasetDict({
    test: Dataset({
        features: ['num_speakers', 'utter', 'audio'],
        num_rows: <some_number>
    })
})
```
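
Because `test` and `small_test` are also declared as configs in the YAML header above, each subset can presumably be loaded by its config name instead of passing `data_files` by hand. A minimal sketch, assuming the configs resolve exactly as declared:

```python
>>> from datasets import load_dataset
>>> # load the small subset by the config name declared in the card metadata
>>> small = load_dataset("Zarakun/speakers_ua_test", "small_test")
```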
|
|
|
### Dataset structure

Every example has the following fields:

**num_speakers** - the number of speakers

**utter** - a list of utterance entries

**audio** - the waveform of the audio
|
|
|
Each entry in **utter** is a dict with the following structure:

**start** - the starting position of the speaker's segment in **audio**

**end** - the ending position of the speaker's segment in **audio**

**file_id** - the identifier of the speaker

**sentence** - the transcription of the segment
|
**rate** - the sampling rate; it should be identical across all examples in the dataset
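
To show how these fields fit together, the sketch below cuts each speaker's segment out of the mixed waveform. It treats **utter** as the list of dicts described above and assumes that **start** and **end** are sample indices into **audio** and that the audio column decodes to a dict with an `array` key; both are assumptions, so adjust the indexing if positions are stored in seconds or the waveform is stored as a plain list.

```python
from datasets import load_dataset

data_files = {"test": "data/test.parquet"}
data = load_dataset("Zarakun/speakers_ua_test", data_files=data_files)

example = data["test"][0]
# Assumption: the audio column decodes like a HF Audio feature, i.e. a dict
# holding the waveform under "array"; if it is a plain list, use it directly.
waveform = example["audio"]["array"]

print("number of speakers:", example["num_speakers"])
for utt in example["utter"]:
    # Assumption: start/end are sample indices at the utterance's rate.
    segment = waveform[utt["start"]:utt["end"]]
    duration = len(segment) / utt["rate"]
    print(utt["file_id"], f"{duration:.2f}s", utt["sentence"])
```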