---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- vi
pretty_name: VietMed labeled set
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: transcription
    dtype: string
  - name: Speaker ID
    dtype: string
  splits:
  - name: train
    num_bytes: 58513440.578
    num_examples: 2858
  - name: validation
    num_bytes: 56714850.712
    num_examples: 2912
  - name: test
    num_bytes: 70051704.606
    num_examples: 3437
  download_size: 183555285
  dataset_size: 185279995.896
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---
Unofficial mirror of VietMed (Vietnamese speech data in the medical domain), labeled set.

Official announcement: https://arxiv.org/abs/2404.05659

Official download: https://huggingface.co/datasets/leduckhai/VietMed

This repo contains the labeled set: ~9.2k samples across the train/validation/test splits; a quick way to verify the counts is sketched right below.

I also gathered the speaker metadata: see info.csv.

My extraction code: https://github.com/phineas-pta/fine-tune-whisper-vi/blob/main/misc/vietmed-labeled.py

To do: check for misspellings; restore foreign words that were phoneticized into Vietnamese.
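A minimal sketch to double-check those sample counts against the split metadata declared in this card, using datasets.load_dataset_builder (it reads only the metadata, no audio is downloaded):

```python
# pip install -q datasets
from datasets import load_dataset_builder

# reads the split metadata declared in the dataset card; no audio download
builder = load_dataset_builder("doof-ferb/VietMed_labeled")
for name, split in builder.info.splits.items():
    print(name, split.num_examples)  # train 2858, validation 2912, test 3437 -> ~9.2k total
```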
Usage with HuggingFace:

```python
# pip install -q "datasets[audio]"
from datasets import load_dataset
from huggingface_hub import hf_hub_download
from pandas import read_csv

repo_id = "doof-ferb/VietMed_labeled"
dataset = load_dataset(repo_id, split="train", streaming=True)

# fetch info.csv and index the speaker metadata by speaker ID
info_file = hf_hub_download(repo_id=repo_id, filename="info.csv", repo_type="dataset")
info_dict = read_csv(info_file, index_col=0).to_dict("index")

META_FIELDS = ("Recording Condition", "ICD-10 Code", "Role", "Gender", "Accent")

def merge_info(batch):
    # attach per-speaker metadata to each sample; unknown speakers get empty strings
    meta = info_dict.get(batch["Speaker ID"], {})
    for field in META_FIELDS:
        batch[field] = meta.get(field, "")
    return batch

dataset = dataset.map(merge_info)
```
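For a quick sanity check, you can pull the first sample off the stream and look at the merged fields (a sketch; the exact values depend on info.csv):

```python
# peek at one sample to confirm the metadata columns were added
sample = next(iter(dataset))
print(sample["transcription"], sample["Speaker ID"], sample["Role"], sample["Accent"])
```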