# LibriSpeech-Finetuning for VALL-E
Included is a dataset, sourced from LibriSpeech-Finetuning, that I've prepared for training with my fork of a VALL-E implementation.
> What makes this different?
I've trimmed the utterances down to make them easier to train against, since overly long pieces of audio drastically increase VRAM use (illustrative sketches of each step follow the list):
- I re-transcribed the audio with m-bain/WhisperX's large-v2 model, using its VAD filter to get near-perfect timestamps.
- I then bias each segment's start by -0.05 seconds and its end by +0.05 seconds.
- Very short segments are merged into the preceding one to avoid fragmenting the audio too much.
- The source audio is then sliced according to each segment, and each segment's transcription is phonemized with bootphon/phonemizer (espeak backend).
- Finally, the sliced audio is quantized with EnCodec for VALL-E's use.
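
For illustration, here's a minimal sketch of the transcription and post-processing steps. The WhisperX calls are its actual API; the merge threshold (`MIN_DURATION`) and file paths are hypothetical:

```python
import whisperx

DEVICE = "cuda"
BIAS = 0.05         # widen each segment by 0.05 s on both sides, per the list above
MIN_DURATION = 1.0  # hypothetical cutoff below which a segment is merged

# WhisperX runs VAD before transcription, which is what yields the tight
# per-segment timestamps.
model = whisperx.load_model("large-v2", DEVICE)
audio = whisperx.load_audio("speaker/utterance.flac")
result = model.transcribe(audio)

segments = []
for seg in result["segments"]:
    start = max(0.0, seg["start"] - BIAS)  # bias the start by -0.05 s
    end = seg["end"] + BIAS                # bias the end by +0.05 s
    if segments and (end - start) < MIN_DURATION:
        # Merge a very short segment into the preceding one.
        segments[-1]["end"] = end
        segments[-1]["text"] += " " + seg["text"].strip()
    else:
        segments.append({"start": start, "end": end, "text": seg["text"].strip()})
```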
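
Slicing and phonemization might then look like this; the output layout (`slices/…`) and the phonemizer arguments beyond `backend="espeak"` are assumptions:

```python
import torchaudio
from phonemizer import phonemize

wav, sr = torchaudio.load("speaker/utterance.flac")

for i, seg in enumerate(segments):
    # Slice the source audio at the biased segment boundaries.
    sliced = wav[:, int(seg["start"] * sr):int(seg["end"] * sr)]
    torchaudio.save(f"slices/{i:04d}.wav", sliced, sr)

    # Phonemize the segment's transcription with the espeak backend.
    phonemes = phonemize(seg["text"], language="en-us", backend="espeak", strip=True)
    with open(f"slices/{i:04d}.phn.txt", "w") as f:
        f.write(phonemes)
```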
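
And finally the EnCodec quantization. This sketch uses the 24 kHz model and a 6 kbps target bandwidth, both of which are assumptions about what the VALL-E fork expects, as is the `.qnt.pt` filename:

```python
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)  # assumed bandwidth; controls how many codebooks are kept

wav, sr = torchaudio.load("slices/0000.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels)

with torch.no_grad():
    encoded_frames = model.encode(wav.unsqueeze(0))
# Concatenate the per-frame codes into a single [1, n_q, T] tensor.
codes = torch.cat([frame[0] for frame in encoded_frames], dim=-1)
torch.save(codes, "slices/0000.qnt.pt")
```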
This helps alleviate the problem of the default `max_phoneme` length causing a large chunk of the dataset to be ignored, and it distributes segment lengths relatively evenly.