Spaces: Running on L4
whisper fine tuning question #115
by spawar - opened
Can I fine-tune the model on two different datasets of the same language? (By "the same language" I mean: keep the pre-trained weights for one language, but after fine-tuning save both models as if they were different languages, and access them by changing some key, like the one used to select a language.)
One dataset contains significantly degraded speech; the other is data with added noise, on which Whisper already gives good results. So I am afraid that fine-tuning on the new data might hurt the original model's performance.
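One way to read the question above is: rather than overwriting one model, fine-tune two separate copies and select between checkpoints with a custom key, much like a language code selects a tokenizer. A minimal sketch of that registry idea follows; the keys and local checkpoint paths are hypothetical placeholders, and only `openai/whisper-small` is a real model id.

```python
# Hypothetical registry mapping a dataset "key" to a checkpoint location.
# The local paths below are placeholders for your own fine-tuned saves.
CHECKPOINTS = {
    "base": "openai/whisper-small",                 # pre-trained baseline (real model id)
    "degraded": "./whisper-small-degraded-speech",  # fine-tuned on degraded speech (hypothetical)
    "noisy": "./whisper-small-added-noise",         # fine-tuned on noise-added speech (hypothetical)
}

def resolve_checkpoint(key: str) -> str:
    """Return the checkpoint path for a dataset key, falling back to the baseline."""
    return CHECKPOINTS.get(key, CHECKPOINTS["base"])

# In practice you would then load the selected checkpoint, e.g. with the
# transformers library:
#   model = WhisperForConditionalGeneration.from_pretrained(resolve_checkpoint("degraded"))

if __name__ == "__main__":
    print(resolve_checkpoint("degraded"))  # checkpoint fine-tuned on degraded speech
    print(resolve_checkpoint("unknown"))   # unknown keys fall back to the baseline
```

Because each fine-tune is saved to its own directory, training on the degraded-speech dataset cannot affect the baseline or the other fine-tuned model; the original weights are only ever read, never overwritten.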