How do you finetune mms-1b on Hugging Face?
I am trying to reuse the pipeline I used to fine-tune facebook/wav2vec2-xls-r-300m to train this model (facebook/mms-1b), since both share the same model class, Wav2Vec2ForCTC. However, I run into this error in the backward-pass step when I try to fine-tune:
How do I fine-tune mms-1b? Thank you.
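In case it helps: a minimal sketch of the adapter-based setup, assuming transformers >= 4.30 (the `target_lang`, `init_adapter_layers`, and `freeze_base_model` APIs were added for MMS around that release). The checkpoint name and `target_lang` value here are illustrative; the imports are deferred into the function so the sketch can be read and imported without downloading the checkpoint.

```python
def build_mms_for_finetuning(target_lang="fra"):
    """Sketch: load an MMS CTC checkpoint and train only the adapter weights.

    Assumptions (not from this thread): transformers >= 4.30, and a
    language code that already has a vocabulary in the checkpoint.
    """
    # Deferred import so merely defining this function needs no download.
    from transformers import Wav2Vec2ForCTC

    model = Wav2Vec2ForCTC.from_pretrained(
        "facebook/mms-1b-all",
        target_lang=target_lang,
        # Needed when your vocabulary size differs from the stock LM head.
        ignore_mismatched_sizes=True,
    )
    # Re-initialize the language-adapter layers, freeze everything else,
    # then mark only the adapter parameters as trainable.
    model.init_adapter_layers()
    model.freeze_base_model()
    for param in model._get_adapters().values():
        param.requires_grad = True
    return model
```

With this setup only a few million adapter parameters receive gradients, which also sidesteps most backward-pass memory issues that full fine-tuning of the 1B model can hit.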
Thank you for adding these resources!
@sanchit-gandhi Why is the mms-1b-all model in the blog fine-tuned on Common Voice? I thought mms-1b-all had already been fine-tuned on Common Voice (along with MMS-lab, FLEURS, VP, and MLS). Does this have to do with the language-specific adapter weights?
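My understanding (hedged, not confirmed in this thread): mms-1b-all ships one small adapter plus one vocabulary per language, and the blog re-trains just such an adapter to show the workflow. At inference you can swap adapters without touching the 1B-parameter base; a sketch, where the language code is a placeholder:

```python
def switch_language(model, processor, lang="fra"):
    """Sketch: swap the per-language adapter of an MMS model at inference.

    Assumes `model` is a Wav2Vec2ForCTC MMS checkpoint and `processor` a
    Wav2Vec2Processor, both already loaded; `lang` is an example code.
    """
    # Point the tokenizer at the language's vocabulary...
    processor.tokenizer.set_target_lang(lang)
    # ...and load only that language's adapter weights into the model.
    model.load_adapter(lang)
    return model, processor
```

Because only the adapter and LM head change, this is much cheaper than loading a separate checkpoint per language.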
How can I add a different language to MMS?
Here only the tokenizer takes the target-lang argument. So, if a language is not covered by MMS, can we build the vocab JSON file for the new language and start training the adapters? Will that add the new language to the model?
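For the vocab file itself, MMS-style tokenizers expect a nested JSON keyed by language code. A sketch of building one from transcripts (the language code and sample sentences are placeholders I made up):

```python
import json


def build_vocab_json(texts, lang_code, path="vocab.json"):
    """Sketch: build an MMS-style nested vocab JSON for a new language.

    `texts` is your transcript corpus and `lang_code` whatever ISO code
    you pick; the tokenizer expects {"<lang>": {char: id, ...}} plus the
    CTC special tokens.
    """
    chars = sorted(set("".join(texts)))
    vocab = {ch: i for i, ch in enumerate(chars)}
    # Wav2Vec2's CTC convention: "|" replaces " " as the word delimiter.
    if " " in vocab:
        vocab["|"] = vocab.pop(" ")
    vocab["[UNK]"] = len(vocab)
    vocab["[PAD]"] = len(vocab)
    nested = {lang_code: vocab}
    with open(path, "w", encoding="utf-8") as f:
        json.dump(nested, f, ensure_ascii=False, indent=2)
    return nested


# Usage with placeholder data:
vocab = build_vocab_json(["ein kleines beispiel", "noch ein satz"], "deu")
```

You would then pass this file and the new code to Wav2Vec2CTCTokenizer via `vocab_file` and `target_lang`, and train adapters as in the blog; whether that counts as "adding the language to the model" is exactly the question here, since the base model's representations were still trained without that language.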