Inquiry on Fine-Tuning Details and Model Similarities for MelodyFlow
Hi there,
Thank you for making MelodyFlow available to the community! I am currently exploring its capabilities and am very interested in learning more about the model.
I have a couple of questions:
1. Are there any updates or detailed guidelines on how to fine-tune MelodyFlow? Such guidance would be very helpful for adapting the model to specific datasets.
2. Could you confirm whether MelodyFlow's fine-tuning process shares similarities with other models in the AudioCraft family, such as MusicGen? If so, are there any key architectural or functional overlaps that would be helpful to know? (A short sketch of the workflow I have in mind follows below.)
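
For context, question 2 is motivated by the familiar MusicGen workflow in audiocraft, sketched below. The checkpoint name and generation parameters are just illustrative, and whether MelodyFlow exposes an analogous `get_pretrained` / fine-tuning entry point is exactly what I am unsure about, so that part is an assumption on my side rather than confirmed API.

```python
# Minimal sketch of the loading/generation pattern from the MusicGen docs in
# audiocraft. Whether MelodyFlow offers an analogous entry point (e.g. a
# MelodyFlow.get_pretrained()) is what I'm asking about; it is an assumption here.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # generate 8-second clips

wav = model.generate(["lo-fi piano with soft drums"])  # [batch, channels, samples]

for idx, one_wav in enumerate(wav):
    # Write each sample with loudness normalization, as in the audiocraft docs.
    audio_write(f"sample_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```

If MelodyFlow follows a similar pattern for loading and, eventually, fine-tuning, it would be great to know which parts carry over and which differ.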
Any additional information or resources on these topics would be greatly appreciated. Thank you for your time and effort in advancing AI music generation!
Looking forward to your response!