deepseek-coder-7x8bMOE-instruct
I think it would be really amazing if you guys took the Mixtral architecture and trained it on your coding data. It would probably code as well as or better than the 33b-instruct coder you already have, and it would also be good at the other tasks your coding models struggle with but your general models handle well. Plus, it would run roughly as fast as a 13B-14B dense model despite being over 40B parameters in total, because a Mixtral-style MoE only routes each token through a couple of experts.
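Here's a rough back-of-envelope sketch of what I mean (my own approximate Mixtral-8x7B-style shapes, not official figures): only the top-2 of the 8 expert FFNs run for each token, so the active parameter count per token is a fraction of the total.

```python
# Back-of-envelope parameter count for a Mixtral-8x7B-style sparse MoE.
# Shapes below are my approximations, not official numbers.
D_MODEL, D_FF, N_LAYERS = 4096, 14336, 32
N_EXPERTS, TOP_K = 8, 2                        # experts per layer / experts used per token
VOCAB = 32000
N_HEADS, N_KV_HEADS, HEAD_DIM = 32, 8, 128     # grouped-query attention

# per-layer attention projections (Q, K, V, O)
attn = (D_MODEL * N_HEADS * HEAD_DIM           # Q projection
        + 2 * D_MODEL * N_KV_HEADS * HEAD_DIM  # K and V projections (fewer KV heads)
        + N_HEADS * HEAD_DIM * D_MODEL)        # output projection

expert = 3 * D_MODEL * D_FF                    # SwiGLU FFN per expert: gate, up, down
router = D_MODEL * N_EXPERTS                   # expert-selection gate
embed = 2 * VOCAB * D_MODEL                    # input embedding + LM head

total  = N_LAYERS * (attn + router + N_EXPERTS * expert) + embed
active = N_LAYERS * (attn + router + TOP_K * expert) + embed

print(f"total params : {total / 1e9:.1f}B")    # ~46.7B with these shapes
print(f"active/token : {active / 1e9:.1f}B")   # ~12.9B with these shapes
```

So the model stores 40B+ weights, but each token only touches roughly 13B of them, which is why per-token speed looks like a much smaller dense model.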
I don't know whether you're able to, given politics and company policies and all that, but you should at least seriously consider it. You could build the best Mixtral-style model out there with your datasets, so think about the benefits for your investors and the revenue it could generate.
Have a good day!
- Rombodawg
Actually, our next version of the Coder model is based on an MoE architecture. @rombodawg
I'm really looking forward to that model. I tried to make one myself, but it needed further training that I wasn't able to do. I also hope you make it a decently sized, very capable coding model: a 16B MoE coder that only performs as well as a 7B coder wouldn't be very useful, if I'm being honest. But that's just my opinion.