wizardcoder-6.7b
I'd love to see a WizardCoder based on deepseek-coder-6.7b-instruct.
Or better yet, Magicoder 6.7B, since that was trained on top of DeepSeek 6.7B.
True, Magicoder 6.7B is a very good model, but its GGUF version is not working. It would be excellent if WizardCoder fine-tuned Magicoder 6.7B further.
I have a 7B GGUF Magicoder and it's working, but not as well as DeepSeek seems to be doing, at least for Python, my only use case... (specifically: magicoder-s-cl-7b.Q6_K.gguf)
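For anyone who wants to sanity-check whether their GGUF build loads and generates at all, here is a minimal sketch using llama-cpp-python; the local model path, context size, and sampling settings are my own assumptions, not settings from this thread:

```python
# Minimal smoke test for a GGUF build, assuming llama-cpp-python is installed
# (pip install llama-cpp-python). The model path is a local-file assumption.
from llama_cpp import Llama

llm = Llama(
    model_path="./magicoder-s-cl-7b.Q6_K.gguf",  # the quant mentioned above
    n_ctx=4096,        # context window; adjust to your hardware
    verbose=False,
)

# A small Python-generation prompt, since Python was the use case discussed.
out = llm(
    "Write a Python function that returns the nth Fibonacci number.\n",
    max_tokens=256,
    temperature=0.2,   # low temperature for more deterministic code output
)
print(out["choices"][0]["text"])
```

If the file is corrupt or the architecture is unsupported, the `Llama(...)` call itself fails, which makes this a quick way to tell a broken conversion from a merely weak model.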
Small request: Julia 1.10 was recently released and brings new features, but most Llama-family models were trained on data from Julia 1.7 and 1.8.
Please include this latest version of Julia in your datasets: https://raw.githubusercontent.com/JuliaLang/docs.julialang.org/assets/julia-1.10.0.pdf, https://docs.julialang.org/en/v1/, and https://github.com/JuliaLang/julia.
For this reason, neither Llama nor DeepSeek models perform well at Julia code generation.
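As a rough illustration of what ingesting those docs could look like, here is a hedged Python sketch that downloads the linked Julia 1.10 manual PDF and extracts plain text with pypdf; the library choice and output filenames are my assumptions, not part of the request:

```python
# Sketch: fetch the Julia 1.10 manual PDF (linked above) and extract plain
# text that could feed a fine-tuning dataset. Assumes `requests` and `pypdf`
# are installed; output filenames are arbitrary choices.
import requests
from pypdf import PdfReader

URL = "https://raw.githubusercontent.com/JuliaLang/docs.julialang.org/assets/julia-1.10.0.pdf"

# Download the PDF to a local file.
resp = requests.get(URL, timeout=60)
resp.raise_for_status()
with open("julia-1.10.0.pdf", "wb") as f:
    f.write(resp.content)

# Extract text page by page; real dataset prep would also need cleaning
# and deduplication, which this sketch skips.
reader = PdfReader("julia-1.10.0.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)
with open("julia-1.10-docs.txt", "w", encoding="utf-8") as f:
    f.write(text)
print(f"Extracted {len(reader.pages)} pages, {len(text)} characters")
```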
Thank you.
@WizardLM
@rombodawg
@Nurb432
Yeah, I second the suggestion for a WizardCoder 6.7B based on Deepseek-coder 6.7B-instruct, to see if you can squeeze any more out of the 6.7B. The strange thing is that UIUC's Magicoder, which was also based on Deepseek-coder 6.7B, won't run in llama.cpp.