Model parameter count
Hello and good work!
I was wondering why you chose to stick with Meta's naming error of labeling the 33B model as "30B".
Aside from that, can't wait to try out the model. Thanks for your hard work.
This model is already widely known across all seven seas as the "30B" model.
Changing the name now, for no reason other than pure pedantry, would just add an unnecessary disturbance in the Force.
LLaMA is LLaMA, warts and all.
It also has a 2048-token context length, another bug hard-coded into the architecture.
Someday there will be an open model that is better than LLaMA and doesn't share its flaws.
Until then, we live with them.
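The 2048-token limit mentioned above is easy to confirm from the checkpoint config. A minimal sketch, assuming the Hugging Face transformers port of LLaMA; the repo id is just an example, so substitute whichever conversion you actually use:

```python
# Inspect the hard-coded context window of a LLaMA checkpoint.
# "huggyllama/llama-30b" is only an example id, not the model from
# this thread; any transformers-format LLaMA conversion will do.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("huggyllama/llama-30b")
print(config.max_position_embeddings)  # 2048 for the original LLaMA
```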
Thanks! This is the first model I've found that is really worth trying to run locally. Thanks for making it available.
Sorry, friends, but does anyone know how to install this chat model? I'm new to this.
It depends on your system; you will probably need either the GGML build or the 4-bit quantized model.
You should start by getting llama.cpp or oobabooga working on your computer.
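If you go the llama.cpp route, here is a minimal sketch using the llama-cpp-python bindings; the GGML filename below is hypothetical, so point it at whichever quantized file you downloaded:

```python
# Load a 4-bit GGML quantization of the model and run one prompt.
# Install the bindings first: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./wizardlm-30b.ggmlv3.q4_0.bin",  # hypothetical filename
    n_ctx=2048,  # the model's full context window
)
out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```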
Well, I guess it makes sense. Everyone seems to know it as 30B because of a typo XD
Closing this since I got my answer.
I have an RTX 3090 and oobabooga installed, but what parameters do I put in: wbits: 4, groupsize: 128, model_type: llama? Is what I'm planning right, or should I add something else?
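For context: wbits 4 and groupsize 128 describe how the GPTQ weights were quantized, and model_type tells the loader the architecture is LLaMA. A minimal sketch of loading the same kind of checkpoint outside the web UI, assuming the AutoGPTQ library; the repo id is a placeholder, not this thread's model:

```python
# Load a 4-bit, groupsize-128 GPTQ quantization directly with AutoGPTQ.
# The quantization settings are read from the checkpoint's quantize_config.
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "someuser/wizardlm-30b-gptq",  # placeholder repo id
    device="cuda:0",               # a 24 GB RTX 3090 fits a 4-bit 30B model
    use_safetensors=True,
)
```

From there, the returned model works with the usual transformers tokenizer and generate flow.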
parameters??
Parameters!
I was running this in oobabooga but it's not working. @ehartford, any workaround?