Any benchmarks on this?

by eabdullin


Owner

Not yet, I am currently evaluating my model. The evaluation results will be published soon.

Not so good: the output struggles to keep the context. I got better results with the Llama 3 8B 262k Instruct GGUF and the CodeQwen 7B GGUF, both Q8 quants, run with llama.cpp on a laptop with 16 GB RAM, no dedicated GPU, CPU-only inference.
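For reference, a minimal sketch of the kind of CPU-only GGUF run described above, using the llama-cpp-python bindings. The model path, thread count, and prompt are placeholders, not values from this thread:

```python
# CPU-only inference of a Q8 GGUF model via llama.cpp's Python bindings.
# Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="model-q8_0.gguf",  # placeholder: path to a locally downloaded Q8_0 GGUF
    n_ctx=4096,                    # context window; raise it for long-context models
    n_threads=8,                   # CPU threads to use
    n_gpu_layers=0,                # keep all layers on the CPU (no GPU offload)
)

out = llm(
    "Summarize the following paragraph:\n...",  # placeholder prompt
    max_tokens=256,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```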
