Benchmark and evaluation
Hi, congrats on your work!
The model card says that this is a SOTA model, but it is unclear on which benchmark this score was achieved.
Is this evaluation accessible?
Thank you! You're right, the model was SOTA when it was published. Now, as per MT-Bench evaluations, Clibrain's best model is LINCE Mistral.
Thanks for your answer! What I was actually asking is which benchmark was used to evaluate the model, as this is unclear from the model card.
Was it MT-Bench? In that case, what question set was used to evaluate the model? Was it a direct translation of the publicly available sets, or a custom one?
Also, your response implies that "SOTA" is used only in the context of models published by Clibrain, is that correct? Or is it in comparison with other Spanish LLMs?