Models added here will be automatically evaluated on the 🤗 cluster. Don’t forget to read the FAQ and the About documentation pages for more information!
Make sure you can load your model and tokenizer using AutoClasses:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

# revision is the branch name or commit hash of the checkpoint you want evaluated
config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModel.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)
```
If this step fails, follow the error messages to debug your model before submitting it. It’s likely your model has been improperly uploaded.
Notes:
- If your model needs `use_remote_code=True`, we do not support this option yet, but we are working on adding it. Stay posted!
- When we add extra information about models to the leaderboard, it will be automatically taken from the model card.
Not all models are converted properly from `float16` to `bfloat16`, and selecting the wrong precision can sometimes cause evaluation errors (loading a `bf16` model in `fp16` can generate NaNs, depending on the weight range).
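Before submitting, a quick sanity check is to load the checkpoint in the precision you plan to select and scan the weights for NaNs or Infs. This is a minimal sketch assuming PyTorch and `transformers`, reusing the model-name and `revision` placeholders from the snippet above:

```python
import torch
from transformers import AutoModel

# Load in the precision you intend to submit with (here fp16) and make sure
# no weights have overflowed to NaN/Inf during conversion.
model = AutoModel.from_pretrained(
    "your model name", revision=revision, torch_dtype=torch.float16
)
for name, param in model.named_parameters():
    if torch.isnan(param).any() or torch.isinf(param).any():
        print(f"NaN/Inf in {name}: consider submitting in bfloat16 instead")
```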
Our submission system implements a two-tier check to determine if a model can be automatically evaluated:
- Absolute Size Limit for High-Precision Models: 140B parameters for `float16` and `bfloat16` precisions.
- Precision-Adjusted Size Limit: models in more efficient precisions get proportionally higher limits:
  - `8bit`: 2x (max 280B)
  - `4bit`: 4x (max 560B)
  - `GPTQ`: varies based on quantization bits

Models exceeding these limits cannot be automatically evaluated. Consider using a lower precision for larger models, or open a discussion on the Open LLM Leaderboard; if there's enough interest from the community, we'll do a manual evaluation.
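To make the size check concrete, here is an illustrative sketch in Python. The 140B base limit is inferred from the 2x/4x multipliers above, and the function is a hypothetical illustration, not the leaderboard's actual implementation:

```python
# Illustrative sketch of the two-tier size check.
# BASE_LIMIT_B is inferred from the multipliers above (2x -> 280B, 4x -> 560B).
BASE_LIMIT_B = 140  # billions of parameters, for float16/bfloat16

PRECISION_FACTOR = {
    "float16": 1,
    "bfloat16": 1,
    "8bit": 2,   # max 280B
    "4bit": 4,   # max 560B
    # GPTQ is omitted: its limit varies with the quantization bits
}

def can_auto_evaluate(num_params_b: float, precision: str) -> bool:
    """Return True if a model of this size/precision fits the automatic limits."""
    factor = PRECISION_FACTOR.get(precision)
    if factor is None:
        raise ValueError(f"Limit for {precision!r} is not a fixed multiple")
    return num_params_b <= BASE_LIMIT_B * factor

print(can_auto_evaluate(300, "4bit"))   # True: under the 560B cap
print(can_auto_evaluate(300, "8bit"))   # False: over the 280B cap
```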
When submitting a model, you can choose whether to evaluate it using a chat template. The chat template toggle activates automatically for chat models.
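If you are unsure whether your model ships a chat template, you can inspect it with the standard `transformers` API. A minimal sketch, reusing the placeholders from the snippet above:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)

# Models with a chat template have it set on the tokenizer; evaluation with
# the toggle on formats prompts through this template.
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": "Hello!"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    print(prompt)
else:
    print("No chat template found; the toggle will not activate automatically.")
```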