How to submit models on the Open LLM Leaderboard

Models added here will be automatically evaluated on the 🤗 cluster. Don’t forget to read the FAQ and the About documentation pages for more information!

First steps before submitting a model

1. Ensure Model and Tokenizer Loading:

Make sure you can load your model and tokenizer using AutoClasses:

from transformers import AutoConfig, AutoModel, AutoTokenizer

revision = "main"  # branch or commit hash of the checkpoint you plan to submit
config = AutoConfig.from_pretrained("your model name", revision=revision)
model = AutoModel.from_pretrained("your model name", revision=revision)
tokenizer = AutoTokenizer.from_pretrained("your model name", revision=revision)

If this step fails, follow the error messages to debug your model before submitting it. It’s likely your model has been improperly uploaded.

Notes:

2. Fill Out Your Model Card:

When we add extra information about a model to the leaderboard, it is taken automatically from its model card.
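
If you prefer to manage the card programmatically, below is a minimal sketch using the ModelCard utilities from huggingface_hub; the repository id and all metadata values shown are placeholders to replace with your own. You can also simply edit the model card from the model page on the Hub.

from huggingface_hub import ModelCard, ModelCardData

# Placeholder metadata -- replace with values that describe your model.
card_data = ModelCardData(
    language="en",
    license="apache-2.0",
    library_name="transformers",
    tags=["text-generation"],
)

# A minimal card: a YAML metadata block followed by a free-form description.
card = ModelCard(
    f"---\n{card_data.to_yaml()}\n---\n\n"
    "# Your Model Name\n\n"
    "Describe the model, its training data, and its intended use here.\n"
)
card.push_to_hub("your-username/your-model")  # placeholder repo id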

3. Select the Correct Precision:

Not all models are converted properly from float16 to bfloat16, and selecting the wrong precision can cause evaluation errors: loading a bf16 model in fp16 can sometimes generate NaNs, depending on the weight range.
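
If you are unsure which precision to pick, one quick sanity check is to compare the model's largest weight magnitude against the float16 maximum (about 65504): bfloat16 covers a much wider range, so any weight beyond that value will overflow to inf once the checkpoint is loaded in fp16. A minimal sketch, with the model name as a placeholder:

import torch
from transformers import AutoModel

# Load the checkpoint in its native bfloat16 precision.
model = AutoModel.from_pretrained("your model name", torch_dtype=torch.bfloat16)

# float16 tops out around 65504; bf16 weights above this overflow to inf
# (and can then propagate NaNs) if the model is evaluated in fp16.
fp16_max = torch.finfo(torch.float16).max
largest_weight = max(p.detach().abs().max().item() for p in model.parameters())

print(f"largest |weight| = {largest_weight:.1f}, fp16 max = {fp16_max:.1f}")
if largest_weight > fp16_max:
    print("Some weights do not fit in float16 -- submit this model as bfloat16.")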

Model Size and Precision Limits:

Our submission system implements a two-tier check to determine if a model can be automatically evaluated:

  1. Absolute Size Limit for High-Precision Models:

  2. Precision-Adjusted Size Limit:

Models exceeding these limits cannot be automatically evaluated. For larger models, consider submitting at a lower precision, or open a discussion on the Open LLM Leaderboard; if there is enough interest from the community, we will run a manual evaluation.
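
To see where your model falls relative to these limits before submitting, you can read its parameter count from the Hub. A minimal sketch, assuming a recent huggingface_hub release that provides get_safetensors_metadata (the model name is a placeholder); compare the result against the limit for the precision you plan to select in the form:

from huggingface_hub import get_safetensors_metadata

# Reads the safetensors headers from the Hub without downloading the weights.
metadata = get_safetensors_metadata("your model name")
num_params = sum(metadata.parameter_count.values())
print(f"{num_params / 1e9:.1f}B parameters")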

4. Chat Template Toggle:

When submitting a model, you can choose whether to evaluate it using a chat template. The chat template toggle activates automatically for chat models.
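
To check that your tokenizer actually ships a chat template (and to see the exact prompt format an evaluation with the toggle enabled would use), you can render a short conversation with it. A minimal sketch, with the model name as a placeholder:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("your model name")

messages = [{"role": "user", "content": "What is the capital of France?"}]

# Renders the conversation with the tokenizer's built-in chat template;
# recent transformers versions raise an error here if no template is defined.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)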

Model Types
