Open LLM Leaderboard v1

Evaluating and comparing LLMs is hard. Our RLHF team realized this a year ago when they wanted to reproduce and compare results from several published models. It was a nearly impossible task: scores in papers or marketing releases were given without any reproducible code, were sometimes doubtful, and in most cases relied on prompts or evaluation setups optimized to give the models the best possible chance. They therefore decided to create a place where reference models would be evaluated in the exact same setup (same questions, asked in the same order, etc.) to gather completely reproducible and comparable results; and that’s how the Open LLM Leaderboard was born!

Following a series of highly visible model releases, it became a widely used resource in the ML community and beyond, visited by more than 2 million unique people over the last 10 months.

Around 300,000 community members used and collaborated on it monthly through submissions and discussions.

In June 2024, we archived it and replaced it with a newer version; below, you’ll find all the relevant information about v1!

Tasks

📈 We evaluated models on 6 key benchmarks using the Eleuther AI Language Model Evaluation Harness, a unified framework to test generative language models on a large number of different evaluation tasks.

For all these evaluations, a higher score is a better score.

We chose these benchmarks because they test reasoning and general knowledge across a wide variety of fields, in 0-shot and few-shot settings.

Results

You can find the detailed numerical results, as well as community submission requests and their status, in the leaderboard’s dataset repositories on the Hugging Face Hub.

Reproducibility

To reproduce our results, you can run the following command, using this version of the Eleuther AI Harness:

python main.py --model=hf-causal-experimental \
    --model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>" \
    --tasks=<task_list> \
    --num_fewshot=<n_few_shot> \
    --batch_size=1 \
    --output_path=<output_path>

Note: We evaluated all models on a single node of 8 H100s, so the global batch size was 8 for each evaluation. If you don’t use parallelism, adapt your batch size to fit. You can expect results to vary slightly for different batch sizes because of padding.
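
If you are starting from scratch, a minimal setup sketch looks like the following. The clone URL is the harness’s public GitHub repository; make sure to check out the pinned revision linked above rather than the default branch, since task names and behavior differ across harness versions.

git clone https://github.com/EleutherAI/lm-evaluation-harness.git
cd lm-evaluation-harness
# check out the pinned revision referenced above, then install the harness
pip install -e .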

The tasks and few-shot parameters used for each benchmark are listed below.
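
As a reference sketch, here is how the six benchmarks map onto the --tasks and --num_fewshot flags of the command above. The few-shot counts reflect the v1 setup; the exact task identifiers (for example, the hendrycksTest-* subtask names for MMLU and truthfulqa_mc for TruthfulQA) depend on the pinned harness version, so verify them against that version before running.

# Benchmark     --tasks                           --num_fewshot
# ARC           arc_challenge                     25
# HellaSwag     hellaswag                         10
# TruthfulQA    truthfulqa_mc                     0
# MMLU          hendrycksTest-* (all subtasks)    5
# Winogrande    winogrande                        5
# GSM8K         gsm8k                             5

# Example: a 25-shot ARC run (model, revision, and output path are placeholders)
python main.py --model=hf-causal-experimental \
    --model_args="pretrained=<your_model>,use_accelerate=True,revision=<your_model_revision>" \
    --tasks=arc_challenge \
    --num_fewshot=25 \
    --batch_size=1 \
    --output_path=<output_path>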

Blogs

During the life of the leaderboard, we wrote two blog posts, which you can find here and here.
