This page explains how scores are normalized on the Open LLM Leaderboard for the six presented benchmarks. All tasks fall into three categories: tasks without subtasks, tasks with subtasks, and generative evaluations.
Note: Click the button above to explore the scores normalization process in an interactive notebook (make a copy to edit).
Normalization is the process of adjusting values measured on different scales to a common scale, making it possible to compare scores across different tasks. For the Open LLM Leaderboard, we normalize scores so that the random baseline of each task maps to 0 and the highest possible score maps to 100.
The basic normalization process involves two steps:
1. Scores below the random baseline (the lower bound) are set to 0.
2. Remaining scores are rescaled linearly between the lower bound and the highest possible score, then expressed on a 0-100 scale.
We use the following normalization function:
def normalize_within_range(value, lower_bound, higher_bound):
    # Linearly rescale `value` so that `lower_bound` maps to 0 and `higher_bound` maps to 1
    return (value - lower_bound) / (higher_bound - lower_bound)
For tasks without subtasks (e.g., GPQA, MMLU-PRO), the normalization process is straightforward:
GPQA has 4 choices (num_choices = 4), so the lower bound is 0.25 (1/num_choices = 1/4 = 0.25).
raw_score = 0.6      # Example raw score
lower_bound = 0.25   # Random baseline for a 4-choice task (1/4)
higher_bound = 1.0   # Maximum possible score
if raw_score < lower_bound:
normalized_score = 0
else:
normalized_score = normalize_within_range(raw_score, lower_bound, higher_bound) * 100
print(f"Normalized GPQA score: {normalized_score:.2f}")
# Output: Normalized GPQA score: 46.67
For tasks with subtasks (e.g., MUSR, BBH), we follow these steps:
1. Determine the lower bound (random baseline) for each subtask.
2. Normalize each subtask score between its lower bound and 1.0.
3. Average the normalized subtask scores to get the overall task score.
MUSR has three subtasks with different numbers of choices:
subtasks = [
    {"name": "murder_mysteries", "raw_score": 0.7, "lower_bound": 0.5},   # 2 choices
    {"name": "object_placement", "raw_score": 0.4, "lower_bound": 0.2},   # 5 choices
    {"name": "team_allocation", "raw_score": 0.6, "lower_bound": 1 / 3},  # 3 choices
]
normalized_scores = []
for subtask in subtasks:
if subtask["raw_score"] < subtask["lower_bound"]:
normalized_score = 0
else:
normalized_score = normalize_within_range(
subtask["raw_score"],
subtask["lower_bound"],
1.0
) * 100
normalized_scores.append(normalized_score)
print(f"{subtask['name']} normalized score: {normalized_score:.2f}")
overall_normalized_score = sum(normalized_scores) / len(normalized_scores)
print(f"Overall normalized MUSR score: {overall_normalized_score:.2f}")
# Output:
# murder_mysteries normalized score: 40.00
# object_placement normalized score: 25.00
# team_allocation normalized score: 40.00
# Overall normalized MUSR score: 35.00
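The same logic can be wrapped in a small helper so that any task with subtasks is handled uniformly. The function below is a minimal sketch of our own (the name normalize_task_with_subtasks is not from the leaderboard codebase); it reuses normalize_within_range and the subtasks list defined above.

def normalize_task_with_subtasks(subtasks):
    # Normalize each subtask against its own random baseline, then average the results
    normalized_scores = []
    for subtask in subtasks:
        if subtask["raw_score"] < subtask["lower_bound"]:
            normalized_scores.append(0.0)
        else:
            normalized_scores.append(
                normalize_within_range(subtask["raw_score"], subtask["lower_bound"], 1.0) * 100
            )
    return sum(normalized_scores) / len(normalized_scores)

print(f"Overall normalized MUSR score: {normalize_task_with_subtasks(subtasks):.2f}")
# Output: Overall normalized MUSR score: 35.00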
Generative evaluations like MATH and IFEval require a different approach:
- MATH: we use exact match as the scoring metric.
- IFEval: for instance-level evaluation (ifeval_inst), we use strict accuracy; for prompt-level evaluation (ifeval_prompt), we also use strict accuracy.

This approach ensures that even for generative tasks, we can provide normalized scores that are comparable across different evaluations.
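Because random guessing essentially never produces a correct answer on these generative tasks, the lower bound is effectively 0, so normalization reduces to expressing the raw score as a percentage. The snippet below is a minimal sketch with illustrative raw scores (the dictionary keys are our own labels, not leaderboard field names); it reuses normalize_within_range from above.

# Hypothetical raw scores for generative evaluations (values are illustrative only)
generative_scores = {
    "math_exact_match": 0.25,
    "ifeval_inst_strict_acc": 0.55,
    "ifeval_prompt_strict_acc": 0.48,
}

for name, raw_score in generative_scores.items():
    # With a lower bound of 0, the normalized score is simply the raw score times 100
    normalized_score = normalize_within_range(raw_score, 0.0, 1.0) * 100
    print(f"{name} normalized score: {normalized_score:.2f}")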
For more detailed information and examples, please refer to our blog post on scores normalization.
If you have any questions or need clarification, please start a new discussion on the Leaderboard page.