from dataclasses import dataclass
from enum import Enum


@dataclass
class Task:
    benchmark: str
    metric: str
    col_name: str


# Init: to update with your specific keys
class Tasks(Enum):
    # task_key in the json file, metric_key in the json file, name to display in the leaderboard
    task0 = Task("logiqa", "delta_abs", "LogiQA Δ")
    task1 = Task("logiqa2", "delta_abs", "LogiQA2 Δ")
    task2 = Task("lsat-ar", "delta_abs", "LSAT-ar Δ")
    task3 = Task("lsat-lr", "delta_abs", "LSAT-lr Δ")
    task4 = Task("lsat-rc", "delta_abs", "LSAT-rc Δ")


# METRICS = list(set([task.value.metric for task in Tasks]))

# Your leaderboard name
TITLE = """

/\/   Open CoT Leaderboard

"""
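
# Illustrative only: downstream code (e.g. the app that renders the leaderboard)
# is expected to read the Tasks config above, analogous to the commented-out
# METRICS line. The helper below is a minimal sketch of that pattern; its name
# is hypothetical and it is not referenced elsewhere in this module.
def get_task_columns() -> list[str]:
    """Return the display column names declared in the Tasks enum,
    e.g. ["LogiQA Δ", "LogiQA2 Δ", ...]."""
    return [task.value.col_name for task in Tasks]
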
""" # What does your leaderboard evaluate? INTRODUCTION_TEXT = """ The `/\/` Open CoT Leaderboard tracks the reasoning skills of LLMs, measured as their ability to generate **effective chain-of-thought reasoning traces**. The leaderboard reports **accuracy gains** achieved by using CoT, i.e.: _accuracy gain Δ_ = _CoT accuracy_ — _baseline accuracy_. See the "About" tab for more details and motivation. """ # Which evaluations are you running? how can people reproduce what you have? LLM_BENCHMARKS_TEXT = f""" ## How it works (roughly) To assess the reasoning skill of a given `model`, we carry out the following steps for each `task` (test dataset) and different CoT `regimes`. (A CoT `regime` consists in a prompt chain and decoding parameters used to generate a reasoning trace.) 1. `model` generates CoT reasoning traces for all problems in the test dataset according to `regime`. 2. `model` answers the test dataset problems, we record the resulting _baseline accuracy_. 3. `model` answers the test dataset problems _with the reasoning traces appended_ to the prompt, we record the resulting _CoT accuracy_. 4. We compute the _accuracy gain Δ_ = _CoT accuracy_ — _baseline accuracy_ for the given `model`, `task`, and `regime`. Each `regime` yields a different _accuracy gain Δ_, and the leaderboard reports (for every `model`/`task`) the best Δ achieved by any regime. All models are evaluated against the same set of regimes. ## How is it different from other leaderboards? Performance leaderboards like the [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) or [YALL](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard) do a great job in ranking models according to task performance. Unlike these leaderboards, the `/\/` Open CoT Leaderboard assesses a model's ability to effectively reason about a `task`: ### 🤗 Open LLM Leaderboard * a. Can `model` solve `task`? * b. Metric: absolute accuracy. * c. Measures `task` performance. * d. Covers broad spectrum of `tasks`. ### `/\/` Open CoT Leaderboard * a. Can `model` do CoT to improve in `task`? * b. Metric: relative accuracy gain. * c. Measures ability to reason (about `task`). * d. Focuses on critical thinking `tasks`. ## Test dataset selection (`tasks`) The test dataset porblems in the CoT Leaderboard can be solved through clear thinking alone, no specific knowledge is required to do so. They are subsets of the [AGIEval benchmark](https://github.com/ruixiangcui/AGIEval) and re-published as [`logikon-bench`](logikon/logikon-bench). The `logiqa` dataset has been newly translated from Chinese to English. ## Reproducibility To learn more about the evaluation piepline and reproduce our results, check out the repository [cot-eval](https://github.com/logikon-ai/cot-eval). ## Acknowledgements We're grateful to community members for running evaluations and reporting results. To contribute, join us at [`cot-leaderboard`](https://huggingface.co/cot-leaderboard) organization. """ EVALUATION_QUEUE_TEXT = """ ## Some good practices before submitting a model ### 1) Make sure you can load your model and tokenizer with `vLLM`: ```python from vllm import LLM, SamplingParams prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] sampling_params = SamplingParams(temperature=0.8, top_p=0.95) llm = LLM(model="/") outputs = llm.generate(prompts, sampling_params) ``` If this step fails, follow the error messages to debug your model before submitting it. 
### 3) Make sure your model has an open license!
This is a leaderboard for Open LLMs, and we'd love for as many people as possible to know they can use your model 🤗

### 4) Fill out your model card
When we add extra information about models to the leaderboard, it is automatically taken from the model card.

## Is your model stuck in the pending queue?
We're populating the Open CoT Leaderboard step by step. The idea is to grow a diverse and informative sample of the LLM space. Plus, with limited compute, we're currently prioritizing models that are popular, promising, and relatively small.
"""

CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results"
CITATION_BUTTON_TEXT = r"""
Logikon AI Team. (2024). Open CoT Leaderboard. Retrieved from https://huggingface.co/spaces/logikon/open_cot_leaderboard
"""
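
# Illustrative only: a worked sketch of the accuracy-gain metric described in
# LLM_BENCHMARKS_TEXT above. For each model and task, every CoT regime yields
# Δ = CoT accuracy − baseline accuracy, and the leaderboard reports the best Δ
# over all regimes. The helper below is hypothetical and not used by the app.
def best_accuracy_gain(baseline_accuracy: float, cot_accuracies: dict[str, float]) -> float:
    """Return the best accuracy gain Δ achieved by any CoT regime.

    >>> round(best_accuracy_gain(0.30, {"regime-a": 0.32, "regime-b": 0.38}), 2)
    0.08
    """
    return max(acc - baseline_accuracy for acc in cot_accuracies.values())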