Spaces:
Sleeping
Sleeping
HEAD_TEXT = """ | |
Based on the CRUXEVAL-X benchmark, we evaluated the executing and reasoning ability of different LLMs in 19 different programing languages. | |
More details about how to evalute the LLM are available in the [CRUXEVAL-X GitHub repository](https://github.com/CRUXEVAL-X/cruxeval-x). For a complete description of CRUXEVAL-X benchmark and related experimental analysis, please refer to the paper: [CRUXEval-X: A Benchmark for Multilingual Code Reasoning, Understanding and Execution](https://arxiv.org/abs/2408.13001). [![](https://img.shields.io/badge/arXiv-2408.13001-b31b1b.svg)](https://arxiv.org/abs/2408.13001) | |
**_Latest News_** π₯ | |
- [24/08/26] We release our CRUXEVAL-X benchmark, leaderboard and paper. | |
""" | |
ABOUT_TEXT = """# What is CRUXEVAL-X benchmark? | |
CRUXEVAL-X is a multilingual code reasoning, understanding and execution benchmark that focuses on code reasoning ability in different languages. | |
Its goal is to evaluate LLM's code reasoning (given input, reasoning output; and given output, reasoning input) ability. | |
# How to evaluate? | |
To facilitate evaluation on the CRUXEVAL-X benchmark, we provide the evaluation data and easy-to-use evaluation scripts in our [CRUXEVAL-X GitHub repository](https://github.com/CRUXEVAL-X/cruxeval-x). | |
Additionally, factors involving execution-based evaluation are conducted in a virtual environment to ensure evaluation security. | |
# Contact | |
If you have any questions, feel free to reach out to us at [[email protected]](mailto:[email protected]). | |
""" | |
CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results" | |
CITATION_BUTTON_TEXT = r"""@misc{xu2024cruxevalxbenchmarkmultilingualcode, | |
title={CRUXEval-X: A Benchmark for Multilingual Code Reasoning, Understanding and Execution}, | |
author={Ruiyang Xu and Jialun Cao and Yaojie Lu and Hongyu Lin and Xianpei Han and Ben He and Shing-Chi Cheung and Le Sun}, | |
year={2024}, | |
eprint={2408.13001}, | |
archivePrefix={arXiv}, | |
primaryClass={cs.AI}, | |
url={https://arxiv.org/abs/2408.13001}, | |
} | |
""" | |
ACKNOWLEDGEMENT_TEXT = """ | |
Inspired from the [π€ Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). | |
""" | |
NOTES_TEXT = """ | |
**Notes:** | |
- Evaluate using pass@1 as the evaluation metric. | |
- `Size` here is the amount of activated model weight during inference. | |
- `Average` denotes the average results of 19 different languages in a specific task. | |
- you can choose differt tasks in `β¬ Tasks`, `input reasoning` denotes given output, reasoning input, `output reasoning` denotes given input, reasoning output. | |
- `β¬ Languages` can choose languages you want to show in the leaderboard. | |
- For more explanation check the π About section. | |
""" |