HEAD_TEXT = """ Based on the DomainEval benchmark, we evaluate code generation ability of different LLMs across multiple domains. Leaderboard on GitHub: [DomainEval Leaderboard on GitHub](https://domaineval.github.io/) More details about how to evaluate the LLM are available in the [DomainEval GitHub repository](https://github.com/domaineval/DomainEval). For a complete description of DomainEval benchmark and related experimental analysis, please refer to the paper: [DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation](https://arxiv.org/abs/2408.13204). [![](https://img.shields.io/badge/arXiv-2408.13204-b31b1b.svg)](https://arxiv.org/abs/2408.13204) **_Latest News_** 🔥 - [24/08/26] We release our DomainEval benchmark, leaderboard and paper. """ CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results" CITATION_BUTTON_TEXT = r"""@misc{zhu2024domainevalautoconstructedbenchmarkmultidomain, title={DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation}, author={Qiming Zhu and Jialun Cao and Yaojie Lu and Hongyu Lin and Xianpei Han and Le Sun and Shing-Chi Cheung}, year={2024}, eprint={2408.13204}, archivePrefix={arXiv}, primaryClass={cs.AI}, url={https://arxiv.org/abs/2408.13204}, } """ ACKNOWLEDGEMENT_TEXT = """ Inspired from the [🤗 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). """ NOTES_TEXT = """ **Notes:** - Evaluate using pass@k as the evaluation metric. - `Mean` denotes the macro average results of pass@k across 6 different domains. - `Std` denotes the standard deviation of pass@k across 6 different domains. - You can choose differt pass@k in `⏬ Pass@k`. - `⏬ Domains` can choose domains you want to show in the leaderboard. """