StructEval: Deepen and Broaden Large Language Model Assessment via Structured Evaluation
Abstract
Evaluation acts as the baton directing the development of large language models. Current evaluations typically employ a single-item assessment paradigm for each atomic test objective, which struggles to discern whether a model genuinely possesses the required capabilities or merely memorizes/guesses the answers to specific questions. To address this issue, we propose a novel evaluation framework referred to as StructEval. Starting from an atomic test objective, StructEval deepens and broadens the evaluation by conducting a structured assessment across multiple cognitive levels and critical concepts, and therefore offers a comprehensive, robust, and consistent evaluation for LLMs. Experiments on three widely used benchmarks demonstrate that StructEval serves as an effective tool for resisting the risk of data contamination and reducing the interference of potential biases, thereby providing more reliable and consistent conclusions regarding model capabilities. Our framework also sheds light on the design of future principled and trustworthy LLM evaluation protocols.
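To make the structured-assessment idea concrete, below is a minimal, illustrative sketch rather than the authors' implementation: it assumes each atomic test objective is expanded into several related items spanning a ladder of cognitive levels and a handful of critical concepts. The level labels, class names, and the `expand_objective` helper are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical cognitive levels, loosely in the spirit of Bloom-style taxonomies;
# the actual levels and generation prompts used by StructEval may differ.
COGNITIVE_LEVELS = ["remember", "understand", "apply", "analyze"]

@dataclass
class TestItem:
    question: str          # item text shown to the model
    options: list[str]     # multiple-choice options
    answer: str            # gold option label, e.g. "A"
    level: str             # cognitive level this item probes
    concept: str           # critical concept this item targets

@dataclass
class StructuredObjective:
    """One atomic test objective expanded into a structured set of items."""
    seed_question: str
    items: list[TestItem] = field(default_factory=list)

def expand_objective(seed_question: str, concepts: list[str]) -> StructuredObjective:
    """Placeholder expansion step: in practice this would call an LLM or a
    retrieval pipeline to write one item per (cognitive level, concept) pair."""
    obj = StructuredObjective(seed_question=seed_question)
    for level in COGNITIVE_LEVELS:
        for concept in concepts:
            obj.items.append(TestItem(
                question=f"[{level}] About '{concept}': ...",  # generated item text goes here
                options=["A", "B", "C", "D"],
                answer="A",  # dummy gold label for the sketch
                level=level,
                concept=concept,
            ))
    return obj
```

Under this view, a single benchmark question becomes the seed of a small test suite, which is what allows the framework to separate genuine capability from memorization of, or a lucky guess on, one specific item.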
Community
- The repository provides easy-to-use scripts both for evaluating LLMs on existing StructEval benchmarks and for generating new benchmarks based on the StructEval framework: https://github.com/c-box/StructEval (a minimal illustrative scoring sketch follows after this list)
- The leaderboard shows the evaluation performance of various LLMs on StructEval: https://huggingface.co/spaces/Bowieee/StructEval_leaderboard
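To illustrate the evaluation side, the sketch below contrasts conventional item-level accuracy with an objective-level score that only credits a model when it answers every item derived from the same atomic objective correctly. This is one plausible aggregation rule consistent with the paper's motivation, not necessarily the metric implemented in the released scripts; it reuses the hypothetical `StructuredObjective`/`TestItem` structures from the sketch above, and `model_answer` stands for any callable that maps a question and its options to a predicted option label.

```python
def item_level_accuracy(objectives, model_answer):
    """Conventional single-item view: fraction of all items answered correctly."""
    items = [it for obj in objectives for it in obj.items]
    correct = sum(model_answer(it.question, it.options) == it.answer for it in items)
    return correct / len(items)

def objective_level_accuracy(objectives, model_answer):
    """Structured view: an objective counts only if *all* of its items are answered
    correctly, which penalizes lucky guesses and memorized single questions."""
    passed = sum(
        all(model_answer(it.question, it.options) == it.answer for it in obj.items)
        for obj in objectives
    )
    return passed / len(objectives)
```

A gap between the two numbers is exactly the kind of signal the abstract points to: a model that scores well item by item but poorly objective by objective is likely relying on memorized or guessed answers rather than the underlying capability.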
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- UniGen: A Unified Framework for Textual Dataset Generation Using Large Language Models (2024)
- KGPA: Robustness Evaluation for Large Language Models via Cross-Domain Knowledge Graphs (2024)
- MMEvalPro: Calibrating Multimodal Benchmarks Towards Trustworthy and Efficient Evaluation (2024)
- UBENCH: Benchmarking Uncertainty in Large Language Models with Multiple Choice Questions (2024)
- Investigating How Large Language Models Leverage Internal Knowledge to Perform Complex Reasoning (2024)