# M3KE
M3KE, or Massive Multi-Level Multi-Subject Knowledge Evaluation, is a benchmark developed to assess the knowledge acquired by large Chinese language models by evaluating their multitask accuracy in both zero- and few-shot settings. The benchmark comprises 20,477 questions spanning 71 tasks. For further information about M3KE, please consult our paper or visit our GitHub page.
## Load the data
```python
from datasets import load_dataset

# This repo ships a loading script, so recent versions of the `datasets`
# library may additionally require trust_remote_code=True.
ds = load_dataset(
    path="TJUNLP/M3KE",
    name="Computer Programming Language-Natural Sciences-Other",
)
print(ds)
"""
DatasetDict({
    test: Dataset({
        features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
        num_rows: 236
    })
    dev: Dataset({
        features: ['id', 'question', 'A', 'B', 'C', 'D', 'answer'],
        num_rows: 5
    })
})
"""
```
print(ds["test"][0])
"""
{'id': 0, 'question': '下面判断正确的是?', 'A': 'char str[10]={"china"}; 等价于 char str[10];str[]="china";', 'B': 'char *s="china"; 等价于 char *s;s="china"; ', 'C': 'char *a="china"; 等价于 char *a;*a="china";', 'D': 'char c[6]="china",d[6]="china"; 等 价 于 char c[6]=d[6]="china"; ', 'answer': ''}
"""
## Citation

```bibtex
@misc{liu2023m3ke,
    title={M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models},
    author={Chuang Liu and Renren Jin and Yuqi Ren and Linhao Yu and Tianyu Dong and Xiaohan Peng and Shuting Zhang and Jianxiang Peng and Peiyi Zhang and Qingqing Lyu and Xiaowen Su and Qun Liu and Deyi Xiong},
    year={2023},
    eprint={2305.10263},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```