Datasets:
loubnabnl/humaneval-x
Tasks:
Text Generation
Modalities:
Text
Sub-tasks:
language-modeling
Languages:
code
Size:
< 1K
License:
apache-2.0

add data
Browse files
- README.md +72 -1
- data/cpp/data/humaneval.jsonl +0 -0
- data/go/data/humaneval.jsonl +0 -0
- data/humaneval-x.py +0 -0
- data/java/data/humaneval.jsonl +0 -0
- data/js/data/humaneval.jsonl +0 -0
- data/python/data/humaneval.jsonl +0 -0
- humaneval-x.py +129 -0
README.md
CHANGED
@@ -1,3 +1,74 @@
---
annotations_creators: []
language_creators:
- crowdsourced
- expert-generated
language:
- code
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: HumanEval-X
size_categories:
- unknown
source_datasets: []
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---

# HumanEval-X

## Dataset Description

[HumanEval-X](https://github.com/THUDM/CodeGeeX) is a benchmark for evaluating the multilingual ability of code generation models. It consists of 820 high-quality, human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks.

The dataset is currently used for two tasks: code generation and code translation. For code generation, the model takes the function declaration and docstring as input and generates the solution. For code translation, the model takes the declarations in both languages plus the solution in the source language as input, and generates a solution in the target language.
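
For illustration, here is a minimal sketch of how the inputs for both tasks can be assembled from the dataset fields. The concatenation below is only one plausible layout, not an official prompt template; it assumes the subsets are index-aligned, which holds because every language subset adapts the same 164 HumanEval problems (matching on the numeric part of `task_id` is the robust alternative).

```python
from datasets import load_dataset

src = load_dataset("loubnabnl/humaneval-x", "python", split="test")
tgt = load_dataset("loubnabnl/humaneval-x", "js", split="test")

i = 0  # assumption: problems are index-aligned across the language subsets

# Code generation: the `prompt` field (declaration + docstring) is the input.
generation_input = src[i]["prompt"]

# Code translation: source declaration + source solution, followed by the
# target-language declaration that the model should complete.
translation_input = (
    src[i]["declaration"]
    + src[i]["canonical_solution"]
    + tgt[i]["declaration"]
)
```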

## Languages

The dataset contains coding problems in 5 programming languages: Python, C++, Java, JavaScript, and Go.

## Dataset Structure

To load the dataset you need to specify a subset among the **5 existing languages** `[python, cpp, go, java, js]`. By default `python` is loaded.

```python
from datasets import load_dataset

data = load_dataset("loubnabnl/humaneval-x", "js")
print(data)

DatasetDict({
    test: Dataset({
        features: ['task_id', 'prompt', 'declaration', 'canonical_solution', 'test', 'example_test'],
        num_rows: 164
    })
})
```

```python
next(iter(data["test"]))
{'task_id': 'JavaScript/0',
 'prompt': '/* Check if in given list of numbers, are any two numbers closer to each other than\n given threshold.\n >>> hasCloseElements([1.0, 2.0, 3.0], 0.5)\n false\n >>> hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n true\n */\nconst hasCloseElements = (numbers, threshold) => {\n',
 'declaration': '\nconst hasCloseElements = (numbers, threshold) => {\n',
 'canonical_solution': ' for (let i = 0; i < numbers.length; i++) {\n for (let j = 0; j < numbers.length; j++) {\n if (i != j) {\n let distance = Math.abs(numbers[i] - numbers[j]);\n if (distance < threshold) {\n return true;\n }\n }\n }\n }\n return false;\n}\n\n',
 'test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) === true)\n console.assert(\n hasCloseElements([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) === false\n )\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) === true)\n console.assert(hasCloseElements([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) === false)\n console.assert(hasCloseElements([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) === true)\n console.assert(hasCloseElements([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) === false)\n}\n\ntestHasCloseElements()\n',
 'example_test': 'const testHasCloseElements = () => {\n console.assert(hasCloseElements([1.0, 2.0, 3.0], 0.5) === false)\n console.assert(\n hasCloseElements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3) === true\n )\n}\ntestHasCloseElements()\n'}
```
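
All five language subsets share the same schema, so they can be processed uniformly. A small illustrative sketch that loads every subset and reports its size:

```python
from datasets import load_dataset

# Each config exposes the same features and a single `test` split.
for lang in ["python", "cpp", "go", "java", "js"]:
    ds = load_dataset("loubnabnl/humaneval-x", lang, split="test")
    print(f"{lang}: {ds.num_rows} problems")
```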

## Data Fields

* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (which appear in the prompt), used for evaluation.
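
To make concrete how these fields combine at evaluation time, here is a minimal sketch for the `python` subset. It uses the `canonical_solution` as a stand-in for a model completion, and assumes that the `test` field invokes its own checks (as the JavaScript example above does). A real harness would sandbox the execution rather than call `exec` directly.

```python
from datasets import load_dataset

ds = load_dataset("loubnabnl/humaneval-x", "python", split="test")
ex = ds[0]

# declaration + completion + tests forms a self-contained program;
# the canonical solution stands in for a model completion here.
program = ex["declaration"] + ex["canonical_solution"] + ex["test"]

# WARNING: exec runs arbitrary code; real evaluation should isolate this
# in a separate process with time and memory limits.
exec(program)  # raises (e.g. AssertionError) if a test fails
```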
## Data Splits

Each subset has one split: test.

## Citation Information

Refer to https://github.com/THUDM/CodeGeeX.
data/cpp/data/humaneval.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/go/data/humaneval.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/humaneval-x.py
ADDED
File without changes

data/java/data/humaneval.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/js/data/humaneval.jsonl
ADDED
The diff for this file is too large to render. See raw diff

data/python/data/humaneval.jsonl
ADDED
The diff for this file is too large to render. See raw diff
humaneval-x.py
ADDED
@@ -0,0 +1,129 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""HumanEval-X dataset."""


import json

import datasets


_DESCRIPTION = """\
HumanEval-X is a benchmark for the evaluation of the multilingual ability of code generative models. \
It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks.
"""

_HOMEPAGE = "https://github.com/THUDM/CodeGeeX"

# Every language subset exposes the same string features.
_FEATURES = ["task_id", "prompt", "declaration", "canonical_solution", "test", "example_test"]


def get_url(name):
    # Each language subset ships a single JSONL file that serves as the test split.
    urls = {"test": f"data/{name}/data/humaneval.jsonl"}
    return urls


def split_generator(dl_manager, name):
    downloaded_files = dl_manager.download(get_url(name))
    return [
        datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": downloaded_files["test"]}),
    ]


class HumanEvalXConfig(datasets.BuilderConfig):
    """BuilderConfig for a language subset of HumanEval-X."""

    def __init__(self, name, description, features, **kwargs):
        super(HumanEvalXConfig, self).__init__(version=datasets.Version("2.1.0", ""), **kwargs)
        self.name = name
        self.description = description
        self.features = features


class HumanEvalX(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        HumanEvalXConfig(
            name="python",
            description="Python HumanEval",
            features=_FEATURES,
        ),
        HumanEvalXConfig(
            name="cpp",
            description="C++ HumanEval",
            features=_FEATURES,
        ),
        HumanEvalXConfig(
            name="go",
            description="Go HumanEval",
            features=_FEATURES,
        ),
        HumanEvalXConfig(
            name="java",
            description="Java HumanEval",
            features=_FEATURES,
        ),
        HumanEvalXConfig(
            name="js",
            description="JavaScript HumanEval",
            features=_FEATURES,
        ),
    ]
    DEFAULT_CONFIG_NAME = "python"

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "task_id": datasets.Value("string"),
                    "prompt": datasets.Value("string"),
                    "declaration": datasets.Value("string"),
                    "canonical_solution": datasets.Value("string"),
                    "test": datasets.Value("string"),
                    "example_test": datasets.Value("string"),
                }
            ),
            homepage=_HOMEPAGE,
        )

    def _split_generators(self, dl_manager):
        # All configs share the same directory layout, so a single code path
        # covers every language subset.
        return split_generator(dl_manager, self.config.name)

    def _generate_examples(self, filepath):
        # Yield one example per JSONL line, keyed by its position in the file.
        with open(filepath, encoding="utf-8") as f:
            for key, line in enumerate(f):
                row = json.loads(line)
                yield key, {
                    "task_id": row["task_id"],
                    "prompt": row["prompt"],
                    "declaration": row["declaration"],
                    "canonical_solution": row["canonical_solution"],
                    "test": row["test"],
                    "example_test": row["example_test"],
                }