Add `task_id` column
#4 by dennlinger

Files changed:
- README.md +2 -1
- lbpp/test.csv +0 -0
README.md CHANGED
```diff
@@ -13,8 +13,9 @@ Annotators were instructed to come up with original solution that did not exist
 
 ### Dataset Fields
 This dataset contains the following fields:
+- `task_id`: a unique identifier in the format `lbpp/{idx}`, consistent with HumanEval and MBPP
 - `language`: denotes the programming language, for this version `python` in all cases
-- `title`: unique identifier
+- `title`: unique identifier, abstract problem title
 - `instruction`: a prompt defining unambiguously the task to solve
 - `completion`: a proposed gold solution
 - `signature`: the exact function signature of the proposed gold solution. As this is used in the unit tests, depending how you wish to prompt the model it might be necessary to include this
```
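The field schema above can be illustrated with a hypothetical record. All values below are made up for illustration (they are not drawn from the actual dataset), and `is_valid_task_id` is a helper sketched here to show the `lbpp/{idx}` identifier format, not part of the dataset or its tooling.

```python
# Hypothetical record matching the fields listed in the README diff.
# Values are illustrative only.
sample = {
    "task_id": "lbpp/0",        # unique identifier, format `lbpp/{idx}`
    "language": "python",       # always "python" in this version
    "title": "sum_of_squares",  # abstract problem title
    "instruction": "Write a function returning the sum of squares of a list of integers.",
    "completion": "def sum_of_squares(nums):\n    return sum(n * n for n in nums)",
    "signature": "def sum_of_squares(nums: list[int]) -> int:",
}

def is_valid_task_id(task_id: str) -> bool:
    """Check the `lbpp/{idx}` format: a `lbpp` prefix plus a numeric index."""
    prefix, _, idx = task_id.partition("/")
    return prefix == "lbpp" and idx.isdigit()

print(is_valid_task_id(sample["task_id"]))  # prints: True
```

Identifiers of this shape line up with the `{dataset}/{idx}` convention used by HumanEval (`HumanEval/0`) and MBPP task IDs, which is the stated motivation for the new column.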
lbpp/test.csv CHANGED
The diff for this file is too large to render. See raw diff.