Incorrect HumanEval problems after decoding and saving them to files

#4
by ahmadhatahet - opened

Hello,

I am trying to compile the test cases for Python and Java.

I followed your example here and replaced LANG = "lua" with "py" once, then with "java".
After running the script, the generated problems seemed incorrect.

For example, the file "HumanEval_103_rounded_avg.py" is totally different from the one in your GitHub repo here.
Not only that, but the decoded problem in the .py file also has an incorrect solution.

The same goes for "HumanEval_106_f" in both the Java and Python files.
Additionally, some solutions are not only logically wrong but also have incorrect syntax, such as missing parentheses or even a missing return statement.

From your introduction, you are using "... little compilers to translate them to other languages ...".
Maybe I am doing something wrong.
However, the generated data is unusable for my use case, which involves testing runnable code.

Could you take a look at it?

Here is the code used to generate the Java problems; just switch to LANG = "py" for Python:

```python
from IPython.display import JSON
import datasets
from transformers import AutoTokenizer, AutoModelForCausalLM
from pathlib import Path

LANG = "java"
MODEL_NAME = "Salesforce/codegen-350M-multi"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).half().cuda()
problems = datasets.load_dataset("nuprl/MultiPL-E", f"humaneval-{LANG}")

def stop_at_stop_token(decoded_string, problem):
    """
    Truncates the output at stop tokens, taking care to skip the prompt
    which may have stop tokens.
    """
    min_stop_index = len(decoded_string)
    for stop_token in problem["stop_tokens"]:
        stop_index = decoded_string.find(stop_token)
        if stop_index != -1 and stop_index > len(problem["prompt"]) and stop_index < min_stop_index:
            min_stop_index = stop_index
    return decoded_string[:min_stop_index]

path_caching = Path().cwd() / "MultiPL-E_problems" / LANG
path_caching.mkdir(exist_ok=True, parents=True)

for problem in problems["test"]:
    filename = path_caching / (problem["name"] + "." + LANG)
    if filename.is_file():
        continue
    filename.touch()
    input_ids = tokenizer(
        problem["prompt"],
        return_tensors="pt",
    ).input_ids.cuda()
    generated_ids = model.generate(
        input_ids, max_length=1024, pad_token_id=tokenizer.eos_token_id + 2
    )
    truncated_string = stop_at_stop_token(tokenizer.decode(generated_ids[0]), problem)
    with open(filename, "w") as f:
        print(f"Created {filename}")
        f.write(truncated_string)
        f.write("\n")
        f.write(problem["tests"])
```

Kind Regards

Northeastern University Programming Research Lab org

Hi @ahmadhatahet!
I think you are misinterpreting what MultiPL-E is supposed to do and how it is meant to be used.
The compilers only translate the function signature, docstring, and tests. The function signature and docstring are then used as a prompt to evaluate large language models: the models complete the function, and each completion is verified against the test suite. You can read our paper for more information: https://ieeexplore.ieee.org/abstract/document/10103177
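For concreteness, here is a minimal sketch of that verification step. It assumes a Python target and the dataset fields already used in your script (`prompt`, `name`, `tests`), and it assumes `completion` is the model's output with the prompt already stripped. `evaluate_completion` is a hypothetical helper for illustration, not part of MultiPL-E; the real harness additionally sandboxes execution and supports all the other target languages.

```python
import subprocess
import tempfile
from pathlib import Path

def evaluate_completion(problem: dict, completion: str, timeout: float = 30.0) -> bool:
    # Hypothetical helper: assemble prompt + completion + tests into one program
    # and run it. The completion counts as correct only if the tests exit cleanly.
    program = problem["prompt"] + completion + "\n" + problem["tests"]
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / (problem["name"] + ".py")
        path.write_text(program)
        try:
            result = subprocess.run(
                ["python3", str(path)], capture_output=True, timeout=timeout
            )
        except subprocess.TimeoutExpired:
            return False
    return result.returncode == 0
```

The files your script writes are exactly these assembled programs. The difference is that a completion is only counted as correct when such a run succeeds, so imperfect or even syntactically broken completions from the model are expected and simply count as failures.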

Thank you, the paper clarifies everything.
Very nice work.

ahmadhatahet changed discussion status to closed
