---
license: mit
task_categories:
- text-generation
tags:
- code
- dataset
size_categories:
- n<1K
language:
- en
pretty_name: CodeEval
---
# Dataset Card for Object-Oriented Programming
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/alphadl/OOP-eval)
- **Paper:** [Object-Oriented Programming Evaluation Benchmark for LLMs](https://arxiv.org/abs/2401.06628)
### Dataset Summary
The OOP benchmark consists of 431 instances spanning three difficulty levels: Simple-level OOP, Moderate-level OOP, and Difficult-level OOP.
### Supported Tasks and Leaderboards
### Languages
The Object-Oriented Programming problems are written in Python, with English natural-language text in comments and docstrings.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("oop")
DatasetDict({
    test: Dataset({
        features: ['task_id', 'question', 'canonical_solution', 'test_list', 'test_function', 'entry_point', 'test_matching', 'test_match_function'],
        num_rows: 431
    })
})
```
### Data Instances
#### OOP benchmark
```
{
    'task_id': 'OOP/0',
    'question': 'First, write a **WDS** class using the Python language. Then, within the WDS class, create a public function called **without_duplicates** to implement finding the length of the longest substring in a given string **s** that does not contain any duplicate characters.',
    'test_function': 'def test_run(content1):\n return WDS().without_duplicates(content1)',
    'test_list': [
        'assert candidate("abcabcbb")==3',
        'assert candidate("bbbbb")==1',
        'assert candidate("pwwkew")==3'
    ],
    'entry_point': 'test_run',
    'test_matching': 'assert candidate([["class WDS", "def without_duplicates"]]) == True',
    'test_match_function': 'def matching_function(content):\n def run_match(text):\n for task in text:\n if task not in str_content:\n return False\n return True\n len_cont = len(content)\n if len_cont==1 and run_match(content[0]) == True:\n return True\n elif (len_cont==2 and run_match(content[0]) == True) or (len_cont==2 and run_match(content[1]) == True):\n return True\n else:\n return False'
}
```
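To make the roles of these fields concrete, here is a minimal sketch of how a candidate solution could be scored against `test_function`, `test_list`, and `entry_point` for the instance above. This is not the official evaluation harness, and the `solution` string is a hypothetical model output:

```python
# Hypothetical model-generated solution to the OOP/0 instance.
solution = '''
class WDS:
    def without_duplicates(self, s):
        # Sliding-window scan for the longest substring
        # without repeated characters.
        seen = {}
        start = best = 0
        for i, ch in enumerate(s):
            if ch in seen and seen[ch] >= start:
                start = seen[ch] + 1
            seen[ch] = i
            best = max(best, i - start + 1)
        return best
'''

# Fields copied from the data instance above.
test_function = "def test_run(content1):\n    return WDS().without_duplicates(content1)"
entry_point = "test_run"
test_list = [
    'assert candidate("abcabcbb")==3',
    'assert candidate("bbbbb")==1',
    'assert candidate("pwwkew")==3',
]

env = {}
exec(solution, env)       # define the WDS class
exec(test_function, env)  # define the test entry point (sees WDS via env)
candidate = env[entry_point]
for check in test_list:   # each assert calls candidate(...)
    exec(check, {"candidate": candidate})
print("all tests passed")
```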
### Data Fields
- `task_id`: identifier for the data sample
- `question`: description of the programming task
- `test_function`: wrapper function used to run the candidate solution
- `test_list`: list of tests verifying the solution's behavior
- `entry_point`: entry point for the test
- `test_matching`: test verifying that the required class and function definitions appear in the generated code
- `test_match_function`: matching function implementing the structural check
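The structural check is worth illustrating. The sketch below is a simplified stand-in for the dataset's `matching_function` (which operates on the raw generated text): the generated code must contain every required snippet from at least one group listed in `test_matching`:

```python
# Simplified sketch of the structural match; matches_required_structure
# is a hypothetical helper, not a field of the dataset.
def matches_required_structure(generated_code, required_groups):
    """True if every snippet in at least one group appears verbatim."""
    return any(
        all(snippet in generated_code for snippet in group)
        for group in required_groups
    )

generated = "class WDS:\n    def without_duplicates(self, s):\n        ..."
required = [["class WDS", "def without_duplicates"]]
print(matches_required_structure(generated, required))  # True
```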
### Data Splits
The OOP dataset only consists of a test split with 431 samples.
## Dataset Creation
See Section 3.2 of the original [paper](https://arxiv.org/abs/2401.06628).
### Citation Information
```
@inproceedings{wang2024oop,
    title={OOP: Object-Oriented Programming Evaluation Benchmark for Large Language Models},
    author={Shuai Wang and Liang Ding and Li Shen and Yong Luo and Bo Du and Dacheng Tao},
    booktitle={Findings of the Association for Computational Linguistics: ACL 2024},
    year={2024},
    url={https://arxiv.org/abs/2401.06628},
}
```
### Contributions
Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.