---
license: cc-by-nc-nd-4.0
task_categories:
- text-generation
language:
- en
tags:
- croissant
- code
size_categories:
- 10M<n<100M
---
|
# Dataset Card for SimCoPilot
|
|
|
SimCoPilot is a benchmark for evaluating the ability of LLMs to act as "copilot"-style, interactive coding assistants.
|
|
|
## Dataset Details |
|
|
|
### Dataset Description |
|
|
|
SimCoPilot is a benchmark for evaluating the ability of LLMs to act as "copilot"-style, interactive coding assistants. It tests how well models can add and complete code in complex real-world software environments and analyzes how they handle different code dependencies and logic complexities.
|
|
|
- **Curated by:** Mingchao Jiang |
|
- **Funded by [optional]:** [More Information Needed] |
|
- **Shared by [optional]:** [More Information Needed] |
|
- **Language(s) (NLP):** English (code in Python and Java)
|
- **License:** CC BY-NC-ND 4.0 |
|
|
|
### Dataset Sources [optional] |
|
|
|
The source code and supporting material can be found in the GitHub repository linked below.
|
|
|
- **Repository:** https://github.com/mj33rice/SimCoPilot |
|
- **Paper [optional]:** [More Information Needed] |
|
- **Demo [optional]:** [More Information Needed] |
|
|
|
## Uses |
|
|
|
SimCoPilot is intended for program synthesis, benchmarking and research, and the development of new coding tools.
|
|
|
### Direct Use |
|
|
|
- Program synthesis
- Benchmarking and research
- Development of new coding tools
|
|
|
|
|
### Out-of-Scope Use |
|
|
|
- Commercial purposes (the dataset is released under CC BY-NC-ND 4.0)
- Inferring personal information
|
|
|
|
|
## Dataset Structure |
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
The dataset comprises 11 columns, detailed as follows (a minimal loading sketch appears after the field descriptions):
|
|
|
- `task_type`: Identifies the task category, with options including `infilling_java`, `completion_java`, `infilling_python`, and `completion_python`. |
|
|
|
- `code_task`: Describes the nature of the coding tasks. For Java, the tasks involve advanced academic projects focusing on text processing, data structures (such as AVL, B-tree, M-Tree), and statistical algorithms. Python tasks span from notebook scripts to object-oriented programming, covering areas like linear programming, computer vision, and reinforcement learning. |
|
|
|
- `start_line` and `end_line`: Specify the beginning and ending line numbers of the code segment targeted for completion. |
|
|
|
- `before`, `between`, and `after`: Capture the code preceding the target code block, the ground truth of the target code block, and the code following the target block, respectively. |
|
|
|
- `reason_categories_output`: A collection of dictionaries detailing the `usage_line` for logical components within the target code block, including elements like `If Body`, `If Condition`, `Loop Body`, etc. |
|
|
|
- `horizon_categories_output`: Documents the programming constructs such as `Global_Variable`, `Function`, `Class`, along with their `define_line` and `usage_line`. |
|
|
|
- `reason_freq_analysis` and `horizon_freq_analysis`: Dictionaries that tally the occurrences of each category in `reason_categories_output` and `horizon_categories_output`, respectively.
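
As a quick orientation, the sketch below shows how these fields might be accessed with the Hugging Face `datasets` library. The repository id `mj33rice/SimCoPilot` and the split name `train` are illustrative assumptions rather than confirmed identifiers, and depending on how records are serialized, the `*_categories_output` fields may arrive as JSON strings that need `json.loads` before use.

```python
# Minimal sketch, assuming the data is published on the Hugging Face Hub.
# The repository id and split name below are illustrative placeholders.
from datasets import load_dataset

ds = load_dataset("mj33rice/SimCoPilot", split="train")

example = ds[0]
print(example["task_type"])                        # e.g. "infilling_python"
print(example["start_line"], example["end_line"])  # target block boundaries

# The full file context is the code before, inside, and after the target block.
context = example["before"] + example["between"] + example["after"]

# Restrict to one task family, e.g. Python infilling tasks.
python_infill = ds.filter(lambda row: row["task_type"] == "infilling_python")
print(len(python_infill))
```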
|
|
|
## Dataset Creation |
|
|
|
### Curation Rationale |
|
|
|
Currently, the most widely used benchmarks for checking the ability of AI models to perform program synthesis ("AI-for-code") consist of a detailed English description of a concise, self-contained program to synthesize, along with a few test cases to check the correctness of the synthesized code.
|
While such benchmarks are useful, they match one particularly narrow use case, where the goal is to synthesize a relatively short, complete, standalone program. |
|
|
|
We introduce SimCoPilot, a novel benchmark crafted to simulate the ability of an AI such as a large language model (LLM) to perform as a "copilot"-style, interactive coding assistant.
|
|
|
### Source Data |
|
|
|
Source code from academic Java and Python repositories.
|
|
|
#### Data Collection and Processing |
|
|
|
Emails were sent to faculty and students in the Rice University Computer Science, Electrical Engineering, and Statistics departments, inviting them to contribute Java and Python code from private repositories for AI-for-code research.
From the contributed code, approximately 11,000 lines in total, 1,163 code generation tasks were curated to ensure a diverse and representative sample of real-world code.
|
|
|
|
|
|
|
#### Who are the source data producers? |
|
|
|
The dataset includes Java and Python code contributed primarily by students and faculty in Rice University's Computer Science, Electrical Engineering, and Statistics departments, representing a community of academic programmers and developers.
|
|
|
### Annotations [optional] |
|
|
|
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> |
|
|
|
#### Annotation process |
|
|
|
The 1,163 programming tasks were created from eight Java repositories and seven Python repositories, totaling nearly 11,000 lines of code.
|
|
|
Our team went through this code, generating both infill and completion tasks.
To create an infill task, the annotator picks a meaningful starting point for the AI-for-code model to begin writing code (at the beginning of the boolean condition of an if statement, or at the beginning of the body of a for loop, for example), then marks the rest of that particular code block for deletion, to be re-created by the AI-for-code model.
|
|
|
In the case of an if condition, the entire boolean predicate would be marked for deletion. |
|
In the case of a for-loop body, the entire body would be marked. |
|
|
|
A completion task is created in much the same way, but the code for the remainder of the method or function is marked for deletion. |
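
To make the task format concrete, the sketch below shows one way an infill example could be posed to a model: the code before and after the deleted block is provided as context, and the model's output is compared against the ground-truth `between` field. The prompt wording and the helper names are illustrative assumptions, not the benchmark's exact evaluation protocol.

```python
# Illustrative sketch only: this prompt template is an assumption,
# not the exact prompt used by SimCoPilot.
def build_infill_prompt(example: dict) -> str:
    """Pose an infill task: show the surrounding code, ask for the missing block."""
    return (
        "Complete the missing code between the two fragments below.\n\n"
        "### Code before the missing block:\n"
        f"{example['before']}\n\n"
        "### Code after the missing block:\n"
        f"{example['after']}\n\n"
        "### Missing block:\n"
    )

# The ground truth for the task is the deleted block itself.
def reference_solution(example: dict) -> str:
    return example["between"]
```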
|
|
|
#### Who are the annotators? |
|
|
|
A team of graduate students from Rice University with intermediate to advanced programming skills and 5-10 years of programming experience each.
|
|
|
#### Personal and Sensitive Information |
|
|
|
N/A |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
- **Sample bias:** Contributions come mainly from students and faculty at a single institution (Rice University), which could reflect a biased sample of coding styles, proficiency levels, and problem-solving approaches.
- **Overfitting risk:** Models trained on this dataset might perform well in similar academic or controlled environments but may not generalize well to more diverse coding tasks.
|
|
|
### Recommendations |
|
|
|
- **Diversify the data sources:** Expand the dataset to include code from a broader range of contributors beyond the academic circle of Rice University, for example by soliciting code from developers in different industries, countries, and cultural backgrounds, to improve the dataset's diversity and representativeness.
- **Cross-validate with external datasets:** Use external datasets for cross-validation of AI models trained with this dataset. This helps in assessing a model's performance and generalizability to other coding environments and tasks.
|
|
|
## Citation [optional] |
|
|
|
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> |
|
|
|
**BibTeX:** |
|
|
|
[More Information Needed] |
|
|
|
**APA:** |
|
|
|
[More Information Needed] |
|
|
|
## Glossary [optional] |
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> |
|
|
|
[More Information Needed] |
|
|
|
## More Information [optional] |
|
|
|
[More Information Needed] |
|
|
|
## Dataset Card Authors [optional] |
|
|
|
[More Information Needed] |
|
|
|
## Dataset Card Contact |
|
|
|
[More Information Needed] |