    dtype: string
  splits:
  - name: cross_file_first
    num_bytes: 504528431
    num_examples: 8033
  - name: cross_file_random
    num_bytes: 467242455
    num_examples: 7618
  - name: in_file
    num_bytes: 488999100
    num_examples: 7910
  download_size: 472994299
  dataset_size: 1460769986
license: cc
task_categories:
- text-generation
language:
- en
tags:
- code
---

# RepoBench v1.1 (Java)

## Introduction

This dataset presents the **Java** portion of [RepoBench](https://arxiv.org/abs/2306.03091) v1.1 (ICLR 2024). The data encompasses a collection from GitHub, spanning the period from **October 6th to December 31st, 2023**. With a commitment to data integrity, we've implemented a deduplication process based on file content against the Stack v2 dataset (coming soon), aiming to mitigate data leakage and memorization concerns.

## Resources and Links

- [Paper](https://arxiv.org/abs/2306.03091)
- [GitHub](https://github.com/Leolty/repobench)
- [Dataset Introduction](https://github.com/Leolty/repobench/blob/main/data/README.md)

## FAQs

- **Q:** What do the features in the dataset mean?

  **A:** Imagine you're coding and you want to write the next line of your code. The dataset provides the following information (a quick way to inspect these fields is sketched after this list):
  - `repo_name` (string): the name of the repository
  - `file_path` (string): the path of the current file
  - `context` (list): the cross-file code snippets that might be helpful for writing the next line:
    - `identifier` (string): the identifier of the code snippet
    - `path` (string): the path of the code snippet
    - `snippet` (string): the code snippet
  - `import_statement` (string): the import statement of the current file
  - `cropped_code` (string): the cropped code of the current file (up to the previous 120 lines)
  - `all_code` (string): the entire code of the current file (not cropped)
  - `next_line` (string): the next line of the code (this serves as the target)
  - `gold_snippet_index` (int): the index of the gold snippet in the context (the snippet used in the next line; provided for reference only, you should not use it for next-line prediction)
  - `created_at` (string): the creation time of the repository
  - `level` (string): the level of next-line completion, measured by the number of tokens in the whole prompt (including all the context, import statement, cropped code, and some necessary separator tokens)

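  For orientation, here is a minimal sketch (assuming the `datasets` library is installed) that loads one split and prints a few of these fields for a single data point:

  ```python
  from datasets import load_dataset

  # load one split of the Java dataset and take the first data point
  dataset = load_dataset("tianyang/repobench_java_v1.1", split="cross_file_first")
  sample = dataset[0]

  print(sample["repo_name"], sample["file_path"], sample["level"])
  print(f"{len(sample['context'])} cross-file snippets in context")
  print("target next line:", sample["next_line"])
  ```
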
- **Q:** How is the level defined?

  **A:** The level is determined by the number of tokens in the whole prompt (including all the context, import statement, cropped code, and some necessary separator tokens). The token count is calculated with the GPT-4 tokenizer via [tiktoken](https://github.com/openai/tiktoken). The following table shows the level definition:

  | Level | Prompt Length (Number of Tokens) |
  |-------|----------------------------------|
  | 2k    | 640 - 1,600      |
  | 4k    | 1,600 - 3,600    |
  | 8k    | 3,600 - 7,200    |
  | 12k   | 7,200 - 10,800   |
  | 16k   | 10,800 - 14,400  |
  | 24k   | 14,400 - 21,600  |
  | 32k   | 21,600 - 28,800  |
  | 64k   | 28,800 - 57,600  |
  | 128k  | 57,600 - 100,000 |

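  As an illustration (a sketch, assuming `tiktoken` is installed; `level_of` is a hypothetical helper, not part of the dataset tooling), the bucketing can be reproduced like this:

  ```python
  import tiktoken

  # GPT-4 tokenizer, as used for the level definition
  enc = tiktoken.encoding_for_model("gpt-4")

  # (level, lower bound, upper bound) in prompt tokens, from the table above
  LEVELS = [
      ("2k", 640, 1_600), ("4k", 1_600, 3_600), ("8k", 3_600, 7_200),
      ("12k", 7_200, 10_800), ("16k", 10_800, 14_400), ("24k", 14_400, 21_600),
      ("32k", 21_600, 28_800), ("64k", 28_800, 57_600), ("128k", 57_600, 100_000),
  ]

  def level_of(prompt: str) -> str:
      """Map a fully constructed prompt to its level bucket."""
      n = len(enc.encode(prompt))
      for name, lo, hi in LEVELS:
          if lo <= n < hi:
              return name
      return "out-of-range"  # shorter/longer prompts fall outside the buckets

  print(level_of("// Repo Name: example\n" + "int x = 0;\n" * 200))
  ```
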
- **Q:** What do the different splits mean?

  **A:** The dataset is split into three parts:
  - `cross_file_first`: the next line of code uses content from a cross-file code snippet, and it is the first usage of that snippet within the current file.
  - `cross_file_random`: the next line of code uses content from a cross-file code snippet, and it is NOT the first usage of that snippet within the current file.
  - `in_file`: the next line of code does not use content from a cross-file code snippet.

- **Q:** How to construct the prompt for next line prediction?

  **A:** We hereby provide the official implementation for constructing prompts. Please note that the methods described below are not necessarily the optimal way of construction. Reordering, retrieval augmentation, or employing different cropping/construction techniques could potentially lead to varying degrees of improvement. Ensure that your model evaluations are conducted in a fair manner.

  ```python
  import re

  def construct_prompt(
      data: dict,
      language: str = "python",
      tokenizer=None,
      max_token_nums: int = 15800
  ) -> str:
      """
      Construct the prompt for next line prediction.

      :param data: data point from the dataset
      :param language: the language of the code
      :param tokenizer: the tokenizer of the evaluation model
      :param max_token_nums: the maximum number of tokens constraint for the prompt

      :return: the constructed prompt
      """

      # comment symbol for different languages
      comment_symbol = "#" if language == "python" else "//"

      # construct the cross-file prompt and in-file prompt separately
      # cross-file prompt
      cross_file_prompt = f"{comment_symbol} Repo Name: {data['repo_name']}\n"

      for snippet in data['context']:
          cross_file_prompt += f"{comment_symbol} Path: {snippet['path']}\n{snippet['snippet']}" + "\n\n"

      # in-file prompt
      in_file_prompt = f"{comment_symbol} Path: {data['file_path']}\n{data['import_statement']}\n{data['cropped_code']}\n"

      # if a tokenizer and max_token_nums are given, truncate the cross-file
      # prompt to meet the constraint
      if tokenizer is not None and max_token_nums is not None:

          cross_file_prompt_token_nums = len(tokenizer.encode(cross_file_prompt))
          in_file_prompt_token_nums = len(tokenizer.encode(in_file_prompt))

          exceed_token_nums = cross_file_prompt_token_nums + in_file_prompt_token_nums - max_token_nums

          if exceed_token_nums > 0:
              # split the cross-file prompt into lines
              cross_file_prompt_lines = cross_file_prompt.split("\n")
              # drop lines from the end until the token excess falls below 0
              for i in range(len(cross_file_prompt_lines) - 1, -1, -1):
                  exceed_token_nums -= len(tokenizer.encode(cross_file_prompt_lines[i]))
                  if exceed_token_nums < 0:
                      break

              # join the remaining lines back together
              cross_file_prompt = "\n".join(cross_file_prompt_lines[:i + 1]) + "\n\n"

      # combine the cross-file prompt and in-file prompt
      prompt = cross_file_prompt + in_file_prompt

      # normalize some empty lines
      prompt = re.sub(r'\n{4,}', '\n\n', prompt)

      return prompt
  ```

- **Q:** How to load the dataset?

  **A:** You can simply use the following code to load the dataset:

  ```python
  from datasets import load_dataset

  dataset = load_dataset("tianyang/repobench_java_v1.1")
  ```

  To construct the prompt for next line prediction, you can refer to the official implementation provided in the previous question and use the `construct_prompt` function, for example:

  ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM

  tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")
  model = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-coder-1.3b-base")

  prompt = construct_prompt(dataset['cross_file_first'][0], language="java", tokenizer=tokenizer, max_token_nums=15800)
  ```

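  From here, generating a next-line prediction is straightforward; the following is a minimal sketch (greedy decoding and the 64-token budget are illustrative choices, not the official evaluation setup):

  ```python
  import torch

  # tokenize the constructed prompt and generate a short continuation
  inputs = tokenizer(prompt, return_tensors="pt")
  with torch.no_grad():
      outputs = model.generate(
          **inputs,
          max_new_tokens=64,    # we only need a single line
          do_sample=False,      # greedy decoding for reproducibility
          pad_token_id=tokenizer.eos_token_id,
      )

  # keep only the newly generated tokens and cut at the first newline
  generated = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
  predicted_next_line = generated.split("\n")[0]
  print(predicted_next_line)
  ```
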
- **Q:** How often will the dataset be updated?

  **A:** We plan to update the dataset every three months, but there might be slight delays considering the time required for data crawling and our own schedules. If you require updated data, please feel free to contact us, and we can coordinate the timing and expedite the process.

- **Q:** What models should I use to evaluate the dataset?

  **A:** RepoBench is designed to evaluate base models, not those that have been instruction fine-tuned. Please use base models for evaluation, and score predictions against `next_line` (see the sketch below).

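  For reference, the paper reports exact match (EM) and edit similarity (ES) over the predicted next line. A minimal exact-match check (a sketch; `predicted_next_line` comes from the generation example above, and `exact_match` is a hypothetical helper, not the official evaluation script) could look like:

  ```python
  def exact_match(prediction: str, target: str) -> bool:
      """Compare a predicted next line against the gold next line,
      ignoring leading and trailing whitespace."""
      return prediction.strip() == target.strip()

  # score one data point against its gold `next_line`
  sample = dataset["cross_file_first"][0]
  print(exact_match(predicted_next_line, sample["next_line"]))
  ```
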
- **Q:** I am training a new model, but its knowledge cutoff date is after this dataset's. Can you provide me with the latest data?

  **A:** Sure! We are happy to provide you with the latest data (even customized data with specific requirements). Please feel free to contact us.

- **Q:** Can I opt out?

  **A:** Yes, you can opt your repository out of the dataset. Please check [Am I in RepoBench?](https://huggingface.co/spaces/tianyang/in-the-repobench); we will upload the raw repository information we crawled at least 15 days before dataset creation and release. We will respect your decision and remove your repository from the dataset if you opt out.

## Citation

If you find RepoBench useful in your research, please consider citing the paper using the following BibTeX entry:

```bibtex
@misc{liu2023repobench,
      title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
      author={Tianyang Liu and Canwen Xu and Julian McAuley},
      year={2024},
      url={https://arxiv.org/abs/2306.03091},
      booktitle={International Conference on Learning Representations}
}
```

Your interest and contributions to RepoBench are immensely valued. Happy coding! 🚀