---
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- en
tags:
- code
- benchmark
pretty_name: InfiBench
size_categories:
- n<1K
---

## InfiBench (Data Part)

Note: For the full description, please visit our main website: https://infi-coder.github.io/infibench.

This repo contains all data of our code LLM evaluation dataset InfiBench. `suite_v2.1.yaml` lists the benchmark cases, and `suite_v2.1_data.csv` records all per-case data (prompt, reference answer, and evaluation metric). The data can be consumed directly by our automatic evaluation tool to score any model's responses.
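As a minimal sketch (not part of the evaluation tool itself), the two files can be fetched and inspected with `huggingface_hub`, PyYAML, and pandas; the CSV column names are not assumed here, so the snippet only reports whatever header the file ships with.

```python
# Sketch only: download the suite definition and the data table from this
# repo and take a first look. No particular CSV schema is assumed below.
import pandas as pd
import yaml
from huggingface_hub import hf_hub_download

repo_id = "llylly001/InfiBench"
suite_path = hf_hub_download(repo_id, "suite_v2.1.yaml", repo_type="dataset")
data_path = hf_hub_download(repo_id, "suite_v2.1_data.csv", repo_type="dataset")

with open(suite_path) as f:
    suite = yaml.safe_load(f)      # case list of the benchmark suite

data = pd.read_csv(data_path)      # prompt / reference answer / metric per case
print(len(data), list(data.columns))
```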

### Dataset Card
---

**Name**: InfiBench

**Description**: Evaluation Dataset for the Question-Answering Capabilities of Code Large Language Models

**URL**: https://infi-coder.github.io/infibench (all info) / https://huggingface.co/datasets/llylly001/InfiBench (data part)

**Version**: 2.1

**License**: Creative Commons Attribution Share Alike 4.0

**Citation**: 

```
@misc{infibench,
    title={InfiBench: Evaluating the Question-Answering Capabilities of Code Large Language Models},
    howpublished = "\url{https://infi-coder.github.io/infibench}",
    author={InfiBench},
    year={2024}
}
```

**DOI**: doi:10.57967/hf/2474

**Data Collection**: 

- Data source is downloaded from the publicly available StackExchange archive (https://archive.org/download/stackexchange, https://ia904700.us.archive.org/view_archive.php?archive=/6/items/stackexchange/stackoverflow.com-Posts.7z). Especially, we use the preprocessed version from https://huggingface.co/datasets/mikex86/stackoverflow-posts where all posts are formatted in Markdown text.

- We keep only the questions that have at least three positively voted answers and an officially accepted answer, which leaves 1,090,238 questions. From this roughly one million questions, we further keep those that are frequently viewed and relatively new (a simplified sketch of this filtering appears after this list).

- Utilizing the mandatory question tags of these questions, we then manually construct a tag tree that covers the 200 most frequent tags, enabling us to identify the top programming languages and areas for 14,330 out of these 17,402 questions. We exclude 6 programming languages that either describe data or are domain-specific: JSON, regex, Markdown, YAML, CSV, and SQL. As a result, we compile 13,854 questions that serve as the initial seed set.

- We randomly sampled from the initial seed set and then recruited five domain experts inside our company to create the benchmark from the sampled seed set, each in charge of one area. The annotation process is composed of three steps: (1) Question Selection and Type Annotation; (2) Prompt Paraphrasing; and (3) Correctness Criterion Annotation.
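The following is a rough, non-authoritative sketch of the seed-question filtering described above, not our actual pipeline. The dataset identifier comes from the source linked earlier; the split name and the field names (`PostTypeId`, `AcceptedAnswerId`, `ViewCount`) follow the StackExchange Posts schema and may not match the preprocessed dataset exactly, the view-count threshold is purely illustrative, and the checks on answer votes and creation date are omitted.

```python
# Rough sketch of the seed filtering; field names and split are assumptions.
# The "at least three positively voted answers" check would require joining
# against answer posts and is left out here.
from datasets import load_dataset

posts = load_dataset("mikex86/stackoverflow-posts", split="train", streaming=True)

def is_seed_candidate(post):
    return (
        post.get("PostTypeId") == 1                  # question posts only
        and bool(post.get("AcceptedAnswerId"))       # has an officially accepted answer
        and (post.get("ViewCount") or 0) >= 100_000  # "frequently viewed" (illustrative cutoff)
    )

seed_candidates = posts.filter(is_seed_candidate)
```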

**Data Biases**:

The data essentially serves as an evaluation benchmark. We foresee data biases in the following aspects:

(1) Non-standard evaluation. Alongside the data, we provide a comprehensive benchmark of existing code LLMs. The benchmark scores are evaluated under a specific set of hyperparameters (e.g., temperature 0.2, top-p 0.9, best@10 at the question level; an illustrative sketch of these settings follows this list). Using the data under different evaluation conditions may lead to misleading comparisons and conclusions.

(2) Usage misinterpretation. The benchmark focuses on evaluating the response correctness of code LLMs for a set of real-world developers' questions. Our evaluation standard does not specifically take other aspects (naturalness, conciseness, fairness, politeness, etc.) into consideration. Hence, there is a risk of overinterpreting the evaluation results. When evaluating a code LLM, we recommend combining this benchmark score with other evaluations for a more comprehensive assessment.

(3) Potential data contamination. Though we have made efforts to reduce the impact of data contamination, future code LLMs may be trained or fine-tuned on this benchmark dataset to improve their InfiBench scores. This is difficult to prevent as a cost of being fully public. On the other hand, as responsible LLM developers, we hope future practitioners will report how they use the benchmark data if it goes beyond the original scope (evaluation use).
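For concreteness, a minimal sketch of the reported sampling configuration is given below. The model name and prompt are placeholders, and the call uses the generic `transformers` API rather than our evaluation tool.

```python
# Illustrative only: sampling settings matching the reported benchmark
# configuration (temperature 0.2, top-p 0.9, 10 samples per question for
# best@10). The model and prompt are placeholders, not part of InfiBench.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigcode/starcoder2-3b"  # placeholder code LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("How do I reverse a list in Python?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.2,
    top_p=0.9,
    num_return_sequences=10,   # best@10 is taken over these samples
    max_new_tokens=512,
)
responses = tokenizer.batch_decode(outputs, skip_special_tokens=True)
```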


**Personal and Sensitive Information**:

During the data construction process, our domain experts paraphrased the question prompts to remove personal and sensitive information (PII), and a cross-validation stage was introduced to further ensure PII removal.