---
configs:
  - config_name: python
    data_files:
      - split: test
        path: data/python/*.json
---

# 🏟️ Long Code Arena (CI builds repair)

This is the benchmark for the CI builds repair task, part of the
🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).

🛠️ Task. Given the logs of a failed GitHub Actions workflow and the corresponding repository snapshot, 
repair the repository contents in order to make the workflow pass.

All the data is collected from repositories published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.

To score your model on this dataset, you can use [**CI build repair benchmark**](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-builds-repair/ci-builds-repair-benchmark).

## How-to

1. List all the available configs
   via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names)
   and choose an appropriate one.

   Current configs: `python`

2. Load the data
   via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):

    ```python
    from datasets import load_dataset

    dataset = load_dataset("JetBrains-Research/lca-ci-builds-repair", split="test")
    ```

   Note that all the data is placed in the test split.  
   **NOTE**: If you encounter any errors when loading the dataset on Windows, update the `datasets` library (tested on `datasets==2.16.1`).
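
   Once loaded, the dataset behaves like a sequence of dicts, so you can filter it by any field. A minimal offline sketch (plain dicts with hypothetical values stand in for the loaded `dataset`):

   ```python
   # Offline sketch: plain dicts stand in for the rows of the loaded dataset;
   # the field values below are hypothetical and only illustrate the schema.
   rows = [
       {"id": 1, "difficulty": "1", "language": "Python"},
       {"id": 2, "difficulty": "3", "language": "Python"},
   ]

   # Keep only the easiest datapoints (difficulty 1 = formatting-only repairs).
   easy = [dp for dp in rows if dp["difficulty"] == "1"]
   print([dp["id"] for dp in easy])  # -> [1]
   ```

   The same predicate works on the real object via `dataset.filter(lambda dp: dp["difficulty"] == "1")`.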


## Dataset Structure

This dataset contains the logs of failed GitHub Actions workflow runs for a set of commits,
each paired with the follow-up commit that makes the workflow pass.

Note that, unlike other 🏟️ Long Code Arena datasets, this dataset does not contain repositories.

* Our [CI builds repair benchmark](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-builds-repair/ci-builds-repair-benchmark) clones the necessary repos to the user's local machine. 
  The user should run their model to repair the failing CI workflows, and the benchmark will push commits to GitHub,
  returning the results of the workflow runs for all the datapoints.
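
To inspect a failing build outside the benchmark, you can clone the repository and check out `sha_fail` yourself. The helper below is a hypothetical sketch (the function name and `workdir` parameter are not part of the benchmark) that only builds the git commands, so the flow is explicit:

```python
# Hypothetical helper: builds the git commands needed to reproduce the failing
# state of a datapoint locally. It does not execute anything itself; pass each
# command to subprocess.run() to actually clone and check out the commit.
def repro_commands(repo_owner: str, repo_name: str, sha_fail: str, workdir: str = "."):
    url = f"https://github.com/{repo_owner}/{repo_name}.git"
    dest = f"{workdir}/{repo_name}"
    return [
        ["git", "clone", url, dest],
        ["git", "-C", dest, "checkout", sha_fail],
    ]
```

For the example datapoint below, `repro_commands("scrapy", "scrapy", "0f71221cf9875ed8ef3400e1008408e79b6691e6")` yields the clone and checkout commands for the failing scrapy commit.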

### Datapoint Schema


Each example has the following fields:

| Field               | Description                                                                                                                  |
|---------------------|------------------------------------------------------------------------------------------------------------------------------|
| `contributor`       | Username of the contributor who committed the changes                                                                        |
| `difficulty`        | Difficulty of the problem (assessor-based; 1 means the repair requires only code formatting)                                 |
| `diff`              | Contents of the diff between the failed and the successful commits                                                           |
| `head_branch`       | Name of the original branch that the commit was pushed to                                                                    |
| `id`                | Unique ID of the datapoint                                                                                                   |
| `language`          | Main language of the repository                                                                                                    |
| `logs`              | List of dicts with keys `log` (logs of the failed job, particular step) and `step_name` (name of the failed step of the job) |
| `repo_name`         | Name of the original repository (second part of the `owner/name` on GitHub)                                                        |
| `repo_owner`        | Owner of the original repository (first part of the `owner/name` on GitHub)                                                  |
| `sha_fail`          | SHA of the failed commit                                                                                                     |
| `sha_success`       | SHA of the successful commit                                                                                                 |
| `workflow`          | Contents of the workflow file                                                                                                |
| `workflow_filename` | The name of the workflow file (without directories)                                                                          |
| `workflow_name`     | The name of the workflow                                                                                                     |
| `workflow_path`     | The full path to the workflow file                                                                                           |
| `changed_files`     | List of the files changed in the diff                                                                                        |
| `commit_link`       | URL of the commit corresponding to the failed job                                                                            |
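
The `logs` field is the main model input. A small sketch of flattening it into a single string (the datapoint dict here is truncated and hypothetical, following the schema above):

```python
# Truncated, hypothetical datapoint following the schema above.
datapoint = {
    "id": 18,
    "repo_owner": "scrapy",
    "repo_name": "scrapy",
    "logs": [
        {"step_name": "checks (3.12, pylint)/4_Run check.txt",
         "log": "##[group]Run pip install -U tox"},
    ],
    "changed_files": ["scrapy/crawler.py"],
}

def failed_step_logs(dp: dict) -> str:
    """Join the logs of every failed step, each prefixed by its step name."""
    return "\n".join(
        f"=== {entry['step_name']} ===\n{entry['log']}" for entry in dp["logs"]
    )

print(failed_step_logs(datapoint))
```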

### Datapoint Example


```python
{'contributor': 'Gallaecio',
 'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>',
 'difficulty': '2',
 'head_branch': 'component-getters',
 'id': 18,
 'language': 'Python',
 'logs': [{'log': '##[group]Run pip install -U tox\n<...>',
           'step_name': 'checks (3.12, pylint)/4_Run check.txt'}],
 'repo_name': 'scrapy',
 'repo_owner': 'scrapy',
 'sha_fail': '0f71221cf9875ed8ef3400e1008408e79b6691e6',
 'sha_success': 'c1ba9ccdf916b89d875628ba143dc5c9f6977430',
 'workflow': 'name: Checks\non: [push, pull_request]\n\n<...>',
 'workflow_filename': 'checks.yml',
 'workflow_name': 'Checks',
 'workflow_path': '.github/workflows/checks.yml',
 'changed_files': ["scrapy/crawler.py"],
 'commit_link': "https://github.com/scrapy/scrapy/tree/0f71221cf9875ed8ef3400e1008408e79b6691e6"}
```