Commit 412bd11 by galtimur (1 parent: 3b5f705)

Update README.md

Files changed (1): README.md (+8 −10)
README.md CHANGED
````diff
@@ -11,12 +11,12 @@ configs:
 This is the benchmark for the CI builds repair task as part of the
 🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
 
-> 🛠️ CI build repair: given the logs of a failed GitHub Actions workflow and the corresponding repository snapshot, fix the
+> 🛠️ CI build repair: given the logs of a failed GitHub Actions workflow and the corresponding repository snapshot, repair the
 > repository contents in order to make the workflow pass.
 
 All the data is collected from repositories published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.
 
-To score your model on this dataset, you can use the [**CI build repair benchmark**](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-fixing/ci-fixing-benchmark).
+To score your model on this dataset, you can use the [**CI build repair benchmark**](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-builds-repair/ci-builds-repair-benchmark).
 
 ## How-to
@@ -32,7 +32,7 @@ To score your model on this dataset, you can use [**CI build repair benchmark**]
 ```
 from datasets import load_dataset
 
-dataset = load_dataset("JetBrains-Research/lca-ci-fixing", split="test")
+dataset = load_dataset("JetBrains-Research/lca-ci-builds-repair", split="test")
 ```
 
 Note that all the data we have is considered to be in the test split.
@@ -46,11 +46,9 @@ followed by the commit that passes the workflow successfully.
 
 Note that, unlike other 🏟️ Long Code Arena datasets, this dataset does not contain repositories.
 
-* Our [CI builds repair benchmark](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-fixing/ci-fixing-benchmark) clones the necessary repos to the user's local machine. The user should run
-their model to
-fix the failing CI workflows, and the benchmark will push commits to GitHub, returning the results of the workflow
-runs
-for all the datapoints.
+* Our [CI builds repair benchmark](https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-builds-repair/ci-builds-repair-benchmark) clones the necessary repos to the user's local machine.
+The user should run their model to repair the failing CI workflows, and the benchmark will push commits to GitHub,
+returning the results of the workflow runs for all the datapoints.
 
 ### Datapoint Schema
@@ -60,7 +58,7 @@ Each example has the following fields:
 | Field | Description |
 |---------------------|------------------------------------------------------------------------------------------------------------------------------|
 | `contributor` | Username of the contributor that committed changes |
-| `difficulty` | Difficulty of the problem (assessor-based; 0 means that the fix requires only code formatting) |
+| `difficulty` | Difficulty of the problem (assessor-based; 1 means that the repair requires only code formatting) |
 | `diff` | Contents of the diff between the failed and the successful commits |
 | `head_branch` | Name of the original branch that the commit was pushed at |
 | `id` | Unique ID of the datapoint |
@@ -83,7 +81,7 @@ Each example has the following fields:
 ```
 {'contributor': 'Gallaecio',
  'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>',
-'difficulty': '1',
+'difficulty': '2',
  'head_branch': 'component-getters',
  'id': 18,
  'language': 'Python',
````
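The schema change in this commit (difficulty now starts at 1, with 1 meaning a formatting-only repair) affects any code that filters datapoints by difficulty. Below is a minimal sketch of such a filter; the mock records and the helper `by_max_difficulty` are hypothetical illustrations (only the field names come from the schema table), and in real use the records would come from `load_dataset("JetBrains-Research/lca-ci-builds-repair", split="test")`:

```python
# Hypothetical datapoints shaped like the schema in the README diff above.
# Field names (id, contributor, difficulty, head_branch, language, diff)
# match the schema table; the values are made up for illustration.
datapoints = [
    {"id": 18, "contributor": "Gallaecio", "difficulty": "2",
     "head_branch": "component-getters", "language": "Python",
     "diff": "diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>"},
    {"id": 19, "contributor": "example-user", "difficulty": "1",
     "head_branch": "fix-lint", "language": "Python",
     "diff": "diff --git a/setup.cfg b/setup.cfg\n<...>"},
]

def by_max_difficulty(points, max_difficulty):
    """Keep only datapoints at or below the given difficulty level.

    Per the updated schema, difficulty is stored as a string ('1', '2', ...),
    so convert before comparing; level 1 is a formatting-only repair.
    """
    return [p for p in points if int(p["difficulty"]) <= max_difficulty]

easy = by_max_difficulty(datapoints, 1)
print([p["id"] for p in easy])
```

Note that code written against the old schema (where 0 meant formatting-only) would need its thresholds shifted by one after this commit.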