---
configs:
  - config_name: python
    data_files:
      - split: test
        path: data/python/*.json
---

๐ŸŸ๏ธ Long Code Arena (CI Fixing)

๐Ÿ› ๏ธ CI Fixing: given logs of a failed GitHub Actions workflow and the corresponding repository shapshot, fix the repository contents in order to make the workflow pass.

This is the dataset for the CI Fixing task in the ๐ŸŸ๏ธ Long Code Arena benchmark.

To score your model on this dataset, you can use the CI Fixing benchmark (https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-fixing/ci-fixing-benchmark).

How-to

  1. List all the available configs via datasets.get_dataset_config_names and choose an appropriate one.

    Current configs: python
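
    For example, a minimal sketch of listing the configs with the datasets library:

    from datasets import get_dataset_config_names

    # List the configs available for this dataset; currently this returns ['python'].
    configs = get_dataset_config_names("JetBrains-Research/lca-ci-fixing")
    print(configs)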

  2. Load the data via load_dataset:

    from datasets import load_dataset
    
    dataset = load_dataset("JetBrains-Research/lca-ci-fixing", split="test")
    

    Note that all the data is provided in the test split.
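
    Once loaded, a quick sanity check of the split could look like this (a minimal sketch; the fields are described in the Dataset Structure section below):

    # Number of datapoints and the available fields.
    print(len(dataset))
    print(dataset.column_names)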

Dataset Structure

This dataset contains logs of failed GitHub Actions workflows for a set of commits, each paired with the commit that makes the workflow pass.

Note that, unlike many other ๐ŸŸ Long Code Arena datasets, this dataset doesn't contain repositories.

  • Our CI Fixing benchmark (๐Ÿšง todo) clones the necessary repos to the user's local machine. The user should run their model to fix the failing CI workflows, and the benchmark will push commits to GitHub, returning the results of the workflow runs for all the datapoints.
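
If you want to inspect a repository snapshot manually, without the benchmark, the sketch below shows one possible way to recover the failing state of a datapoint with plain git commands. checkout_failed_state is an illustrative helper name, not something provided by the dataset or the benchmark, and the snippet assumes the dataset was loaded as shown in the How-to section.

    import subprocess

    def checkout_failed_state(datapoint, workdir="repo"):
        # Clone the original repository and check out the commit whose CI run failed.
        url = f"https://github.com/{datapoint['repo_owner']}/{datapoint['repo_name']}.git"
        subprocess.run(["git", "clone", url, workdir], check=True)
        subprocess.run(["git", "checkout", datapoint["sha_fail"]], cwd=workdir, check=True)

    checkout_failed_state(dataset[0])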

Datapoint Schema

Each example has the following fields:

  • contributor: username of the contributor that committed the changes
  • difficulty: difficulty of the problem (assessor-based)
  • diff: contents of the diff between the failed and the successful commits
  • head_branch: name of the original branch that the commit was pushed to
  • id: unique ID of the datapoint
  • language: main language of the repository
  • logs: list of dicts with keys log (log of the particular failed step of the job) and step_name (name of the failed step)
  • repo_name: name of the original repository (the second part of owner/name on GitHub)
  • repo_owner: owner of the original repository (the first part of owner/name on GitHub)
  • sha_fail: SHA of the failed commit
  • sha_success: SHA of the successful commit
  • workflow: contents of the workflow file
  • workflow_filename: name of the workflow file (without directories)
  • workflow_name: name of the workflow
  • workflow_path: full path to the workflow file
  • changed_files: list of files changed in the diff
  • commit_link: URL of the commit corresponding to the failed job

Datapoint Example

{'contributor': 'Gallaecio',
 'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py\n<...>',
 'difficulty': '1',
 'head_branch': 'component-getters',
 'id': 18,
 'language': 'Python',
 'logs': [{'log': '##[group]Run pip install -U tox\n<...>',
           'step_name': 'checks (3.12, pylint)/4_Run check.txt'}],
 'repo_name': 'scrapy',
 'repo_owner': 'scrapy',
 'sha_fail': '0f71221cf9875ed8ef3400e1008408e79b6691e6',
 'sha_success': 'c1ba9ccdf916b89d875628ba143dc5c9f6977430',
 'workflow': 'name: Checks\non: [push, pull_request]\n\n<...>',
 'workflow_filename': 'checks.yml',
 'workflow_name': 'Checks',
 'workflow_path': '.github/workflows/checks.yml',
 'changed_files': ['scrapy/crawler.py'],
 'commit_link': 'https://github.com/scrapy/scrapy/tree/0f71221cf9875ed8ef3400e1008408e79b6691e6'}
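
As a small usage sketch (assuming the dataset was loaded as in the How-to section above), you can inspect the failing logs of a datapoint and save its reference fix; reference_fix.diff is just an arbitrary output file name used here for illustration:

    datapoint = dataset[0]

    # Print the name of each failed step and the beginning of its log.
    for entry in datapoint["logs"]:
        print("Failed step:", entry["step_name"])
        print(entry["log"][:500])

    # The ground-truth fix is the diff between sha_fail and sha_success; saving it
    # lets you apply it to a checked-out failing state with `git apply`.
    with open("reference_fix.diff", "w") as f:
        f.write(datapoint["diff"])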