
Repository Level Code Completion Dataset for Evaluation

This is a dataset of repository snapshots taken just before a commit that adds a Python file. The task is to complete the added file, given the content of the repository composed into a context in different ways. Each datapoint provides repo_id and repo_name, a project_context string (the repository files concatenated and delimited with the "METASEP" token), a file_context list of {"content": ...} dicts, a gt sequence with the corresponding ground-truth completions, and the metainfo_separator string.

How to load the data

  1. via load_dataset:

    from datasets import load_dataset
    
    data_files = "data/imports_first_composer/py_context/test-*"  # pick any path pattern from the table below
    dataset = load_dataset("jenyag/repo-code-completion", data_files=data_files, split="train")
    

Options for data_files:

| composer | all_context | non_py_context | py_context |
|---|---|---|---|
| function class mask half composer | data/function_class_mask_half_composer/all_context/test-* | data/function_class_mask_half_composer/non_py_context/test-* | data/function_class_mask_half_composer/py_context/test-* |
| imports first composer | data/imports_first_composer/all_context/test-* | data/imports_first_composer/non_py_context/test-* | data/imports_first_composer/py_context/test-* |
| alphabetical composer | data/alphabetical_composer/all_context/test-* | data/alphabetical_composer/non_py_context/test-* | data/alphabetical_composer/py_context/test-* |
| naive composer | data/naive_composer/all_context/test-* | data/naive_composer/non_py_context/test-* | data/naive_composer/py_context/test-* |
| path distance composer | data/path_distance_composer/all_context/test-* | data/path_distance_composer/non_py_context/test-* | data/path_distance_composer/py_context/test-* |
| file length composer | data/file_length_composer/all_context/test-* | data/file_length_composer/non_py_context/test-* | data/file_length_composer/py_context/test-* |
| half memory composer | data/half_memory_composer/all_context/test-* | data/half_memory_composer/non_py_context/test-* | data/half_memory_composer/py_context/test-* |
| function class mask one composer | data/function_class_mask_one_composer/all_context/test-* | data/function_class_mask_one_composer/non_py_context/test-* | data/function_class_mask_one_composer/py_context/test-* |
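
As a quick sanity check, one way to load a specific configuration and inspect a datapoint's fields; the composer/context combination below is only an example, and the field names are those listed above:

    from datasets import load_dataset

    # Example choice; any cell from the table above works the same way.
    data_files = "data/path_distance_composer/py_context/test-*"
    dataset = load_dataset("jenyag/repo-code-completion", data_files=data_files, split="train")

    print(dataset.column_names)          # repo_id, repo_name, project_context, file_context, gt, metainfo_separator
    sample = dataset[0]
    print(sample["repo_name"])           # repository the snapshot was taken from
    print(len(sample["file_context"]))   # number of completion contexts in this snapshot
    print(len(sample["gt"]))             # ground-truth entries, paired with file_context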

How to get the full context for a specific line

for datapoint in dataset:
    project_context = datapoint['project_context']  # the project context may be quite long
    for file_context_dict, ground_truth in zip(datapoint['file_context'], datapoint['gt']):
        file_context = file_context_dict['content']
        full_context = project_context + file_context  # prompt for predicting ground_truth
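
From here, any completion model can be scored against gt. A minimal exact-match sketch, assuming a user-supplied complete_line(context) function (a hypothetical stand-in for the model or API call) and treating each gt entry as the line to predict:

def exact_match_rate(dataset, complete_line):
    """Fraction of ground-truth lines reproduced exactly (one possible metric)."""
    correct, total = 0, 0
    for datapoint in dataset:
        project_context = datapoint["project_context"]
        for file_context_dict, ground_truth in zip(datapoint["file_context"], datapoint["gt"]):
            full_context = project_context + file_context_dict["content"]
            prediction = complete_line(full_context)  # e.g. a call to a code completion model
            correct += int(prediction.strip() == ground_truth.strip())
            total += 1
    return correct / total if total else 0.0

For example, exact_match_rate(dataset, my_model.complete) would report how often the predicted line matches the reference exactly; my_model here is hypothetical.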