---
dataset_info:
  features:
  - name: lang
    dtype: string
  - name: seed
    dtype: string
  splits:
  - name: train
    num_bytes: 3114466
    num_examples: 10000
  download_size: 1629429
  dataset_size: 3114466
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
This dataset contains 10,000 random code snippets of 5-15 lines each, extracted from [`bigcode/starcoderdata`](https://huggingface.co/datasets/bigcode/starcoderdata).

Specifically, I consider 10 languages: Haskell, Python, cpp, java, typescript, shell, csharp, rust, php, and swift. For each language, I collect 1,000 documents and extract 5-15 random consecutive lines from each document to build the dataset; a sketch of this process is shown below.
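
A minimal sketch of the extraction, assuming the streaming `datasets` API, per-language `data_dir` subsets in `bigcode/starcoderdata`, and a `content` field holding the document text; the exact directory names and my original script may differ:

```python
import random

from datasets import load_dataset

LANGS = ["haskell", "python", "cpp", "java", "typescript",
         "shell", "csharp", "rust", "php", "swift"]
DOCS_PER_LANG = 1000

records = []
for lang in LANGS:
    # Stream the per-language subset so the full corpus is never downloaded.
    docs = load_dataset("bigcode/starcoderdata", data_dir=lang,
                        split="train", streaming=True)
    for i, doc in enumerate(docs):
        if i >= DOCS_PER_LANG:
            break
        lines = doc["content"].splitlines()
        n = random.randint(5, 15)  # snippet length: 5-15 lines
        if len(lines) <= n:
            seed = doc["content"]  # short document: keep it whole
        else:
            start = random.randrange(len(lines) - n)
            seed = "\n".join(lines[start:start + n])
        records.append({"lang": lang, "seed": seed})
```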

See Magicoder and its [seed collection](https://github.com/ise-uiuc/magicoder/blob/main/experiments/collect_seed_documents.py#L35) process. In my use case, I needed inspiration documents for generating synthetic datasets.
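
To use the snippets as seeds, the dataset can be loaded with the standard `datasets` API; `<user>/<this-dataset>` below is a placeholder for this repository's id:

```python
from datasets import load_dataset

ds = load_dataset("<user>/<this-dataset>", split="train")
print(ds[0]["lang"])  # e.g. "python"
print(ds[0]["seed"])  # the 5-15 line snippet
```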