---
dataset_info:
  features:
  - name: hexsha
    dtype: string
  - name: size
    dtype: int64
  - name: ext
    dtype: string
  - name: lang
    dtype: string
  - name: max_stars_repo_path
    dtype: string
  - name: max_stars_repo_name
    dtype: string
  - name: max_stars_repo_head_hexsha
    dtype: string
  - name: max_stars_repo_licenses
    sequence: string
  - name: max_stars_count
    dtype: int64
  - name: max_stars_repo_stars_event_min_datetime
    dtype: string
  - name: max_stars_repo_stars_event_max_datetime
    dtype: string
  - name: max_issues_repo_path
    dtype: string
  - name: max_issues_repo_name
    dtype: string
  - name: max_issues_repo_head_hexsha
    dtype: string
  - name: max_issues_repo_licenses
    sequence: string
  - name: max_issues_count
    dtype: int64
  - name: max_issues_repo_issues_event_min_datetime
    dtype: string
  - name: max_issues_repo_issues_event_max_datetime
    dtype: string
  - name: max_forks_repo_path
    dtype: string
  - name: max_forks_repo_name
    dtype: string
  - name: max_forks_repo_head_hexsha
    dtype: string
  - name: max_forks_repo_licenses
    sequence: string
  - name: max_forks_count
    dtype: int64
  - name: max_forks_repo_forks_event_min_datetime
    dtype: string
  - name: max_forks_repo_forks_event_max_datetime
    dtype: string
  - name: content
    dtype: string
  - name: avg_line_length
    dtype: float64
  - name: max_line_length
    dtype: int64
  - name: alphanum_fraction
    dtype: float64
  splits:
  - name: train
    num_bytes: 78577965159
    num_examples: 11658586
  download_size: 28807934580
  dataset_size: 78577965159
license: other
language:
- code
---
# Dataset Card for "stack-smol-xxl"

This is a subset of the [deduplicated Stack dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup), containing up to 250,000 files for each of the languages listed below.

It was generated with the following script:
```python
from datasets import load_dataset, Dataset
languages = ["css", "prolog", "c", "fortran", "solidity", "kotlin", "literate-agda", "julia", "java-server-pages",
             "isabelle", "idris", "lean", "powershell", "go", "erlang", "f-sharp", "ada", "pascal", "perl", "r", "protocol-buffer",
             "cmake", "sas", "ruby", "rust", "rmarkdown", "c-sharp", "smalltalk", "haskell", "maple", "mathematica", "ocaml",
             "makefile", "lua", "literate-coffeescript", "literate-haskell", "restructuredtext", "racket", "standard-ml",
             "systemverilog", "tex", "awk", "assembly", "alloy", "agda", "emacs-lisp", "dart", "cuda", "bluespec", "augeas", "batchfile",
             "tcsh", "stan", "scala", "tcl", "stata", "applescript", "shell", "clojure", "scheme", "antlr", "sparql", "sql",
             "glsl", "elm", "dockerfile", "cpp", "coffeescript", "common-lisp", "elixir", "groovy", "html", "java", "javascript",
             "markdown", "php", "python", "typescript", "verilog", "visual-basic", "vhdl", "thrift", "matlab", "yacc", "zig", "xslt", "json", "yaml"]

def dset_gen():
    for language in languages:
        # Stream each language's split and keep at most 250,000 files from it.
        dset = load_dataset("bigcode/the-stack-dedup", data_dir=f"data/{language}", streaming=True, split="train")
        sample = dset.take(250_000)
        for row in sample:
            yield row

dset = Dataset.from_generator(dset_gen)
```
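The script above only materializes the dataset with `Dataset.from_generator`; the card does not show how it was then saved or published. A minimal, hypothetical follow-up step (the local path and Hub repository id below are assumptions, not taken from this card):
```python
# Hypothetical persistence step; the path and repo id are assumptions.
dset.save_to_disk("stack-smol-xxl")                 # keep a local Arrow copy
# dset.push_to_hub("your-username/stack-smol-xxl")  # or upload to the Hugging Face Hub
```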
## Dataset Structure

```
num_examples:  11,658,586
download_size: 28,807,934,580 bytes (~28.8 GB)
dataset_size:  78,577,965,159 bytes (~78.6 GB)
```
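
Given the roughly 28.8 GB download, streaming is often the more practical way to read the data. A minimal sketch, assuming a repository id inferred from the card title (substitute the actual one):
```python
from datasets import load_dataset

# Assumption: repo id inferred from the card title; replace with the real one.
ds = load_dataset("your-org/stack-smol-xxl", split="train", streaming=True)

row = next(iter(ds))
print(row["lang"], row["size"], row["max_stars_repo_name"])
print(row["content"][:200])  # first 200 characters of the file
```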

### Data Instances
Each data instance corresponds to one file. The content of the file is in the `content` feature, and the remaining features (`max_stars_repo_name`, `max_stars_repo_licenses`, etc.) provide metadata about it. Note that a given file can appear in several different repositories that satisfy the safe-license criterion. If that is the case, only the first of these repositories (in alphabetical order) is listed, for simplicity.

### Data Fields
- `content` (string): the content of the file.
- `size` (integer): size of the uncompressed file in bytes.
- `lang` (string): the programming language.
- `ext` (string): the file extension.
- `avg_line_length` (float): the average line length of the file.
- `max_line_length` (integer): the maximum line length of the file.
- `alphanum_fraction` (float): the fraction of characters in the file that are alphabetic or numeric.
- `hexsha` (string): unique git hash of the file.
- `max_{stars|forks|issues}_repo_path` (string): path to this file within the repository that has the maximum number of `{stars|forks|issues}` among the repositories containing it.
- `max_{stars|forks|issues}_repo_name` (string): name of that repository.
- `max_{stars|forks|issues}_repo_head_hexsha` (string): hexsha of the repository head.
- `max_{stars|forks|issues}_repo_licenses` (sequence of string): licenses detected in the repository.
- `max_{stars|forks|issues}_count` (integer): number of `{stars|forks|issues}` of the repository.
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_event_min_datetime` (string): timestamp of the first `{stars|forks|issues}` event.
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_event_max_datetime` (string): timestamp of the last `{stars|forks|issues}` event.
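
As an illustration of how these fields combine in practice, the sketch below (streaming, with the same assumed repository id as above) keeps only Python files whose longest line is at most 120 characters:
```python
from datasets import load_dataset

# Assumption: repo id inferred from the card title; replace with the real one.
ds = load_dataset("your-org/stack-smol-xxl", split="train", streaming=True)

# `lang` values in The Stack are capitalized language names, e.g. "Python".
short_python = ds.filter(
    lambda row: row["lang"] == "Python" and row["max_line_length"] <= 120
)

for row in short_python.take(3):
    print(row["max_stars_repo_path"], round(row["avg_line_length"], 1))
```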