Update README.md

README.md:

---
dataset_info:
  ...
  num_examples: 11658586
  download_size: 28807934580
  dataset_size: 78577965159
license: other
language:
- code
---
# Dataset Card for "stack-smol-xxl"

This is a subset of the [deduplicated Stack dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup).

It was generated like so:

```python
from datasets import load_dataset, Dataset

languages = ["css", "prolog", "c", "fortran", "solidity", "kotlin", "literate-agda", "julia", "java-server-pages",
             "isabelle", "idris", "lean", "powershell", "go", "erlang", "f-sharp", "ada", "pascal", "perl", "r", "protocol-buffer",
             "cmake", "sas", "ruby", "rust", "rmarkdown", "c-sharp", "smalltalk", "haskell", "maple", "mathematica", "ocaml",
             "makefile", "lua", "literate-coffeescript", "literate-haskell", "restructuredtext", "racket", "standard-ml",
             "systemverilog", "tex", "awk", "assembly", "alloy", "agda", "emacs-lisp", "dart", "cuda", "bluespec", "augeas", "batchfile",
             "tcsh", "stan", "scala", "tcl", "stata", "applescript", "shell", "clojure", "scheme", "antlr", "sparql", "sql",
             "glsl", "elm", "dockerfile", "cpp", "coffeescript", "common-lisp", "elixir", "groovy", "html", "java", "javascript",
             "markdown", "php", "python", "typescript", "verilog", "visual-basic", "vhdl", "thrift", "matlab", "yacc", "zig", "xslt", "json", "yaml"]

def dset_gen():
    # Stream each language subset and take at most 250k files from it
    for language in languages:
        dset = load_dataset("bigcode/the-stack-dedup", data_dir=f"data/{language}", streaming=True, split="train")
        sample = dset.take(250_000)
        for row in sample:
            yield row

dset = Dataset.from_generator(dset_gen)
```
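The core of the script above is the sampling pattern: stream each per-language split and take at most a fixed number of rows from it. That pattern can be illustrated without network access (a sketch with toy iterators standing in for the streaming datasets, and the cap shrunk from 250_000 for illustration):

```python
from itertools import islice

def sample_capped(streams, n):
    # Yield at most n rows from each stream, in order, like dset_gen above
    for stream in streams:
        yield from islice(stream, n)

# Toy stand-ins for two per-language streaming datasets
streams = [iter(range(5)), iter(range(100, 103))]
print(list(sample_capped(streams, 3)))  # -> [0, 1, 2, 100, 101, 102]
```

Note that languages with fewer than the cap simply contribute everything they have, which is why the language subsets in the result are not all the same size.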
## Dataset Structure

```
num_examples: 11658586
download_size: 28807934580
dataset_size: 78577965159
```
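As a quick sanity check on these numbers (a sketch; it assumes both sizes are in bytes, with `download_size` compressed and `dataset_size` on disk):

```python
num_examples = 11_658_586
download_size = 28_807_934_580   # assumed compressed bytes, per the stats above
dataset_size = 78_577_965_159    # assumed on-disk bytes, per the stats above

# Average bytes per file, downloaded vs. on disk
avg_download = download_size / num_examples
avg_on_disk = dataset_size / num_examples

print(f"~{avg_download:.0f} B downloaded, ~{avg_on_disk:.0f} B on disk per file")
```

This works out to roughly 2.5 KB downloaded and 6.7 KB on disk per file, i.e. about a 2.7x compression ratio, which is plausible for source code.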
### Data Instances

Each data instance corresponds to one file. The content of the file is in the `content` feature, and other features (`repository_name`, `licenses`, etc.) provide metadata. Note that a given file can appear in several different repositories that satisfy our safe-license criterion. If that is the case, only the first of these repositories (in alphabetical order) is shown for simplicity.
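The "first in alphabetical order" convention can be sketched like this (the repository names are hypothetical, and `pick_shown_repo` is an illustrative helper, not part of the dataset tooling):

```python
def pick_shown_repo(repo_names):
    # Of all repositories containing an identical file, show the
    # alphabetically first one
    return min(repo_names)

# Hypothetical repositories that all contain the same file
repos = ["zeta/utils", "alpha/tools", "mid/project"]
print(pick_shown_repo(repos))  # -> "alpha/tools"
```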
107 |
|
|
|
122 |
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_min_datetime` (string): first timestamp of a `{stars|forks|issues}` event
|
123 |
- `max_{stars|forks|issues}_repo_{stars|forks|issues}_max_datetime` (string): last timestamp of a `{stars|forks|issues}` event
|
124 |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
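Since these datetime features are strings, they need to be parsed before doing any date arithmetic. A minimal sketch, assuming the values are ISO-8601 formatted (the exact format is not specified on this card, and the value below is hypothetical):

```python
from datetime import datetime

# Hypothetical value for a *_min_datetime feature; assumes ISO-8601
# formatting, which this card does not guarantee
first_event = "2020-04-07T16:33:21"
parsed = datetime.fromisoformat(first_event)
print(parsed.year, parsed.month)  # -> 2020 4
```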