Update README.md

README.md CHANGED

@@ -1,13 +1,5 @@
 ---
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-  - split: test
-    path: data/test-*
-  - split: valid
-    path: data/valid-*
+license: openrail
 dataset_info:
   features:
   - name: hexsha

@@ -24,17 +16,43 @@ dataset_info:
     dtype: float64
   splits:
   - name: train
-    num_bytes:
-    num_examples:
+    num_bytes: 3582248477.9086223
+    num_examples: 806789
   - name: test
-    num_bytes:
-    num_examples:
+    num_bytes: 394048264.9973618
+    num_examples: 88747
   - name: valid
-    num_bytes:
-    num_examples:
-  download_size:
-  dataset_size:
+    num_bytes: 3982797.09401595
+    num_examples: 897
+  download_size: 1323156008
+  dataset_size: 3980279540
+task_categories:
+- text-generation
+language:
+- code
+tags:
+- code
+pretty_name: TheStack-Swift
+size_categories:
+- 1M<n<10M
 ---
-# Dataset Card for "the-stack-swift-clean"

## Dataset 1: TheStack - Swift - Cleaned

**Description**: This dataset is drawn from TheStack Corpus, an open-source code dataset with over 3 TB of GitHub data covering 48 programming languages. We selected a small portion of this dataset to optimize smaller language models for Swift, a popular statically typed language.
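
The splits can be pulled with the Hugging Face `datasets` library; the repository id below is a placeholder, since this card does not state the full namespace it is hosted under:

```python
from datasets import load_dataset

# Placeholder repo id; substitute the namespace actually hosting "the-stack-swift-clean".
ds = load_dataset("<namespace>/the-stack-swift-clean")

print(ds)                        # DatasetDict with "train", "test", and "valid" splits
print(ds["train"][0]["hexsha"])  # each record carries the source file's hexsha among its features
```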

**Target Language**: Swift

**Dataset Size**:
- Training: 900,000 files
- Validation: 50,000 files
- Test: 50,000 files

**Preprocessing**:
1. Selected Swift as the target language due to its popularity on GitHub.
2. Filtered out files with average line length > 100 characters, maximum line length > 1000 characters, and alphabet ratio < 25% (a sketch of these filters follows the list).
3. Split the files into 90% training, 5% validation, and 5% test sets.
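
The cleaning script itself is not part of this repository, so the following is only a sketch of step 2's heuristics, treating each quoted threshold as an independent reason to drop a file:

```python
def passes_filters(source: str) -> bool:
    """Return True if a Swift source file survives the cleaning heuristics described above."""
    lines = source.splitlines() or [""]
    avg_line_length = sum(len(line) for line in lines) / len(lines)
    max_line_length = max(len(line) for line in lines)
    alpha_ratio = sum(ch.isalpha() for ch in source) / max(len(source), 1)
    return (
        avg_line_length <= 100
        and max_line_length <= 1000
        and alpha_ratio >= 0.25
    )
```

Step 3's 90/5/5 split can then be derived, for example, by applying `Dataset.train_test_split` twice to carve out the validation and test portions.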

**Tokenizer**: Byte Pair Encoding (BPE) tokenizer with tab and whitespace tokens; the GPT-2 vocabulary is extended with these special tokens.
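
One plausible way to reproduce that setup with the `transformers` library; the exact whitespace tokens are not listed on this card, so the ones below are an assumption:

```python
from transformers import AutoTokenizer

# Start from the GPT-2 BPE vocabulary and add indentation tokens.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
whitespace_tokens = ["\t", "    ", "        "]  # assumed: tab, 4 spaces, 8 spaces
tokenizer.add_special_tokens({"additional_special_tokens": whitespace_tokens})

# Any model trained with this tokenizer needs its embedding matrix resized:
# model.resize_token_embeddings(len(tokenizer))
```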

**Training Sequences**: Sequences are constructed by joining training-data text until a context length of 2048 tokens is reached (1024 tokens for full fine-tuning).
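
A simplified sketch of that packing step, assuming tokenized files are joined with an end-of-text token before the stream is cut into fixed-length chunks (the separator is an assumption; the card does not name one):

```python
from typing import Iterable, List

def pack_sequences(tokenized_files: Iterable[List[int]],
                   eos_id: int,
                   context_length: int = 2048) -> List[List[int]]:
    """Concatenate tokenized files and slice the stream into fixed-length training sequences."""
    stream: List[int] = []
    for ids in tokenized_files:
        stream.extend(ids)
        stream.append(eos_id)  # assumed separator between files
    return [
        stream[i:i + context_length]
        for i in range(0, len(stream) - context_length + 1, context_length)
    ]
```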