# Dataset description
This release contains the complete data sequence used in CrystalCoder training, covering all three pre-training stages. The data combines two prior datasets: the [SlimPajama dataset](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata). We divide them across the three stages with different mixture weights.
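Conceptually, each stage draws training sequences from the two sources in proportion to stage-specific mixture weights. A minimal sketch of such weighted sampling is below; the `STAGE_WEIGHTS` values are placeholders for illustration, not the actual weights used in training.

```python
import random

# Placeholder per-stage mixture weights; the actual weights used for
# CrystalCoder training are not specified in this README.
STAGE_WEIGHTS = {
    "stage1": {"slimpajama": 1.0, "starcoder": 0.0},
    "stage2": {"slimpajama": 0.5, "starcoder": 0.5},
}

def sample_source(stage: str, rng: random.Random) -> str:
    """Pick the data source for the next training sequence in
    proportion to the stage's mixture weights."""
    weights = STAGE_WEIGHTS[stage]
    sources = list(weights)
    return rng.choices(sources, weights=[weights[s] for s in sources], k=1)[0]
```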
## Stage 1
During this stage, we utilize half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
## Stage 2
In the second stage, the remaining half of the [SlimPajama data](https://huggingface.co/datasets/cerebras/SlimPajama-627B) is employed, along with two epochs of [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata). For the StarCoder data, we apply [FIM augmentation](https://arxiv.org/abs/2207.14255) with an FIM rate of 0.9.
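FIM (fill-in-the-middle) augmentation rearranges a fraction of documents so the model learns to infill code given the surrounding context. A minimal character-level sketch is shown below; the sentinel strings are illustrative (real training uses special tokenizer tokens), and the FIM paper describes further variants such as SPM ordering.

```python
import random

# Illustrative sentinel strings; actual training uses dedicated tokenizer tokens.
PRE, SUF, MID = "<fim_prefix>", "<fim_suffix>", "<fim_middle>"

def fim_transform(doc: str, fim_rate: float, rng: random.Random) -> str:
    """With probability `fim_rate`, split `doc` at two random points and
    emit it in prefix-suffix-middle (PSM) order; otherwise return it
    unchanged for ordinary left-to-right training."""
    if rng.random() >= fim_rate:
        return doc
    a, b = sorted(rng.randrange(len(doc) + 1) for _ in range(2))
    prefix, middle, suffix = doc[:a], doc[a:b], doc[b:]
    # The model conditions on prefix and suffix, then predicts the middle.
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"
```

At an FIM rate of 0.9, roughly 90% of the StarCoder documents in this stage are presented in the rearranged form.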
## Stage 3
The third stage reuses the Python, HTML, CSS, and JavaScript subsets of the [StarCoder data](https://huggingface.co/datasets/bigcode/starcoderdata) for three further epochs, applying FIM with a rate of 0.3. Additionally, a small portion of the SlimPajama dataset, excluding the GitHub part, is also reused.
# Primary usage
This dataset was used to train our CrystalCoder and is released to support reproduction of our results.
# License