Update README.md
README.md
## Training Procedure

This model is a demo intended to showcase the potential of resource-efficient training of large language models using Spectrum CPT. Here is a brief overview of the procedure:
**Continuous Pre-training (CPT) on German Data**:
Utilizing Spectrum by Eric Hartford, Lucas Atkins, Fernando Fernandes Neto, and David Golchinfar, training targeted 25% of the model's layers. This approach allowed significant resource savings:

- Spectrum with 25% layer targeting consumed 309.78 GB at a batch size of 2048.
- Full fine-tuning targeting 100% of layers used 633.55 GB at the same batch size, so Spectrum cut memory consumption by roughly 51%.

Using Spectrum, we enhanced the German language capabilities of the Qwen2-1.5B model via CPT while achieving substantial resource savings.
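
To make the layer-targeting idea concrete, here is a minimal Transformers sketch of freezing everything except a selected subset of layers. The `unfrozen_patterns` list is purely illustrative; in the actual procedure the selection comes from Spectrum's per-layer signal-to-noise analysis, not from hand-picked names.

```python
# Sketch of Spectrum-style selective training: freeze all parameters, then
# re-enable gradients only for the targeted ~25% of layers.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B")

# Illustrative placeholder -- the real list is produced by Spectrum's
# per-layer SNR ranking.
unfrozen_patterns = [
    "model.layers.2.mlp",
    "model.layers.7.self_attn",
    "model.layers.19.mlp",
]

for name, param in model.named_parameters():
    param.requires_grad = any(pattern in name for pattern in unfrozen_patterns)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable share: {trainable / total:.1%}")
```

Because only the unfrozen layers accumulate gradients and optimizer state, memory use drops sharply compared with full fine-tuning, which is the main source of the 309.78 GB vs. 633.55 GB difference reported above.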

Despite the large volume of German CPT data, the model competes well against the Qwen2-1.5B-Instruct model and performs significantly better in German.

**Post-CPT Training**:
The model underwent 3 epochs of Supervised Fine-Tuning (SFT) with 700K samples.
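
A minimal sketch of such an SFT run with TRL's `SFTTrainer` is shown below; the dataset file, checkpoint names, and all hyperparameters except the 3 epochs are placeholders, not the actual training recipe.

```python
# Sketch: 3 epochs of supervised fine-tuning on instruction data.
# Assumes a JSONL file with a "text" column; file name is hypothetical.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("json", data_files="sft_samples.jsonl", split="train")  # ~700K samples

training_args = SFTConfig(
    output_dir="sauerkrautlm-1.5b-sft",   # hypothetical output path
    num_train_epochs=3,                   # the 3 SFT epochs described above
    per_device_train_batch_size=4,
    learning_rate=2e-5,
)

trainer = SFTTrainer(
    model="Qwen/Qwen2-1.5B",  # in the actual pipeline this would be the CPT checkpoint
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```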

**Further Steps**:
The model was aligned with Direct Preference Optimization (DPO) using 70K samples.
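
For illustration, a DPO step with TRL could look roughly like the sketch below. The preference file, checkpoint paths, and hyperparameters are assumptions (only the ~70K sample count comes from the card), and the exact argument names vary between TRL releases.

```python
# Sketch: DPO alignment on preference pairs (columns: prompt, chosen, rejected).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "sauerkrautlm-1.5b-sft"  # hypothetical path to the SFT checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

preference_data = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")  # ~70K pairs

training_args = DPOConfig(
    output_dir="sauerkrautlm-1.5b-dpo",  # hypothetical output path
    beta=0.1,                            # common default for the DPO loss temperature
    per_device_train_batch_size=2,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                # TRL creates a frozen reference copy when None
    args=training_args,
    train_dataset=preference_data,
    processing_class=tokenizer,    # named `tokenizer=` in older TRL releases
)
trainer.train()
```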

## Objective and Results