readme: add more training details
README.md

---
license: mit
---

# German GPT-2 model

In this repository we release (yet another) GPT-2 model that was trained on ~90 GB of text from the ["German colossal, clean Common Crawl corpus"](https://german-nlp-group.github.io/projects/gc4-corpus.html).

The model is meant to be an entry point for fine-tuning on other texts, and it is definitely not as good or "dangerous" as the English GPT-3 model. We do not plan extensive PR or staged releases for this model 😉
# Changelog

06.09.2021: Initial release. Detailed information about training parameters coming soon.

# Text Generation
```python
text = pipe("Der Sinn des Lebens ist es", max_length=200)[0]["generated_text"]

print(text)
```
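For convenience, here is a self-contained sketch of the same usage. The model identifier is a placeholder (the `dbmdz/german-gpt2` backbone mentioned below); substitute the id of this repository's model when running it:

```python
from transformers import pipeline

# Placeholder model id -- replace with the id of this repository's model.
model_name = "dbmdz/german-gpt2"

pipe = pipeline("text-generation", model=model_name, tokenizer=model_name)

text = pipe("Der Sinn des Lebens ist es", max_length=200)[0]["generated_text"]

print(text)
```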
# Training Data

The following archives are used for training the first version of this GPT-2 model:

* `de_head_0000_2015-48.tar.gz`
* `de_head_0000_2016-18.tar.gz`
* `de_head_0000_2016-44.tar.gz`
* `de_head_0000_2017-13.tar.gz`
* `de_head_0000_2017-30.tar.gz`
* `de_head_0000_2017-39.tar.gz`
* `de_head_0000_2017-51.tar.gz`
* `de_head_0000_2018-09.tar.gz`
* `de_head_0000_2018-17.tar.gz`
* `de_head_0000_2018-30.tar.gz`
* `de_head_0000_2018-39.tar.gz`
* `de_head_0000_2018-51.tar.gz`
* `de_head_0000_2019-18.tar.gz`
* `de_head_0000_2019-30.tar.gz`
* `de_head_0006_2019-09.tar.gz`
* `de_head_0006_2019-18.tar.gz`
* `de_head_0006_2019-30.tar.gz`
* `de_head_0006_2019-47.tar.gz`
* `de_head_0006_2020-10.tar.gz`
* `de_head_0007_2018-30.tar.gz`
* `de_head_0007_2018-51.tar.gz`
* `de_head_0007_2019-09.tar.gz`
* `de_head_0007_2019-18.tar.gz`
* `de_head_0007_2019-47.tar.gz`
* `de_head_0007_2020-10.tar.gz`

The archives are then extracted and NLTK (with its `german` model) is used to split the corpus into sentences, as sketched below.
This results in a total training corpus size of 90 GB.
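A minimal sketch of this preprocessing step, assuming the archive members can be read as plain UTF-8 text; the paths and the treatment of the GC4 file layout are simplifications, not the actual scripts used for this model:

```python
import tarfile

import nltk

nltk.download("punkt")  # provides the German Punkt sentence splitter

# Hypothetical paths -- the real preprocessing scripts are not part of this repository.
archive_path = "de_head_0000_2015-48.tar.gz"
output_path = "corpus_sentences.txt"

with tarfile.open(archive_path, "r:gz") as archive, open(output_path, "w", encoding="utf-8") as out:
    for member in archive.getmembers():
        if not member.isfile():
            continue
        raw = archive.extractfile(member).read().decode("utf-8", errors="ignore")
        # Write one sentence per line, split with NLTK's German model.
        for sentence in nltk.sent_tokenize(raw, language="german"):
            out.write(sentence.strip() + "\n")
```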
# Training Details

We use the recently re-trained `dbmdz/german-gpt2` (version 2!) model as the backbone model.
Thus, the tokenizer and vocab are the same as used in the `dbmdz/german-gpt2` model.
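Because the tokenizer is shared, it can be inspected via the backbone checkpoint; a small sketch, assuming the `transformers` library is installed:

```python
from transformers import AutoTokenizer

# The tokenizer and vocab are shared with the dbmdz/german-gpt2 backbone.
tokenizer = AutoTokenizer.from_pretrained("dbmdz/german-gpt2")

print(len(tokenizer))  # vocabulary size
print(tokenizer.tokenize("Der Sinn des Lebens ist es"))
```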
The model was trained on a v3-8 TPU, with the following parameters:

```bash
python ./run_clm_flax.py --output_dir=/mnt/datasets/german-gpt2-larger/ \
--model_name_or_path dbmdz/german-gpt2 --do_train --do_eval --block_size=512 \
--per_device_train_batch_size=16 --per_device_eval_batch_size=16 \
--learning_rate=5e-3 --warmup_steps=1000 --adam_beta1=0.9 --adam_beta2=0.98 \
--weight_decay=0.01 --overwrite_output_dir --num_train_epochs=20 \
--logging_steps=500 --save_steps=2500 --eval_steps=2500 \
--train_file /mnt/datasets/gc4/train.txt \
--validation_file /mnt/datasets/gc4/validation.txt \
--preprocessing_num_workers 16
```

Training took around 17 days for 20 epochs.
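For orientation only (not stated explicitly above): assuming the stock `run_clm_flax.py` behaviour of multiplying the per-device batch size by the number of devices, the command corresponds to the following effective batch size:

```python
# Effective batch size of the command above, assuming run_clm_flax.py scales
# the per-device batch size by the number of devices (8 cores on a v3-8).
tpu_cores = 8
per_device_train_batch_size = 16
block_size = 512

global_batch_size = tpu_cores * per_device_train_batch_size  # 128 sequences per step
tokens_per_step = global_batch_size * block_size             # 65,536 tokens per step

print(global_batch_size, tokens_per_step)
```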
# Acknowledgments

Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC).
Thanks for providing access to the TFRC ❤️