---
license: apache-2.0
datasets:
- oscar-corpus/OSCAR-2109
language:
- pl
- en
pipeline_tag: text-generation
library_name: transformers
---

# B-GPT_pl_en_sequential

This is a bilingual GPT-2 style model. For the first half of training, the model was trained only on Polish data; for the second half, it was trained only on English data. By the end of training, 50% of the training data seen by the model was Polish and 50% was English. The tokenizer was trained on the same overall data proportions as the language model at its final step.

## Model details:

All models are trained with a [CLS] token (equivalent to [BOS]) prepended and a [SEP] token (equivalent to [EOS]) separating sequences.
For best results, make sure that [CLS] is prepended to your input sequence (see the sketch below).
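
A minimal sketch of preparing an input, assuming the tokenizer registers [CLS] as a special token but does not insert it automatically:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("catherinearnett/B-GPT_pl_en_sequential")

# Prepend [CLS] by hand (assumption: the tokenizer does not add it on its own).
# Fall back to the literal string if cls_token is not set on this tokenizer.
cls = tokenizer.cls_token or "[CLS]"
inputs = tokenizer(cls + " To jest przykład.", return_tensors="pt")
```
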
Details for this model specifically:

* Architecture: gpt2
* Parameters: 124,770,816
* Maximum sequence length: 512 tokens
* Training tokens: 12B
* Vocabulary size: 50000
* Compute cost: ~9 NVIDIA A6000 GPU hours
* CO2 Emission: 1.17 kg

Training dataset: [OSCAR 2021/09](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)

Checkpoints are taken at training steps: 0, 10000, 20000, 30000, 40000, 50000, 64000, 64010, 64020, 64030, 64040, 64050, 64060, 64070, 64080, 64090, 64100, 64110, 64120, 64130, 64140, 64150, 64160, 64170, 64180, 64190, 64200, 64300, 64400, 64500, 64600, 64700, 64800, 64900, 65000, 66000, 67000, 68000, 69000, 70000, 80000, 90000, 100000, 110000, 120000, 128000.

## Use This Model

Load the model:

Note: if you do not specify a revision, the final checkpoint of the model is loaded. See the list of checkpoints above; each checkpoint's training step is the name of its revision.

```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("catherinearnett/B-GPT_pl_en_sequential")

# The revision selects a training checkpoint; "128000" is the final step.
model = AutoModel.from_pretrained("catherinearnett/B-GPT_pl_en_sequential", revision="128000")
```
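
Intermediate checkpoints load the same way. For example, step 64000 is presumably the last checkpoint of the Polish-only phase, since training runs for 128000 steps and switches language at the midpoint:

```
from transformers import AutoModel

# Assumption: step 64000 marks the end of the Polish-only half of training.
polish_only_model = AutoModel.from_pretrained(
    "catherinearnett/B-GPT_pl_en_sequential", revision="64000"
)
```
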

Text Generation:

```
from transformers import pipeline

pipe = pipeline("text-generation", model="catherinearnett/B-GPT_pl_en_sequential")

pipe("I am a")
```
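
The pipeline handles tokenization for you. If you call the model directly, here is a minimal sketch, assuming this gpt2 checkpoint loads with `AutoModelForCausalLM` and that [CLS] must be added by hand as noted in the model details (sampling parameters are illustrative):

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("catherinearnett/B-GPT_pl_en_sequential")
model = AutoModelForCausalLM.from_pretrained("catherinearnett/B-GPT_pl_en_sequential")

# Prepend [CLS] as recommended above; use the literal string if cls_token is unset.
cls = tokenizer.cls_token or "[CLS]"
inputs = tokenizer(cls + " I am a", return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```
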

## Citation

If you use this model, please cite:

```


```