---
language:
- "lb"
license: "mit"
tags:
- "luxembourgish"
- "lëtzebuergesch"
- "text generation"
model-index:
- name: "LuxGPT2"
  results:
  - task:
      type: "text-generation"
      name: "Text Generation"
    dataset:
      type: "LuxembourgishTestDataset"
      name: "Luxembourgish Test Dataset"
    metrics:
    - type: "accuracy"
      value: 0.33
    - type: "perplexity"
      value: 46.69
---
## LuxGPT-2
GPT-2 model for text generation in the Luxembourgish language, trained on 667 MB of text data consisting of RTL.lu news articles and comments, parliament speeches, the Luxembourgish Wikipedia, Newscrawl, Webcrawl, and subtitles.
The training took place on an Nvidia Tesla V100 with 32 GB of memory
- with an initial learning rate of 5e-5
- with a batch size of 4
- for 109 hours
- for 30 epochs
- using the transformers library

More detailed training information can be found in `trainer_state.json`.
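The hyperparameters listed above can be collected into a small configuration sketch. This is an illustrative reconstruction only, since the exact training script is not published; the variable names and the key names below are assumptions, not part of this repository.

```python
# Hedged sketch of the reported fine-tuning setup. Only the values
# documented above (learning rate, batch size, epochs) come from the
# model card; everything else is an assumption.
train_config = {
    "learning_rate": 5e-5,              # initial learning rate, as reported
    "per_device_train_batch_size": 4,   # batch size, as reported
    "num_train_epochs": 30,             # epochs, as reported (~109 hours)
}

def summarize(config: dict) -> str:
    """Render the documented hyperparameters as a one-line summary."""
    return ", ".join(f"{k}={v}" for k, v in sorted(config.items()))
```

These keys mirror the names used by the transformers `TrainingArguments` class, so the dictionary could be passed to it as keyword arguments if one wanted to reproduce a similar run.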
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("laurabernardy/LuxGPT2")
model = AutoModelForCausalLM.from_pretrained("laurabernardy/LuxGPT2")
```
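Building on the snippet above, a minimal generation helper might look as follows. The function name, the prompt, and the decoding parameters (`max_new_tokens`, `do_sample`, `top_p`) are illustrative choices, not recommendations from the model authors.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

def generate_luxembourgish(prompt: str, max_new_tokens: int = 50) -> str:
    """Generate a continuation for a Luxembourgish prompt with LuxGPT2.

    Downloads the model from the Hugging Face Hub on first use.
    """
    tokenizer = AutoTokenizer.from_pretrained("laurabernardy/LuxGPT2")
    model = AutoModelForCausalLM.from_pretrained("laurabernardy/LuxGPT2")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,                        # sampling; illustrative choice
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,   # GPT-2 has no pad token
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example (requires network access to fetch the model):
# print(generate_luxembourgish("Lëtzebuerg ass"))
```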
## Limitations and Biases
See the [GPT2 model card](https://huggingface.co/gpt2) for considerations on limitations and bias. See the [GPT2 documentation](https://huggingface.co/transformers/model_doc/gpt2.html) for details on GPT2.