---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Joeni/distilgpt2-finetuned-shakespeare
  results: []
---
# Joeni/distilgpt2-finetuned-shakespeare
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2), trained with Keras on a Shakespeare text corpus (the exact dataset was not recorded).
It achieves the following results after the final training epoch:
- Train Loss: 3.1768
- Validation Loss: 3.5077
- Epoch: 19
## Model description
The base model, [distilgpt2](https://huggingface.co/distilgpt2), is a distilled, lighter-weight version of GPT-2 for causal language modeling. This checkpoint fine-tunes it with Keras to generate text in the style of Shakespeare, as the model name indicates; no further details were recorded by the training callback.
## Intended uses & limitations
The model is intended for generating short passages of Shakespeare-style English, e.g. for demos or creative-writing aids. As a small distilled GPT-2, it can produce incoherent or anachronistic text, and it inherits the biases of GPT-2's web pretraining data; it should not be relied on for factual output. A minimal loading sketch follows.
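A minimal usage sketch, assuming the checkpoint is published on the Hugging Face Hub under the name above:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub; the pipeline picks an
# available framework (TensorFlow or PyTorch) automatically.
generator = pipeline("text-generation", model="Joeni/distilgpt2-finetuned-shakespeare")

# Sample a short Shakespeare-style continuation.
out = generator("Shall I compare thee", max_new_tokens=40, do_sample=True)
print(out[0]["generated_text"])
```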
## Training and evaluation data
More information needed. The model name points to a corpus of Shakespeare's works, but the dataset itself was not recorded by the training callback.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamWeightDecay (learning_rate: 2e-05, weight_decay_rate: 0.01, beta_1: 0.9, beta_2: 0.999, epsilon: 1e-07, amsgrad: False, decay: 0.0)
- training_precision: float32
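For reference, a sketch of how an optimizer with these settings can be reconstructed using the `AdamWeightDecay` class that `transformers` ships for Keras training (the actual training loop was not recorded; this only mirrors the logged hyperparameters):

```python
from transformers import AdamWeightDecay, TFAutoModelForCausalLM

# Rebuild the optimizer from the logged hyperparameters.
optimizer = AdamWeightDecay(
    learning_rate=2e-5,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-7,
)

# transformers TF models compute the LM loss internally when labels are
# supplied, so compile() needs no explicit loss function.
model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")
model.compile(optimizer=optimizer)
```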
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2091 | 3.8246 | 0 |
| 3.9001 | 3.6943 | 1 |
| 3.7802 | 3.6283 | 2 |
| 3.7044 | 3.5885 | 3 |
| 3.6483 | 3.5617 | 4 |
| 3.5981 | 3.5464 | 5 |
| 3.5571 | 3.5362 | 6 |
| 3.5193 | 3.5247 | 7 |
| 3.4848 | 3.5205 | 8 |
| 3.4528 | 3.5129 | 9 |
| 3.4202 | 3.5090 | 10 |
| 3.3915 | 3.5002 | 11 |
| 3.3608 | 3.5041 | 12 |
| 3.3321 | 3.4999 | 13 |
| 3.3039 | 3.4969 | 14 |
| 3.2783 | 3.4997 | 15 |
| 3.2511 | 3.5030 | 16 |
| 3.2269 | 3.5071 | 17 |
| 3.2013 | 3.5086 | 18 |
| 3.1768 | 3.5077 | 19 |
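Validation loss bottoms out at 3.4969 around epoch 14 and drifts slightly upward afterwards while the training loss keeps falling, which points to mild overfitting in the later epochs. Assuming the reported losses are the usual mean per-token cross-entropy in nats, perplexity is simply `exp(loss)`:

```python
import math

final_val_loss = 3.5077  # epoch 19
best_val_loss = 3.4969   # epoch 14

# Perplexity = exp(mean cross-entropy loss).
print(f"final validation perplexity: {math.exp(final_val_loss):.1f}")  # ~33.4
print(f"best validation perplexity:  {math.exp(best_val_loss):.1f}")   # ~33.0
```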
### Framework versions
- Transformers 4.26.1
- TensorFlow 2.11.0
- Datasets 2.10.1
- Tokenizers 0.13.2