---
tags:
- generated_from_trainer
model-index:
- name: full-lstm-4
  results: []
---
# full-lstm-4

This model appears to have been trained from scratch: the auto-generated card recorded neither a base checkpoint nor a dataset name.
It achieves the following results on the evaluation set:
- Loss: 3.9704
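
Assuming the reported loss is the mean token-level cross-entropy in nats (the Trainer's default for causal language modeling), this corresponds to a perplexity of about exp(3.9704) ≈ 53.0:

```python
import math

eval_loss = 3.9704                # evaluation loss reported above
perplexity = math.exp(eval_loss)  # perplexity = e^loss for mean cross-entropy in nats
print(round(perplexity, 1))       # ~53.0
```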
## Model description

More information needed

## Intended uses & limitations

More information needed
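
Pending fuller documentation, here is a minimal inference sketch, assuming the checkpoint loads through the standard Transformers auto classes. The repo id below is a placeholder (the card does not record one), and `trust_remote_code=True` is only a guess in case the LSTM uses custom modeling code:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/full-lstm-4"  # placeholder; the card does not record a repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)  # assumption
model.eval()

inputs = tokenizer("The quick brown fox", return_tensors="pt")
with torch.no_grad():
    # Sampling settings here are arbitrary demo choices, not recommendations.
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```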
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
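
These values map onto the standard `TrainingArguments` roughly as below. This is a sketch, not the recorded training script: the card does not say whether the batch sizes are per device or total, and model/dataset construction is omitted.

```python
from transformers import TrainingArguments

# Sketch matching the hyperparameters above (Transformers 4.33.x API).
training_args = TrainingArguments(
    output_dir="full-lstm-4",
    learning_rate=5e-5,
    per_device_train_batch_size=32,  # assumption: card does not say per-device vs. total
    per_device_eval_batch_size=32,
    seed=4,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    max_steps=3052726,               # "training_steps" above
)
```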
### Training results

| Training Loss | Epoch | Step    | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8004        | 0.03  | 76319   | 4.7641          |
| 4.5121        | 0.03  | 152638  | 4.4806          |
| 4.3684        | 1.03  | 228957  | 4.3448          |
| 4.2765        | 0.03  | 305276  | 4.2616          |
| 4.2121        | 1.03  | 381595  | 4.2050          |
| 4.1608        | 0.03  | 457914  | 4.1635          |
| 4.1255        | 1.03  | 534233  | 4.1324          |
| 4.0959        | 0.03  | 610552  | 4.1078          |
| 4.0666        | 0.03  | 686871  | 4.0883          |
| 4.0436        | 1.03  | 763190  | 4.0722          |
| 4.0239        | 0.03  | 839509  | 4.0580          |
| 4.0089        | 1.03  | 915828  | 4.0473          |
| 3.9888        | 0.03  | 992148  | 4.0371          |
| 3.9758        | 1.03  | 1068468 | 4.0294          |
| 3.9699        | 0.03  | 1144788 | 4.0218          |
| 3.9587        | 1.03  | 1221108 | 4.0161          |
| 3.9389        | 0.03  | 1297428 | 4.0109          |
| 3.931         | 1.03  | 1373748 | 4.0059          |
| 3.9193        | 0.03  | 1450068 | 4.0013          |
| 3.9126        | 1.03  | 1526388 | 3.9978          |
| 3.9121        | 0.03  | 1602708 | 3.9949          |
| 3.9067        | 1.03  | 1679028 | 3.9922          |
| 3.9079        | 0.03  | 1755348 | 3.9896          |
| 3.904         | 1.03  | 1831668 | 3.9874          |
| 3.897         | 0.03  | 1907988 | 3.9856          |
| 3.8901        | 1.03  | 1984308 | 3.9840          |
| 3.8877        | 0.03  | 2060628 | 3.9821          |
| 3.8779        | 1.03  | 2136948 | 3.9806          |
| 3.8775        | 0.03  | 2213268 | 3.9794          |
| 3.8691        | 0.03  | 2289588 | 3.9782          |
| 3.8627        | 0.03  | 2365908 | 3.9771          |
| 3.8616        | 1.03  | 2442228 | 3.9761          |
| 3.8491        | 0.03  | 2518548 | 3.9749          |
| 3.848         | 1.03  | 2594868 | 3.9738          |
| 3.8413        | 0.03  | 2671188 | 3.9730          |
| 3.842         | 1.03  | 2747508 | 3.9721          |
| 3.8472        | 0.03  | 2823828 | 3.9715          |
| 3.8438        | 1.03  | 2900148 | 3.9710          |
| 3.85          | 0.03  | 2976468 | 3.9706          |
| 3.8505        | 0.02  | 3052726 | 3.9704          |
### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
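
To reproduce this environment, pin the versions above, e.g. `pip install transformers==4.33.3 torch==2.0.1 datasets==2.12.0 tokenizers==0.13.3` (note that the PyPI package name for Pytorch is `torch`).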