---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: llama-3-8b-local-definitivo
results: []
---
# llama-3-8b-local-definitivo
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4655
## Model description
This is a supervised fine-tune (SFT) of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) produced with the TRL library, as indicated by the card's `trl` and `sft` tags. Note that, despite the repository name, the base model is Llama 2 7B, not Llama 3 8B. No further description was provided by the author.
## Intended uses & limitations
The author did not document intended uses or limitations. As a Llama 2 derivative, the model is subject to the Llama 2 license noted in the metadata.
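A minimal inference sketch, assuming the fine-tuned weights are available under this card's name (a hypothetical identifier; point it at wherever the weights actually live):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id / local path; the card does not say where the weights are hosted.
model_id = "llama-3-8b-local-definitivo"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a ~16 GB GPU
    device_map="auto",
)

prompt = "Explain gradient accumulation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```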
## Training and evaluation data
The training and evaluation data were not recorded; the Trainer logged the dataset as `None`.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged TRL sketch reproducing them follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
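The following is a minimal sketch of how these settings map onto `transformers`/`trl` for the framework versions listed below. The dataset source, text field name, and sequence length are assumptions; the card does not record them.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Hypothetical data source: the actual training data is undocumented.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

base = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

args = TrainingArguments(
    output_dir="llama-3-8b-local-definitivo",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # total train batch size 4 on a single GPU
    lr_scheduler_type="linear",
    warmup_steps=10,
    num_train_epochs=1,
    seed=42,  # the Adam betas/epsilon above are the library defaults
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    dataset_text_field="text",  # assumed field name
    max_seq_length=1024,        # assumed; not recorded in the card
)
trainer.train()
```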
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8917 | 0.03 | 25 | 1.7768 |
| 1.7137 | 0.05 | 50 | 1.7402 |
| 1.7492 | 0.08 | 75 | 1.7194 |
| 1.6895 | 0.1 | 100 | 1.7036 |
| 1.7168 | 0.13 | 125 | 1.6899 |
| 1.7184 | 0.15 | 150 | 1.6784 |
| 1.7081 | 0.18 | 175 | 1.6700 |
| 1.7245 | 0.21 | 200 | 1.6555 |
| 1.7603 | 0.23 | 225 | 1.6453 |
| 1.6707 | 0.26 | 250 | 1.6344 |
| 1.7224 | 0.28 | 275 | 1.6233 |
| 1.7112 | 0.31 | 300 | 1.6178 |
| 1.7531 | 0.34 | 325 | 1.6067 |
| 1.6894 | 0.36 | 350 | 1.5967 |
| 1.609 | 0.39 | 375 | 1.5895 |
| 1.6563 | 0.41 | 400 | 1.5818 |
| 1.5761 | 0.44 | 425 | 1.5744 |
| 1.6282 | 0.46 | 450 | 1.5630 |
| 1.637 | 0.49 | 475 | 1.5567 |
| 1.6759 | 0.52 | 500 | 1.5497 |
| 1.577 | 0.54 | 525 | 1.5402 |
| 1.6314 | 0.57 | 550 | 1.5334 |
| 1.6907 | 0.59 | 575 | 1.5297 |
| 1.5755 | 0.62 | 600 | 1.5207 |
| 1.5822 | 0.64 | 625 | 1.5163 |
| 1.549 | 0.67 | 650 | 1.5088 |
| 1.5865 | 0.7 | 675 | 1.5012 |
| 1.6242 | 0.72 | 700 | 1.4994 |
| 1.5511 | 0.75 | 725 | 1.4916 |
| 1.6663 | 0.77 | 750 | 1.4880 |
| 1.6563 | 0.8 | 775 | 1.4847 |
| 1.6347 | 0.83 | 800 | 1.4826 |
| 1.6682 | 0.85 | 825 | 1.4779 |
| 1.6995 | 0.88 | 850 | 1.4738 |
| 1.6295 | 0.9 | 875 | 1.4711 |
| 1.6469 | 0.93 | 900 | 1.4686 |
| 1.5073 | 0.95 | 925 | 1.4663 |
| 1.5953 | 0.98 | 950 | 1.4655 |
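For context, the final validation loss of 1.4655 corresponds to a perplexity of roughly 4.33, assuming the loss is mean token-level cross-entropy (as the HF Trainer reports):

```python
import math

# Perplexity from mean token cross-entropy loss.
print(math.exp(1.4655))  # ≈ 4.33
```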
### Framework versions
- Transformers 4.34.0
- PyTorch 2.1.2+cu121
- Datasets 2.12.0
- Tokenizers 0.14.1