---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- GaetanMichelet/chat-60_ft_task-1
- GaetanMichelet/chat-120_ft_task-1
- GaetanMichelet/chat-180_ft_task-1
library_name: peft
license: llama3.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-31-8B_task-1_180-samples_config-4
  results: []
---
# Llama-31-8B_task-1_180-samples_config-4
This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the [GaetanMichelet/chat-60_ft_task-1](https://huggingface.co/datasets/GaetanMichelet/chat-60_ft_task-1), [GaetanMichelet/chat-120_ft_task-1](https://huggingface.co/datasets/GaetanMichelet/chat-120_ft_task-1), and [GaetanMichelet/chat-180_ft_task-1](https://huggingface.co/datasets/GaetanMichelet/chat-180_ft_task-1) datasets. It achieves the following results on the evaluation set:
- Loss: 1.2589
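A minimal usage sketch, assuming the PEFT adapter is published on the Hub under the author's namespace as `GaetanMichelet/Llama-31-8B_task-1_180-samples_config-4` (the exact repo id is an assumption) and that access to the gated Llama 3.1 base model has been granted:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
adapter_id = "GaetanMichelet/Llama-31-8B_task-1_180-samples_config-4"  # assumed Hub repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the PEFT adapter

# The base is an Instruct model, so generation goes through the chat template.
messages = [{"role": "user", "content": "Hello, who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```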
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 150
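The card does not include the training script; per the `trl` and `sft` tags, training likely went through TRL's `SFTTrainer`. As a hedged sketch, the hyperparameters above would map to `transformers.TrainingArguments` roughly as follows (the output directory is illustrative, and the multi-GPU distribution is handled by the launcher, e.g. `accelerate`, rather than by these arguments):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-31-8B_task-1_180-samples_config-4",  # illustrative
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    gradient_accumulation_steps=16,  # effective batch size: 1 x 16 = 16
    num_train_epochs=150,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    adam_beta1=0.9,                  # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```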
### Training results
| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 2.0972        | 0.9412  | 8    | 2.0718          |
| 2.0234        | 2.0     | 17   | 2.0545          |
| 2.0324        | 2.9412  | 25   | 2.0288          |
| 2.0064        | 4.0     | 34   | 1.9798          |
| 1.9611        | 4.9412  | 42   | 1.9139          |
| 1.8283        | 6.0     | 51   | 1.8090          |
| 1.6817        | 6.9412  | 59   | 1.7011          |
| 1.5762        | 8.0     | 68   | 1.6085          |
| 1.5529        | 8.9412  | 76   | 1.5659          |
| 1.4817        | 10.0    | 85   | 1.5206          |
| 1.5125        | 10.9412 | 93   | 1.4816          |
| 1.3226        | 12.0    | 102  | 1.4352          |
| 1.3823        | 12.9412 | 110  | 1.3951          |
| 1.2564        | 14.0    | 119  | 1.3580          |
| 1.1936        | 14.9412 | 127  | 1.3305          |
| 1.2322        | 16.0    | 136  | 1.3061          |
| 1.1389        | 16.9412 | 144  | 1.2910          |
| 1.2119        | 18.0    | 153  | 1.2775          |
| 1.0796        | 18.9412 | 161  | 1.2672          |
| 1.088         | 20.0    | 170  | 1.2627          |
| 1.0344        | 20.9412 | 178  | 1.2631          |
| 1.0175        | 22.0    | 187  | 1.2589          |
| 0.9509        | 22.9412 | 195  | 1.2707          |
| 0.8574        | 24.0    | 204  | 1.2784          |
| 0.8673        | 24.9412 | 212  | 1.2985          |
| 0.8657        | 26.0    | 221  | 1.3300          |
| 0.7453        | 26.9412 | 229  | 1.3725          |
| 0.7771        | 28.0    | 238  | 1.3823          |
| 0.6941        | 28.9412 | 246  | 1.4508          |
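Note that although `num_epochs` was set to 150, the log ends near epoch 29, and the reported evaluation loss (1.2589) matches the epoch-22 checkpoint, where validation loss is lowest before it begins to rise. This is consistent with early stopping and restoring the best checkpoint, though the card does not state this explicitly.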
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1