---
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
datasets:
- GaetanMichelet/chat-60_ft_task-1
- GaetanMichelet/chat-120_ft_task-1
- GaetanMichelet/chat-180_ft_task-1
library_name: peft
license: llama3.1
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
model-index:
- name: Llama-31-8B_task-1_180-samples_config-3_full
  results: []
---

# Llama-31-8B_task-1_180-samples_config-3_full

This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) on the GaetanMichelet/chat-60_ft_task-1, GaetanMichelet/chat-120_ft_task-1, and GaetanMichelet/chat-180_ft_task-1 datasets. It achieves the following results on the evaluation set:
- Loss: 0.8992
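
This repository contains a PEFT adapter rather than full model weights, so it is loaded on top of the base model. The snippet below is a minimal, illustrative sketch (not from the original card): the adapter repo id is assumed from the model name, and the dtype and generation settings are arbitrary choices.

```python
# Illustrative usage sketch. Assumptions are marked; this is not the
# author's published inference code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"
ADAPTER_ID = "GaetanMichelet/Llama-31-8B_task-1_180-samples_config-3_full"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, ADAPTER_ID)

# Llama 3.1 Instruct expects its chat template.
messages = [{"role": "user", "content": "Hello! Who are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```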

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative mapping onto `TrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 150
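
As a rough illustration only (this is not the actual training recipe or script), the values above map onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder, and the Adam settings simply restate the values documented above.

```python
# Illustrative sketch: mapping the listed hyperparameters onto
# TrainingArguments. Not the original training configuration file.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Llama-31-8B_task-1_180-samples_config-3_full",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=8,  # 1 sample x 8 steps = effective batch of 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=150,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```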

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4542 | 1.0 | 17 | 2.4259 |
| 2.4022 | 2.0 | 34 | 2.3882 |
| 2.3317 | 3.0 | 51 | 2.3140 |
| 2.2607 | 4.0 | 68 | 2.2050 |
| 2.1352 | 5.0 | 85 | 2.0643 |
| 1.9456 | 6.0 | 102 | 1.8885 |
| 1.7528 | 7.0 | 119 | 1.7025 |
| 1.4935 | 8.0 | 136 | 1.4674 |
| 1.2733 | 9.0 | 153 | 1.2421 |
| 1.1154 | 10.0 | 170 | 1.1134 |
| 1.1202 | 11.0 | 187 | 1.0689 |
| 0.9449 | 12.0 | 204 | 1.0450 |
| 0.9973 | 13.0 | 221 | 1.0253 |
| 1.0562 | 14.0 | 238 | 1.0091 |
| 0.9947 | 15.0 | 255 | 0.9928 |
| 1.0096 | 16.0 | 272 | 0.9804 |
| 0.9222 | 17.0 | 289 | 0.9692 |
| 0.8838 | 18.0 | 306 | 0.9603 |
| 0.8942 | 19.0 | 323 | 0.9511 |
| 0.9058 | 20.0 | 340 | 0.9432 |
| 0.8837 | 21.0 | 357 | 0.9354 |
| 0.795 | 22.0 | 374 | 0.9315 |
| 0.8395 | 23.0 | 391 | 0.9243 |
| 0.8308 | 24.0 | 408 | 0.9169 |
| 0.7863 | 25.0 | 425 | 0.9138 |
| 0.7468 | 26.0 | 442 | 0.9068 |
| 0.7658 | 27.0 | 459 | 0.9008 |
| 0.7128 | 28.0 | 476 | 0.8992 |
| 0.6474 | 29.0 | 493 | 0.9064 |
| 0.6387 | 30.0 | 510 | 0.9089 |
| 0.6846 | 31.0 | 527 | 0.9096 |
| 0.6424 | 32.0 | 544 | 0.9173 |
| 0.6598 | 33.0 | 561 | 0.9238 |
| 0.6634 | 34.0 | 578 | 0.9290 |
| 0.5893 | 35.0 | 595 | 0.9400 |

Training ran for 35 of the configured 150 epochs; the best validation loss (0.8992, at epoch 28) matches the evaluation result reported above, consistent with keeping the best checkpoint after validation loss stopped improving.

### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1