---
library_name: transformers
license: llama3.2
base_model: tanliboy/llama-3.2-3b
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- tanliboy/OpenHermes-2.5-reformat
model-index:
- name: llama-3.2-3b-sft
  results: []
---
# llama-3.2-3b-sft
This model is a fine-tuned version of [tanliboy/llama-3.2-3b](https://huggingface.co/tanliboy/llama-3.2-3b) on the [tanliboy/OpenHermes-2.5-reformat](https://huggingface.co/datasets/tanliboy/OpenHermes-2.5-reformat) dataset. It achieves the following results on the evaluation set:
- Loss: 0.7216
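The card does not yet include a usage example, so here is a minimal inference sketch using the transformers text-generation pipeline. The repo id `tanliboy/llama-3.2-3b-sft` is an assumption inferred from the model name above, as is bfloat16 precision; adjust both to your setup.

```python
# Minimal inference sketch. Assumptions (not stated in this card):
# the fine-tuned weights live at "tanliboy/llama-3.2-3b-sft" and the
# model follows its tokenizer's chat template.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="tanliboy/llama-3.2-3b-sft",  # assumed repo id
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-format input: the pipeline applies the tokenizer's chat template.
messages = [{"role": "user", "content": "Explain supervised fine-tuning in two sentences."}]
out = pipe(messages, max_new_tokens=128)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```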
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged config sketch follows the list):
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
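For anyone trying to reproduce this run, the list above maps onto a TRL `SFTConfig` roughly as sketched below. This is a hedged reconstruction, not the actual training script: bf16 precision, the dataset split names, and how the trainer consumes the chat-formatted dataset are assumptions not recorded in this card.

```python
# Hedged reconstruction of the listed hyperparameters with trl's SFTTrainer.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer

model_id = "tanliboy/llama-3.2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Assumption: the dataset exposes a chat-style column that SFTTrainer can
# consume directly (e.g. "messages"); the real run may preprocess differently.
dataset = load_dataset("tanliboy/OpenHermes-2.5-reformat")

config = SFTConfig(
    output_dir="llama-3.2-3b-sft",
    learning_rate=3e-6,
    per_device_train_batch_size=8,   # 8 per GPU x 8 GPUs x 2 accum = 128 total
    per_device_eval_batch_size=8,    # 8 per GPU x 8 GPUs = 64 total
    gradient_accumulation_steps=2,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    bf16=True,                       # assumption: precision not stated in card
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=config,
    train_dataset=dataset["train"],
    eval_dataset=dataset.get("test"),  # assumption: split names
)
trainer.train()
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the default `adamw_torch` settings, so no explicit optimizer arguments are needed.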
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8741        | 0.0448 | 100  | 0.8600          |
| 0.8038        | 0.0897 | 200  | 0.8095          |
| 0.7937        | 0.1345 | 300  | 0.7789          |
| 0.7712        | 0.1794 | 400  | 0.7644          |
| 0.7393        | 0.2242 | 500  | 0.7565          |
| 0.7458        | 0.2691 | 600  | 0.7506          |
| 0.7694        | 0.3139 | 700  | 0.7458          |
| 0.713         | 0.3587 | 800  | 0.7422          |
| 0.7347        | 0.4036 | 900  | 0.7387          |
| 0.7243        | 0.4484 | 1000 | 0.7356          |
| 0.7161        | 0.4933 | 1100 | 0.7331          |
| 0.7247        | 0.5381 | 1200 | 0.7308          |
| 0.7477        | 0.5830 | 1300 | 0.7288          |
| 0.7429        | 0.6278 | 1400 | 0.7273          |
| 0.7317        | 0.6726 | 1500 | 0.7256          |
| 0.7226        | 0.7175 | 1600 | 0.7243          |
| 0.695         | 0.7623 | 1700 | 0.7234          |
| 0.7167        | 0.8072 | 1800 | 0.7226          |
| 0.686         | 0.8520 | 1900 | 0.7221          |
| 0.7214        | 0.8969 | 2000 | 0.7218          |
| 0.7358        | 0.9417 | 2100 | 0.7216          |
| 0.7259        | 0.9865 | 2200 | 0.7216          |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1