---
license: llama3
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: experiments
  results: []
---

# experiments

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6596

Note that 1.6596 is the validation loss at the final logged step; as the training results below show, validation loss reached its minimum of 1.3649 around step 350 (epoch 2.91) and rose afterwards, so an earlier checkpoint may generalize better.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` appears at the end of this card):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 20
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7073        | 0.4162 | 50   | 1.5993          |
| 1.4004        | 0.8325 | 100  | 1.4527          |
| 1.3051        | 1.2487 | 150  | 1.4122          |
| 1.2396        | 1.6649 | 200  | 1.3871          |
| 1.2044        | 2.0812 | 250  | 1.3906          |
| 1.1019        | 2.4974 | 300  | 1.3775          |
| 1.2682        | 2.9136 | 350  | 1.3649          |
| 1.1681        | 3.3299 | 400  | 1.4233          |
| 1.1343        | 3.7461 | 450  | 1.4160          |
| 0.7987        | 4.1623 | 500  | 1.4964          |
| 0.8663        | 4.5786 | 550  | 1.5011          |
| 0.7473        | 4.9948 | 600  | 1.4845          |
| 0.7386        | 5.4110 | 650  | 1.5706          |
| 0.61          | 5.8273 | 700  | 1.5695          |
| 0.4689        | 6.2435 | 750  | 1.6596          |

### Framework versions

- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 1.13.1+cu117
- Datasets 2.19.2
- Tokenizers 0.19.1
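
## Reproducing the training configuration (sketch)

The `trl` and `sft` tags indicate the model was trained with TRL's supervised fine-tuning flow, but the actual training script is not included in this card. The sketch below only maps the hyperparameters listed above onto a `transformers.TrainingArguments`; the output path, evaluation cadence, and all LoRA settings are assumptions labeled as such, not the values actually used.

```python
# A sketch mapping the listed hyperparameters onto transformers/peft objects.
# The dataset, LoRA settings, and output paths are NOT recorded in this card
# and appear here only as labeled placeholders.
from transformers import TrainingArguments
from peft import LoraConfig

training_args = TrainingArguments(
    output_dir="experiments",           # placeholder
    learning_rate=1e-4,                 # learning_rate: 0.0001
    per_device_train_batch_size=1,      # train_batch_size: 1
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    gradient_accumulation_steps=8,      # total_train_batch_size: 1 * 8 = 8
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=20,
    fp16=True,                          # mixed_precision_training: Native AMP
    optim="adamw_torch",                # Trainer default: betas=(0.9, 0.999), eps=1e-08
    eval_strategy="steps",              # assumed: the card logs validation loss every 50 steps
    eval_steps=50,
    logging_steps=50,
)

# LoRA hyperparameters are not listed anywhere in this card;
# these values are placeholders, not the real configuration.
peft_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
)
```

These two objects would then be passed to TRL's `SFTTrainer` along with a training dataset; since neither the dataset nor the TRL version is recorded here, that wiring is left out.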
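
## Using the adapter (sketch)

The card provides no usage instructions, so here is a minimal inference sketch that loads the PEFT adapter on top of the base model with `transformers` and `peft`. The adapter id `your-username/experiments` is a placeholder; substitute the actual Hub repo id or a local adapter directory.

```python
# Minimal inference sketch, assuming the adapter was pushed to the Hub or
# saved locally. "your-username/experiments" is a hypothetical placeholder.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "your-username/experiments"  # placeholder: real adapter repo or path

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned LoRA weights to the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Llama 3 Instruct expects its chat template around user messages.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    out = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and decode only the generated continuation.
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```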