---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Meta-Llama-3-8B-Instruct
model-index:
- name: LLAMA3-8BI-APPS
  results: []
datasets:
- codeparrot/apps
metrics:
- accuracy
- bleu
- rouge
pipeline_tag: text-generation
---

# LLAMA3-8BI-APPS

This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) on the [codeparrot/apps](https://huggingface.co/datasets/codeparrot/apps) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1490

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal loading sketch is provided under "How to use" at the end of this card.

## Training and evaluation data

The adapter was trained on [codeparrot/apps](https://huggingface.co/datasets/codeparrot/apps) (APPS), a benchmark of Python coding problems collected from competitive-programming sites. Details of the train/evaluation split used here are not documented.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1000

A sketch of how these settings map onto `TrainingArguments` is given below, after the framework versions.

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9027        | 0.1   | 100  | 0.9320          |
| 0.8632        | 0.2   | 200  | 0.9143          |
| 0.8572        | 0.3   | 300  | 1.0150          |
| 0.937         | 0.4   | 400  | 1.0545          |
| 1.0336        | 0.5   | 500  | 1.1029          |
| 1.0056        | 0.6   | 600  | 1.1267          |
| 1.0125        | 0.7   | 700  | 1.1307          |
| 1.028         | 0.8   | 800  | 1.1398          |
| 1.0692        | 0.9   | 900  | 1.1482          |
| 1.0361        | 1.0   | 1000 | 1.1490          |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
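
### Mapping the hyperparameters to `TrainingArguments`

The original training script is not published. As a point of reference only, the hyperparameters above correspond roughly to the following `transformers.TrainingArguments` configuration; the `output_dir`, evaluation cadence, and logging settings are assumptions inferred from the results table (validation loss is reported every 100 steps).

```python
from transformers import TrainingArguments

# A reconstruction of the reported hyperparameters, not the original script.
# output_dir, evaluation_strategy, eval_steps, and logging_steps are assumptions.
training_args = TrainingArguments(
    output_dir="LLAMA3-8BI-APPS",    # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # effective train batch size: 4 * 16 = 64
    max_steps=1000,
    warmup_steps=100,
    lr_scheduler_type="cosine",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",     # argument name as of Transformers 4.38.x
    eval_steps=100,
    logging_steps=100,
)
```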
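
## How to use

A minimal inference sketch, assuming the PEFT adapter in this repository is loaded on top of the gated base model (you must first accept the Meta Llama 3 license on the Hub). The id `<user>/LLAMA3-8BI-APPS` is a placeholder for this repository's full path, and the `bfloat16` dtype is an assumption, not a documented requirement.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B-Instruct"
adapter_id = "<user>/LLAMA3-8BI-APPS"  # placeholder: replace with this repo's full id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; common choice for Llama 3 on recent GPUs
    device_map="auto",
)
# Attach the fine-tuned PEFT adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_id)

# Llama 3 Instruct expects its chat template to be applied before generation.
messages = [
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the adapter is a LoRA (PEFT's most common adapter type), calling `model.merge_and_unload()` after loading folds it into the base weights for adapter-free serving.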