---
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: llama2-7B-ReqORNot
  results: []
---

# llama2-7B-ReqORNot

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf); the training dataset is not yet documented (see below).
It achieves the following results on the evaluation set:
- Loss: 0.2597
- Accuracy: 0.8970
- Weighted precision: 0.8971
- Weighted recall: 0.8970
- Weighted f1: 0.8971
- Macro precision: 0.8969
- Macro recall: 0.8971
- Macro f1: 0.8970

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted precision | Weighted recall | Weighted f1 | Macro precision | Macro recall | Macro f1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------------------:|:---------------:|:-----------:|:---------------:|:------------:|:--------:|
| No log        | 1.0   | 237  | 0.4807          | 0.7896   | 0.7895             | 0.7896          | 0.7895      | 0.7894          | 0.7891       | 0.7892   |
| No log        | 2.0   | 474  | 0.3167          | 0.8605   | 0.8605             | 0.8605          | 0.8605      | 0.8604          | 0.8604       | 0.8604   |
| 0.5108        | 3.0   | 711  | 0.2709          | 0.8860   | 0.8869             | 0.8860          | 0.8860      | 0.8862          | 0.8866       | 0.8860   |
| 0.5108        | 4.0   | 948  | 0.2704          | 0.8880   | 0.8889             | 0.8880          | 0.8879      | 0.8894          | 0.8871       | 0.8876   |
| 0.1829        | 5.0   | 1185 | 0.2597          | 0.8970   | 0.8971             | 0.8970          | 0.8971      | 0.8969          | 0.8971       | 0.8970   |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- PyTorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
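
### Reproducing the reported hyperparameters (sketch)

For reference, the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows. This is a hypothetical reconstruction, not the original training script: the dataset, LoRA configuration, and metric computation are not documented in this card, and `evaluation_strategy="epoch"` is inferred from the per-epoch rows in the results table.

```python
# Hypothetical reconstruction of the reported settings (Transformers 4.38.2).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-7B-ReqORNot",
    learning_rate=2e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments
    # defaults (adam_beta1, adam_beta2, adam_epsilon), so they are left implicit.
    evaluation_strategy="epoch",  # assumption: the table reports per-epoch eval
)
```

The "No log" entries in the training-loss column are consistent with the default `logging_steps=500`: steps 237 and 474 fall before the first logging step.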
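
## How to use (sketch)

The card does not yet document inference, so the following is a minimal, untested sketch of loading this adapter with PEFT for sequence classification. The binary label count (`num_labels=2`) is an assumption based on the task name ("ReqORNot", i.e. requirement or not), and `adapter_id` is a placeholder for this repository's full Hub ID.

```python
# Minimal sketch: attach the PEFT adapter to the base model for classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "llama2-7B-ReqORNot"  # placeholder: replace with this repo's Hub ID

tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

base_model = AutoModelForSequenceClassification.from_pretrained(
    base_id,
    num_labels=2,  # assumption: binary "requirement vs. not" classification
    torch_dtype=torch.float16,
    device_map="auto",
)
base_model.config.pad_token_id = tokenizer.pad_token_id

model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

inputs = tokenizer(
    "The system shall log every failed login attempt.",
    return_tensors="pt",
).to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities
```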