---
license: llama2
library_name: peft
tags:
  - unsloth
  - generated_from_trainer
base_model: meta-llama/Llama-2-13b-hf
model-index:
  - name: llama_2_13b_Magiccoder_evol_10k_ortho_eye
    results: []
---

# llama_2_13b_Magiccoder_evol_10k_ortho_eye

This model is a fine-tuned version of meta-llama/Llama-2-13b-hf on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 1.0987

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.02
- num_epochs: 1
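A minimal sketch of how these hyperparameters fit together. The effective batch size follows from `train_batch_size * gradient_accumulation_steps`, and the learning-rate curve is modeled as a linear warmup followed by cosine decay to zero (the usual shape of a `cosine` scheduler). Note the listed `lr_scheduler_warmup_steps` of 0.02 reads like a warmup *ratio* rather than a step count; treating it as a fraction of the total steps is an assumption here.

```python
import math

# Values from the hyperparameter list above.
learning_rate = 1e-4
train_batch_size = 8
gradient_accumulation_steps = 8
total_steps = 152    # last logged optimizer step (~1 epoch)
warmup_ratio = 0.02  # assumption: the 0.02 "warmup_steps" is a ratio

# Effective (total) batch size per optimizer step: 8 * 8 = 64,
# matching the total_train_batch_size reported above.
effective_batch = train_batch_size * gradient_accumulation_steps

def lr_at(step, total=total_steps, peak=learning_rate, ratio=warmup_ratio):
    """Linear warmup followed by cosine decay to zero."""
    warmup_steps = max(1, round(ratio * total))
    if step < warmup_steps:
        return peak * step / warmup_steps
    progress = (step - warmup_steps) / max(1, total - warmup_steps)
    return peak * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With these numbers the warmup lasts only about 3 optimizer steps, after which the rate decays smoothly from 1e-4 toward zero over the remainder of the epoch.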

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.2093        | 0.0262 | 4    | 1.2192          |
| 1.1816        | 0.0523 | 8    | 1.1667          |
| 1.1073        | 0.0785 | 12   | 1.1438          |
| 1.0563        | 0.1047 | 16   | 1.1321          |
| 1.0947        | 0.1308 | 20   | 1.1265          |
| 1.0662        | 0.1570 | 24   | 1.1237          |
| 1.0636        | 0.1832 | 28   | 1.1206          |
| 1.1163        | 0.2093 | 32   | 1.1180          |
| 1.0423        | 0.2355 | 36   | 1.1149          |
| 1.1226        | 0.2617 | 40   | 1.1148          |
| 1.0991        | 0.2878 | 44   | 1.1124          |
| 1.1427        | 0.3140 | 48   | 1.1119          |
| 1.1193        | 0.3401 | 52   | 1.1081          |
| 1.0908        | 0.3663 | 56   | 1.1086          |
| 1.1406        | 0.3925 | 60   | 1.1077          |
| 1.1366        | 0.4186 | 64   | 1.1063          |
| 1.0866        | 0.4448 | 68   | 1.1051          |
| 1.1035        | 0.4710 | 72   | 1.1026          |
| 1.0523        | 0.4971 | 76   | 1.1021          |
| 1.1381        | 0.5233 | 80   | 1.1018          |
| 1.1521        | 0.5495 | 84   | 1.1016          |
| 1.0932        | 0.5756 | 88   | 1.1014          |
| 1.0467        | 0.6018 | 92   | 1.0999          |
| 1.0459        | 0.6280 | 96   | 1.0992          |
| 1.1049        | 0.6541 | 100  | 1.0996          |
| 1.0905        | 0.6803 | 104  | 1.0993          |
| 1.0828        | 0.7065 | 108  | 1.0992          |
| 1.1141        | 0.7326 | 112  | 1.0990          |
| 1.0616        | 0.7588 | 116  | 1.0989          |
| 1.083         | 0.7850 | 120  | 1.0984          |
| 1.1081        | 0.8111 | 124  | 1.0982          |
| 1.069         | 0.8373 | 128  | 1.0984          |
| 1.0085        | 0.8635 | 132  | 1.0988          |
| 1.1178        | 0.8896 | 136  | 1.0987          |
| 1.0337        | 0.9158 | 140  | 1.0988          |
| 1.0825        | 0.9419 | 144  | 1.0987          |
| 1.0976        | 0.9681 | 148  | 1.0987          |
| 1.0909        | 0.9943 | 152  | 1.0987          |
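The validation loss plateaus around 1.098 well before the end of the epoch. A quick way to locate the best checkpoint from the logged values (the pairs below are copied verbatim from the table above):

```python
# (step, validation_loss) pairs copied from the training-results table.
log = [(4, 1.2192), (8, 1.1667), (12, 1.1438), (16, 1.1321), (20, 1.1265),
       (24, 1.1237), (28, 1.1206), (32, 1.1180), (36, 1.1149), (40, 1.1148),
       (44, 1.1124), (48, 1.1119), (52, 1.1081), (56, 1.1086), (60, 1.1077),
       (64, 1.1063), (68, 1.1051), (72, 1.1026), (76, 1.1021), (80, 1.1018),
       (84, 1.1016), (88, 1.1014), (92, 1.0999), (96, 1.0992), (100, 1.0996),
       (104, 1.0993), (108, 1.0992), (112, 1.0990), (116, 1.0989), (120, 1.0984),
       (124, 1.0982), (128, 1.0984), (132, 1.0988), (136, 1.0987), (140, 1.0988),
       (144, 1.0987), (148, 1.0987), (152, 1.0987)]

# The lowest validation loss (1.0982) occurs at step 124; the final loss
# of 1.0987 is essentially tied with it, so the run has converged.
best_step, best_loss = min(log, key=lambda pair: pair[1])
```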

### Framework versions

- PEFT 0.7.1
- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
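To reproduce this environment, the pinned versions above can be installed directly. This assumes the standard PyPI package names; the CUDA 12.1 build of PyTorch comes from the PyTorch wheel index rather than PyPI.

```shell
# Pin the exact framework versions listed above.
pip install peft==0.7.1 transformers==4.40.2 datasets==2.19.1 tokenizers==0.19.1

# PyTorch 2.3.0 built against CUDA 12.1:
pip install torch==2.3.0 --index-url https://download.pytorch.org/whl/cu121
```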