Meta-Llama-3-8B_alpaca-clean_l0.0002_64

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B on the alpaca-clean dataset (inferred from the model name; the card metadata does not record the dataset). It achieves the following results on the evaluation set:

  • Loss: 2.0132

Model description

This repository appears to contain a PEFT (LoRA) adapter for meta-llama/Meta-Llama-3-8B rather than full model weights, judging by the PEFT version listed under Framework versions. Reading the repository name, l0.0002 matches the training learning rate and the trailing 64 is likely the LoRA rank.

Intended uses & limitations

More information needed
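
In the absence of an official snippet, here is a minimal, hedged loading sketch. It assumes this repo hosts a PEFT (LoRA) adapter, consistent with the PEFT entry under Framework versions, and that you have access to the gated base model; the Alpaca-style prompt is an assumption based on the dataset name.

```python
# Minimal sketch, not an official usage example.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "alexander-hm/Meta-Llama-3-8B_alpaca-clean_l0.0002_64"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Alpaca-style prompt: an assumption based on the alpaca-clean dataset name.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nName three primary colors.\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```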

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 0.0002
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 0
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.03
  • training_steps: 10000
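
A hedged sketch of how these values map onto transformers.TrainingArguments; output_dir is a placeholder, and the optimizer entry above matches the Trainer's default AdamW settings (betas=(0.9, 0.999), eps=1e-8):

```python
# Sketch only, not the author's actual training script.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Meta-Llama-3-8B_alpaca-clean_l0.0002_64",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=16,  # total train batch size: 1 * 16 = 16
    lr_scheduler_type="constant",    # note: a plain constant schedule ignores
    warmup_ratio=0.03,               # warmup_ratio; "constant_with_warmup" applies it
    max_steps=10_000,                # training_steps
)
```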

Training results

Validation loss bottoms out at 1.6409 (step 3179) and trends upward from epoch 2 onward, so intermediate checkpoints may generalize better than the final one (final evaluation loss: 2.0132).

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1179        | 0.0003 | 1    | 2.6461          |
| 2.1519        | 0.0590 | 187  | 1.7584          |
| 1.4299        | 0.1179 | 374  | 1.7539          |
| 1.1554        | 0.1769 | 561  | 1.7082          |
| 2.086         | 0.2359 | 748  | 1.7446          |
| 1.76          | 0.2949 | 935  | 1.6987          |
| 1.305         | 0.3538 | 1122 | 1.6802          |
| 1.1828        | 0.4128 | 1309 | 1.6931          |
| 2.0012        | 0.4718 | 1496 | 1.6620          |
| 1.6794        | 0.5307 | 1683 | 1.6657          |
| 1.2777        | 0.5897 | 1870 | 1.6543          |
| 1.1958        | 0.6487 | 2057 | 1.6923          |
| 2.4625        | 0.7077 | 2244 | 1.6584          |
| 1.7427        | 0.7666 | 2431 | 1.6458          |
| 1.1636        | 0.8256 | 2618 | 1.6477          |
| 1.1374        | 0.8846 | 2805 | 1.6614          |
| 2.2849        | 0.9436 | 2992 | 1.6419          |
| 1.0011        | 1.0025 | 3179 | 1.6409          |
| 2.5839        | 1.0615 | 3366 | 1.7018          |
| 1.334         | 1.1205 | 3553 | 1.6859          |
| 1.1876        | 1.1794 | 3740 | 1.6755          |
| 0.9747        | 1.2384 | 3927 | 1.6887          |
| 2.0187        | 1.2974 | 4114 | 1.6922          |
| 1.0511        | 1.3564 | 4301 | 1.6827          |
| 1.0223        | 1.4153 | 4488 | 1.6729          |
| 1.1295        | 1.4743 | 4675 | 1.6849          |
| 2.0358        | 1.5333 | 4862 | 1.6879          |
| 1.3046        | 1.5922 | 5049 | 1.6772          |
| 1.0023        | 1.6512 | 5236 | 1.6774          |
| 1.0365        | 1.7102 | 5423 | 1.6905          |
| 1.8732        | 1.7692 | 5610 | 1.6834          |
| 1.0398        | 1.8281 | 5797 | 1.6690          |
| 1.0103        | 1.8871 | 5984 | 1.6662          |
| 2.3888        | 1.9461 | 6171 | 1.6739          |
| 0.7728        | 2.0050 | 6358 | 1.6986          |
| 0.8759        | 2.0640 | 6545 | 1.8575          |
| 1.3133        | 2.1230 | 6732 | 1.8525          |
| 0.8286        | 2.1820 | 6919 | 1.7958          |
| 0.9336        | 2.2409 | 7106 | 1.7920          |
| 1.0528        | 2.2999 | 7293 | 1.9157          |
| 1.1672        | 2.3589 | 7480 | 1.8295          |
| 0.9818        | 2.4178 | 7667 | 1.7832          |
| 0.92          | 2.4768 | 7854 | 1.7895          |
| 1.1814        | 2.5358 | 8041 | 1.8489          |
| 1.3869        | 2.5948 | 8228 | 1.8023          |
| 0.8245        | 2.6537 | 8415 | 1.7785          |
| 0.8234        | 2.7127 | 8602 | 1.7827          |
| 1.6518        | 2.7717 | 8789 | 1.8250          |
| 1.1769        | 2.8307 | 8976 | 1.8055          |
| 0.881         | 2.8896 | 9163 | 1.7741          |
| 0.8681        | 2.9486 | 9350 | 1.7973          |
| 0.5482        | 3.0076 | 9537 | 1.9381          |
| 0.6616        | 3.0665 | 9724 | 1.9542          |
| 1.4274        | 3.1255 | 9911 | 2.0250          |
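
If the repo includes Trainer checkpoints, the curve above can be re-plotted from trainer_state.json (a standard transformers Trainer artifact; the path below is a placeholder):

```python
# Hedged sketch: plot the eval-loss curve from a Trainer checkpoint's
# trainer_state.json. The file layout (a "log_history" list of dicts)
# is the standard transformers Trainer format.
import json
import matplotlib.pyplot as plt

with open("checkpoint-10000/trainer_state.json") as f:  # placeholder path
    state = json.load(f)

evals = [e for e in state["log_history"] if "eval_loss" in e]
steps = [e["step"] for e in evals]
losses = [e["eval_loss"] for e in evals]

plt.plot(steps, losses)
plt.xlabel("step")
plt.ylabel("validation loss")
plt.title("Meta-Llama-3-8B_alpaca-clean_l0.0002_64 eval loss")
plt.show()
```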

Framework versions

  • PEFT 0.12.1.dev0
  • Transformers 4.45.0.dev0
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1
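
A quick way to check a local environment against the versions above (the PEFT and Transformers entries were dev builds at training time, so exact matches may require installing from source):

```python
# Print installed versions for comparison with the card's framework versions.
import datasets
import peft
import tokenizers
import torch
import transformers

for mod in (peft, transformers, torch, datasets, tokenizers):
    print(f"{mod.__name__}=={mod.__version__}")
```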
