model_shp2_dpo1

This model is a fine-tuned version of meta-llama/Llama-2-7b-chat-hf on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 1.3878
  • Rewards/chosen: -7.7597
  • Rewards/rejected: -8.0105
  • Rewards/accuracies: 0.6100
  • Rewards/margins: 0.2509
  • Logps/rejected: -290.7956
  • Logps/chosen: -314.9470
  • Logits/rejected: -1.2493
  • Logits/chosen: -1.2818
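
Since this repository holds a PEFT adapter trained on top of meta-llama/Llama-2-7b-chat-hf (see the framework versions below), it can likely be loaded by attaching the adapter to the base model. The following is a minimal sketch only: the repository id guoyu-zhang/model_shp2_dpo1 is taken from the model page, while the dtype, device placement, prompt format, and generation settings are illustrative assumptions not stated in this card.

```python
# Minimal loading sketch (assumption: this repo is a PEFT adapter for
# meta-llama/Llama-2-7b-chat-hf; repo id taken from the model page).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"
adapter_id = "guoyu-zhang/model_shp2_dpo1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)  # attach the DPO adapter
model.eval()

# Llama-2-chat-style prompt; the exact prompt format used in training is not documented here.
prompt = "[INST] What is direct preference optimization? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```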

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 4
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 100
  • training_steps: 1000
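
The Rewards/chosen, Rewards/rejected, and Rewards/margins metrics reported above are characteristic of DPO training, for example with trl's DPOTrainer, although the card does not name the training script. The sketch below shows how the listed hyperparameters could map onto such a setup; the use of trl, the preference dataset, the LoRA configuration, and the beta value are all assumptions, not facts from this card.

```python
# Hypothetical DPO training sketch: the card does not name the training framework,
# but the Rewards/* metrics and hyperparameters are consistent with trl's DPOTrainer.
# The dataset path, LoRA config, and beta below are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig
from trl import DPOTrainer
from datasets import load_dataset

base_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_id)

# Preference dataset with "prompt" / "chosen" / "rejected" columns (placeholder file).
dataset = load_dataset("json", data_files="preferences.json")["train"]

training_args = TrainingArguments(
    output_dir="model_shp2_dpo1",
    learning_rate=5e-4,                 # 0.0005 as listed above
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,      # total train batch size 16
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1000,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,                     # with a PEFT adapter, the frozen base model serves as reference
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=LoraConfig(task_type="CAUSAL_LM"),  # LoRA settings are an assumption
    beta=0.1,                           # DPO temperature; not reported in this card
)
trainer.train()
```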

Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|---------------|-------|------|-----------------|----------------|------------------|--------------------|-----------------|----------------|--------------|-----------------|---------------|
| 0.0884        | 2.67  | 100  | 0.9487          | -2.0234        | -2.1729          | 0.5500             | 0.1495          | -232.4193      | -257.5841    | -1.3214         | -1.2894       |
| 0.0009        | 5.33  | 200  | 1.4986          | -7.8348        | -7.9036          | 0.5200             | 0.0687          | -289.7258      | -315.6984    | -1.3177         | -1.3419       |
| 0.0001        | 8.0   | 300  | 1.3323          | -7.1704        | -7.4119          | 0.6100             | 0.2415          | -284.8095      | -309.0548    | -1.2674         | -1.2968       |
| 0.0001        | 10.67 | 400  | 1.3579          | -7.4927        | -7.7408          | 0.6100             | 0.2481          | -288.0981      | -312.2774    | -1.2590         | -1.2900       |
| 0.0001        | 13.33 | 500  | 1.3799          | -7.6344        | -7.8716          | 0.6000             | 0.2372          | -289.4062      | -313.6946    | -1.2541         | -1.2860       |
| 0.0001        | 16.0  | 600  | 1.3885          | -7.7023        | -7.9449          | 0.5900             | 0.2425          | -290.1390      | -314.3737    | -1.2519         | -1.2836       |
| 0.0001        | 18.67 | 700  | 1.3971          | -7.7545        | -7.9878          | 0.6100             | 0.2332          | -290.5677      | -314.8956    | -1.2500         | -1.2826       |
| 0.0001        | 21.33 | 800  | 1.3951          | -7.7604        | -8.0061          | 0.6000             | 0.2458          | -290.7514      | -314.9539    | -1.2490         | -1.2817       |
| 0.0001        | 24.0  | 900  | 1.3904          | -7.7591        | -8.0015          | 0.6100             | 0.2424          | -290.7051      | -314.9411    | -1.2491         | -1.2818       |
| 0.0001        | 26.67 | 1000 | 1.3878          | -7.7597        | -8.0105          | 0.6100             | 0.2509          | -290.7956      | -314.9470    | -1.2493         | -1.2818       |

Framework versions

  • PEFT 0.10.0
  • Transformers 4.39.1
  • PyTorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2