---
language:
  - zh
license: apache-2.0
library_name: peft
tags:
  - trl
  - sft
  - nycu-112-2-deeplearning-hw2
  - generated_from_trainer
base_model: MediaTek-Research/Breeze-7B-Instruct-v1_0
datasets:
  - DandinPower/ZH-Reading-Comprehension-Breeze-Instruct
model-index:
  - name: breeze_7b_lora_completion_only_5_epochs
    results: []
---

# breeze_7b_lora_completion_only_5_epochs

This model is a fine-tuned version of [MediaTek-Research/Breeze-7B-Instruct-v1_0](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v1_0) on the [DandinPower/ZH-Reading-Comprehension-Breeze-Instruct](https://huggingface.co/datasets/DandinPower/ZH-Reading-Comprehension-Breeze-Instruct) dataset. It achieves the following results on the evaluation set:

- Loss: 0.1658
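
The card does not include usage code, so below is a minimal loading sketch with `peft` and `transformers`. The adapter repo id `DandinPower/breeze_7b_lora_completion_only_5_epochs` is inferred from the model name above, and the bfloat16 precision and use of the Breeze chat template are assumptions, not settings taken from this card:

```python
# Minimal sketch (not from the card): load the base model and attach the LoRA adapter.
# Assumptions: the adapter is published at the repo id below, and bf16 GPU inference is OK.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "MediaTek-Research/Breeze-7B-Instruct-v1_0"
adapter_id = "DandinPower/breeze_7b_lora_completion_only_5_epochs"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Build the prompt with the tokenizer's chat template (assumed to ship with Breeze-Instruct).
messages = [{"role": "user", "content": "請閱讀文章並回答問題…"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```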

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 700
- num_epochs: 5.0
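
These settings map one-to-one onto standard `transformers.TrainingArguments` fields. As a reading aid, here is a hypothetical reconstruction; the actual training script is not part of this card, and anything not listed above (e.g. precision flags) is left at its default:

```python
# Hypothetical reconstruction of the configuration above; not the author's script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="breeze_7b_lora_completion_only_5_epochs",
    learning_rate=1e-4,
    per_device_train_batch_size=1,   # train_batch_size above
    per_device_eval_batch_size=1,    # eval_batch_size above
    gradient_accumulation_steps=8,   # 2 GPUs x 1 per device x 8 steps = 16 total train batch
    lr_scheduler_type="linear",
    warmup_steps=700,                # lr_scheduler_warmup_steps
    num_train_epochs=5.0,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults,
    # so no explicit adam_beta1/adam_beta2/adam_epsilon overrides are needed.
)
```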

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.1419        | 0.3690 | 250  | 0.1250          |
| 0.1404        | 0.7380 | 500  | 0.1611          |
| 0.1554        | 1.1070 | 750  | 0.1358          |
| 0.1426        | 1.4760 | 1000 | 0.1543          |
| 0.1194        | 1.8450 | 1250 | 0.1823          |
| 0.0865        | 2.2140 | 1500 | 0.1511          |
| 0.0728        | 2.5830 | 1750 | 0.1463          |
| 0.4116        | 2.9520 | 2000 | 0.1224          |
| 0.0405        | 3.3210 | 2250 | 0.1939          |
| 0.0573        | 3.6900 | 2500 | 0.1324          |
| 0.0237        | 4.0590 | 2750 | 0.1657          |
| 0.0208        | 4.4280 | 3000 | 0.1818          |
| 0.0111        | 4.7970 | 3250 | 0.1658          |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- Pytorch 2.2.2+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1