---
license: apache-2.0
language:
  - en
---

# zephyr_0.05

The DPO-trained model initialized from alignment-handbook/zephyr-7b-sft-full, trained on 5% of the HuggingFaceH4/ultrafeedback_binarized data, as described in the paper "Weak-to-Strong Extrapolation Expedites Alignment".
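A minimal loading sketch with the `transformers` library. The repo id `chujiezheng/zephyr_0.05` is an assumption inferred from this card's location on the Hub; verify it before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id; confirm the exact id on the Hugging Face Hub.
MODEL_ID = "chujiezheng/zephyr_0.05"

def load_model(model_id: str = MODEL_ID):
    """Download and return (tokenizer, model); a 7B model needs roughly 14 GB in fp16."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    return tokenizer, model
```

Calling `load_model()` fetches the weights on first use and caches them locally, after which the model can be used like any other causal LM checkpoint.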