sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
Tags: Safetensors, mistral, trl, dpo, alignment-handbook, Generated from Trainer
License: apache-2.0
training_args.bin
Commit History
Model save (9b16a50, verified) - sfulay committed on Sep 2
Model save (4d79451, verified) - sfulay committed on Sep 2
Model save (0a3e251, verified) - sfulay committed on Sep 2
Training in progress, step 100 (242749d, verified) - sfulay committed on Aug 28