Hugging Face
sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
Tags: Safetensors · mistral · trl · dpo · alignment-handbook · Generated from Trainer
License: apache-2.0
zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo / training_args.bin
Commit History
Model save · 4d79451 · verified · sfulay committed 19 days ago
Model save · 0a3e251 · verified · sfulay committed 20 days ago
Training in progress, step 100 · 242749d · verified · sfulay committed 25 days ago