sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
Tags: Safetensors · mistral · trl · dpo · alignment-handbook · Generated from Trainer
License: apache-2.0
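The tags above indicate a Mistral-architecture causal LM fine-tuned with TRL's DPO trainer and stored as sharded Safetensors. As a minimal sketch, assuming the repo follows the standard transformers checkpoint layout implied by those tags, the model could be loaded like this:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Usage sketch (assumption): the repo follows the standard transformers
# layout implied by the `mistral` and Safetensors tags.
repo_id = "sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; places the shards automatically
)
```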
Commit History for model-00003-of-00003.safetensors (branch: main)
Model save · 9b16a50 (verified) · sfulay committed on Sep 2
Model save · 4d79451 (verified) · sfulay committed on Sep 2
Model save · 0a3e251 (verified) · sfulay committed on Sep 2
Model save · 44bba64 (verified) · sfulay committed on Aug 28
Training in progress, step 400 · 374e156 (verified) · sfulay committed on Aug 28
Training in progress, step 300 · 2c7d48d (verified) · sfulay committed on Aug 28
Training in progress, step 200 · 133d893 (verified) · sfulay committed on Aug 28
Training in progress, step 100 · 242749d (verified) · sfulay committed on Aug 28
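Each entry above carries an abbreviated commit hash, so this shard can be pinned to a specific revision rather than tracking `main`. A minimal sketch with `huggingface_hub` (`revision` accepts a branch name, tag, or commit hash and defaults to `"main"`):

```python
from huggingface_hub import hf_hub_download

# Fetch this shard at the latest "Model save" commit from the history above.
path = hf_hub_download(
    repo_id="sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo",
    filename="model-00003-of-00003.safetensors",
    revision="9b16a50",  # abbreviated hash; a full commit SHA also works
)
print(path)  # local cache path of the downloaded shard
```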