sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
Tags: Safetensors · mistral · trl · dpo · alignment-handbook · Generated from Trainer
License: apache-2.0
zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo / all_results.json
Commit History
Model save · 4d79451 (verified) · sfulay committed 19 days ago
Model save · 0a3e251 (verified) · sfulay committed 20 days ago
Model save · 44bba64 (verified) · sfulay committed 24 days ago