sfulay/zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo
Tags: Safetensors · mistral · trl · dpo · alignment-handbook · Generated from Trainer
License: apache-2.0
File: zephyr-7b-dpo-full-gpt_consistent-reward-scale-1-rpo / tokenizer.json (revision 9b16a50)
Commit: "Training in progress, step 100" by sfulay · 242749d (verified) · 24 days ago
Size: 1.8 MB
File too large to display; check the raw version instead.