RedaAlami/zephyr-7b-gemma-dpo
Library: PEFT
Artifacts: TensorBoard logs, Safetensors weights
Dataset: RedaAlami/PKU-SafeRLHF-Processed
Tags: gemma, alignment-handbook, trl, dpo, generated from Trainer
Quantization: 4-bit precision (bitsandbytes)
License: other
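A minimal usage sketch, assuming the repository hosts a PEFT adapter (with its adapter config and tokenizer files) as the PEFT and Safetensors tags suggest. The 4-bit bitsandbytes settings below (nf4 quantization, bfloat16 compute) and the example prompt are illustrative assumptions, not values taken from the repository.

```python
# Sketch: load the adapter in 4-bit via bitsandbytes and run a short generation.
import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

model_id = "RedaAlami/zephyr-7b-gemma-dpo"

# Illustrative 4-bit quantization config (nf4 + bfloat16 compute are assumptions).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# AutoPeftModelForCausalLM resolves the base model from the adapter config,
# so the base checkpoint does not need to be named explicitly here.
model = AutoPeftModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical prompt for a quick smoke test of the DPO-aligned adapter.
prompt = "Explain why a safety-aligned assistant should refuse harmful requests."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```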