mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit
Tags: MLX · English · mixtral · Mixtral · instruct · finetune · chatml · DPO · RLHF · gpt4 · synthetic data · distillation
License: apache-2.0
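Since this repository hosts an MLX-format 4-bit quantization, it can typically be loaded on Apple silicon with the mlx-lm package. Below is a minimal sketch, assuming mlx-lm is installed and using a generic ChatML-style prompt (suggested by the chatml tag); the authoritative prompt template is the one shown in the model's README.

```python
# Minimal sketch (assumption: `pip install mlx-lm` on an Apple silicon machine).
# Loads the 4-bit MLX weights and generates from a ChatML-style prompt.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Nous-Hermes-2-Mixtral-8x7B-DPO-4bit")

# Hypothetical prompt following the ChatML convention; adjust to the README's example.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain DPO in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

response = generate(model, tokenizer, prompt=prompt, max_tokens=200)
print(response)
```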
Commit History (branch: main)
Update README.md · f1d96ff (verified) · thomadev0 committed on Jan 17
Update README.md · 695f3cb (verified) · thomadev0 committed on Jan 17
Update README.md · ab34c2d (verified) · thomadev0 committed on Jan 17
Update README.md · db61ade (verified) · thomadev0 committed on Jan 17
Update README.md · 56d7451 (verified) · thomadev0 committed on Jan 17
Add prompt example · 0dbbb82 (verified) · thomadev0 committed on Jan 17
Upload folder using huggingface_hub · c77f0fc (verified) · thomadev0 committed on Jan 17
initial commit · dfdb0ce (verified) · thomadev0 committed on Jan 17