Handbook v0.1 models and datasets
A collection by alignment-handbook, updated Nov 10, 2023

Models and datasets for v0.1 of the alignment handbook.
Models

- alignment-handbook/zephyr-7b-sft-full (Text Generation) · updated Jan 10 · 20k downloads · 22 likes
- alignment-handbook/zephyr-7b-sft-qlora · updated Jan 9 · 389 downloads · 7 likes
- alignment-handbook/zephyr-7b-dpo-full (Text Generation) · updated Jan 10 · 233 downloads · 3 likes
- alignment-handbook/zephyr-7b-dpo-qlora · updated Jan 9 · 78 downloads · 9 likes

Datasets

- HuggingFaceH4/ultrachat_200k (Viewer) · updated Oct 16 · 515k rows · 12.6k downloads · 477 likes
- HuggingFaceH4/ultrafeedback_binarized (Viewer) · updated Oct 16 · 187k rows · 6.29k downloads · 244 likes
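The SFT and DPO models in this collection are chat models in the Zephyr family. As a rough sketch, assuming they share the chat template documented for the zephyr-7b models (each turn tagged <|system|>/<|user|>/<|assistant|> and closed with </s>), a prompt can be rendered like this; the helper function below is illustrative, not part of any library, and each model's tokenizer_config.json is the authoritative source for its template:

```python
def format_zephyr_chat(messages):
    """Render a list of {"role", "content"} dicts into a Zephyr-style prompt:
    each turn is wrapped as <|role|>\\n{content}</s>\\n, and the prompt ends
    with an open <|assistant|> tag so the model continues from there."""
    prompt = ""
    for message in messages:
        prompt += f"<|{message['role']}|>\n{message['content']}</s>\n"
    # Leave the assistant turn open for the model to complete.
    return prompt + "<|assistant|>\n"

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is DPO?"},
]
print(format_zephyr_chat(messages))
```

In practice, calling `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` on the model's own tokenizer produces the exact template and avoids hard-coding it.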