Handbook v0.1 models and datasets
Updated Nov 10, 2023
Models and datasets for v0.1 of the alignment handbook
alignment-handbook/zephyr-7b-sft-full • Text Generation • Updated Jan 10 • 29.8k downloads • 22 likes
alignment-handbook/zephyr-7b-sft-qlora • Updated Jan 9 • 416 downloads • 7 likes
alignment-handbook/zephyr-7b-dpo-full • Text Generation • Updated Jan 10 • 378 downloads • 3 likes
alignment-handbook/zephyr-7b-dpo-qlora • Updated Jan 9 • 59 downloads • 9 likes
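The four zephyr-7b checkpoints above are the SFT and DPO models from the v0.1 recipes; the -full variants are complete checkpoints, while the -qlora variants are typically loaded as adapters on their base model. A minimal sketch of loading the full SFT model with transformers (the model ID comes from the list above; the dtype, device placement, and chat-template call are assumptions, not part of this collection):

# Sketch: load one of the full checkpoints listed above with transformers.
# Assumes transformers, torch, and accelerate are installed and there is enough memory for a 7B model.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="alignment-handbook/zephyr-7b-sft-full",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Zephyr-style models expect chat-formatted prompts; apply_chat_template builds one.
messages = [{"role": "user", "content": "Explain direct preference optimization in one sentence."}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])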
HuggingFaceH4/ultrachat_200k • Viewer • Updated 20 days ago • 515k rows • 12.9k downloads • 473 likes
HuggingFaceH4/ultrafeedback_binarized • Viewer • Updated 20 days ago • 187k rows • 6.16k downloads • 234 likes
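The two HuggingFaceH4 datasets are the training data behind these checkpoints: UltraChat 200k for supervised fine-tuning and the binarized UltraFeedback for DPO. A minimal sketch of inspecting them with the datasets library; the split and column names are assumptions, so check each dataset card for the authoritative list:

# Sketch: load the two datasets listed above with the `datasets` library.
# Split and column names below are assumptions; see the dataset cards.
from datasets import load_dataset

ultrachat = load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft")
print(ultrachat[0]["messages"][0])  # one {"role", "content"} turn of a multi-turn dialogue

ultrafeedback = load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs")
print(ultrafeedback[0].keys())  # expected to include "chosen" and "rejected" for preference tuning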