Critique-out-Loud Reward Models Collection • Paper: https://arxiv.org/abs/2408.11791 | Code: https://github.com/zankner/CLoud • 7 items • Updated Sep 5
Tulu V2.5 Suite Collection • A suite of models trained with DPO and PPO across up to 14 preference datasets. See https://arxiv.org/abs/2406.09279 for details. • 44 items • Updated 7 days ago
The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models • Paper • 2406.05761 • Published Jun 9
Awesome feedback datasets Collection • A curated list of datasets with human or AI feedback, useful for training reward models or applying techniques like DPO. • 19 items • Updated Apr 12
Prometheus 2 Collection • Quantized versions of Prometheus 2, an alternative to GPT-4 for fine-grained evaluation of an underlying LLM. • 2 items • Updated May 8
Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models • Paper • 2405.01535 • Published May 2
The Feedback Collection Collection • Dataset and model for "Prometheus: Inducing Fine-grained Evaluation Capability in Language Models" • 6 items • Updated Nov 12, 2023
CLIcK: A Benchmark Dataset of Cultural and Linguistic Intelligence in Korean • Paper • 2403.06412 • Published Mar 11
FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets • Paper • 2307.10928 • Published Jul 20, 2023