---
license: apache-2.0
datasets:
- openbmb/UltraFeedback
---

#### Dataset and Process:
- **Dataset**: ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses used**.

- **Methodology** (a minimal code sketch of one iteration follows this list):
  1. Generate five response variations for each prompt in a 20,000-prompt subset using the LLM; to start, we used [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
  2. Apply [PairRM](https://huggingface.co/llm-blender/PairRM) to rerank the responses.
  3. Update the LLM by applying Direct Preference Optimization (DPO) to the top-ranked (chosen) and bottom-ranked (rejected) responses.
  4. Use this updated LLM as the base model for the next iteration, repeating three times in total.
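
Below is a minimal, illustrative sketch of the data-generation side of one such iteration for a single prompt. It assumes `transformers` for generation and the `llm-blender` package (which hosts PairRM) for reranking; helper names such as `generate_candidates` and `build_dpo_pair` are our own illustrative choices, not part of the released training code.

```python
# Illustrative sketch of one alignment iteration; not the exact training pipeline.
import llm_blender  # pip install llm-blender; provides the PairRM ranker
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "mistralai/Mistral-7B-Instruct-v0.2"  # base LLM for the first iteration
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL, device_map="auto")

blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")  # general-purpose pairwise reward model


def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    """Step 1: sample n response variations for one prompt."""
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    outputs = model.generate(
        input_ids,
        do_sample=True,
        temperature=0.8,
        top_p=0.95,
        max_new_tokens=512,
        num_return_sequences=n,
    )
    return [
        tokenizer.decode(out[input_ids.shape[-1]:], skip_special_tokens=True)
        for out in outputs
    ]


def build_dpo_pair(prompt: str) -> dict:
    """Steps 2-3 (data side): rerank candidates with PairRM, keep best/worst as a DPO pair."""
    candidates = generate_candidates(prompt)
    ranks = blender.rank([prompt], [candidates])[0]  # rank 1 = best candidate
    order = sorted(range(len(candidates)), key=lambda i: ranks[i])
    return {
        "prompt": prompt,
        "chosen": candidates[order[0]],
        "rejected": candidates[order[-1]],
    }

# Step 3 (training side): run DPO on the collected {prompt, chosen, rejected} pairs,
# e.g. with the Alignment Handbook's DPO implementation, then use the resulting
# checkpoint as BASE_MODEL for the next of the three iterations (step 4).
```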

#### Key Premises:
- **Specialization Requirement**: Enterprises typically have very specific, advanced alignment axes that off-the-shelf LLMs are not yet aware of.
- **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses.
- **Programmatic Alignment**: Smaller but specialized teacher models (reward models) can incrementally align LLMs toward specific axes. We call this **Programmatic Alignment**: using programmatic, weak signals to guide LLM improvement. Multiple reward models can be scaled out to cover different axes as required.

#### Contemporary Work and Acknowledgements:
We would also like to acknowledge contemporary work published a few days ago by Meta & NYU in the paper [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020),
which proposes a similar approach for creating alignment pairs from a larger set of candidate responses, but using the LLM itself as the reward model.
While this may work for general-purpose models, our experience has shown that task-specific reward models guided by subject-matter experts (SMEs) are necessary for
most enterprise applications of LLMs to specific use cases.

#### Applications:
Unlike our customers, who have very specific use cases to align LLMs to,
the AlpacaEval 2.0 leaderboard measures the ability of LLMs to follow general user instructions.
Thus, for this demonstration, we use a general-purpose reward model: the performant [PairRM model](https://huggingface.co/llm-blender/PairRM) ([paper](https://arxiv.org/abs/2306.02561)).
We use the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) model as our base LLM.

With this demonstration, we focus on the general approach of programmatic alignment.

If you are interested in building **specialized internal reward models
that reflect your enterprise's needs**, please contact the Snorkel team or consider attending the
**[Enterprise LLM Summit: Building GenAI with Your Data](https://snorkel.ai/event/enterprise-llm-summit/)**
to learn more about how to "programmatically scale human preferences and alignment in GenAI".

#### Result:
- This model scored **30.2** on [Alpaca-Eval 2.0](https://tatsu-lab.github.io/alpaca_eval/), ranked #4 overall and the highest for an open-source base model at the time of publication.
- Using the model together with PairRM at inference time (generating 16 responses and submitting the one PairRM scores highest, as sketched below), we scored **34.86**, ranked #2. The best model on the leaderboard is "gpt-4-turbo".
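
For reference, that best-of-16 inference setup can be sketched as follows, reusing the `generate_candidates` helper and `blender` ranker from the sketch above (again illustrative, not the exact submission script):

```python
def best_of_n(prompt: str, n: int = 16) -> str:
    """Generate n candidate responses and return the one PairRM ranks highest."""
    candidates = generate_candidates(prompt, n=n)
    ranks = blender.rank([prompt], [candidates])[0]  # rank 1 = best candidate
    return candidates[min(range(n), key=lambda i: ranks[i])]
```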

We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the full range of LLM capabilities and performance.
However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark.
Moving forward, we anticipate further contributions from the community on new alignment axes, and we will conduct evaluations on other appropriate benchmarks.

## Limitations:
The model is a quick demonstration that LLMs can be programmatically aligned using smaller, specialized reward models.
It does not have any moderation mechanisms. We look forward to engaging with the community on ways to
make the model reliably respect guardrails, allowing for deployment in environments that require moderated outputs.

## Acknowledgments
- The Mistral AI Team for developing and releasing the advanced Mistral-7B-Instruct-v0.2 model.
- The authors of the [Direct Preference Optimization paper](https://arxiv.org/abs/2305.18290) for the innovative approach.
- The authors of the [Pairwise Reward Model for LLMs paper](https://arxiv.org/abs/2306.02561) for the powerful general-purpose reward model.
- The HuggingFace team for the DPO implementation under [The Alignment Handbook](https://github.com/huggingface/alignment-handbook).

## The Snorkel AI Team
Hoang Tran, Chris Glaze, Braden Hancock, Alex Ratner