Commit 30b5d17 by bradenjh
1 Parent(s): 6371b57

Update README.md

Files changed (1):
  1. README.md +16 -15
README.md CHANGED
@@ -16,18 +16,15 @@ ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFac
  4. Use this LLM as the base model for the next iteration, repeating three times in total.

  This overview provides a high-level summary of our approach.
- We plan to release more detailed results and findings in the coming weeks on [Snorkel blogs](https://snorkel.ai/blog/).
+ We plan to release more detailed results and findings in the coming weeks on the [Snorkel blog](https://snorkel.ai/blog/).

  #### Key Premises:
- - **Specialization Requirement**: In enterprises, you will have very specific advanced alignment axes, where your LLMs currently do not have such awareness yet.
+ - **Specialization Requirement**: For most enterprise use cases, using LLMs "off-the-shelf" falls short of production quality, necessitating additional fine-tuning and alignment.
  - **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses.
- - **Programmatic Alignment**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes. We call this **Programmatic Alignment** - using programmatic, weak signals to guide your LLM improvement. Multiple reward models can be scaled to different axes as required.
+ - **Programmatic Alignment**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes. We call this **Programmatic Alignment** - capturing domain knowledge in programmatic forms that can be used to guide LLM improvement.

  #### Contemporary Work and Acknowledgements:
- We would also like to acknowledge contemporary work published a few days ago by Meta & NYU in a paper called [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020),
- which proposes a similar approach for creating alignment pairs from a larger set of candidate responses but using their LLM as the reward model.
- While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for
- most enterprise applications of LLMs to specific use cases.
+

  #### Applications:
  Unlike our customers, who have very specific use cases to align LLMs to,
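The loop referenced above (sample candidate responses from the current model, rank them with the PairRM reward model, keep the best and worst as a preference pair, then run DPO and repeat) can be pictured with a short sketch. This is a minimal, unofficial illustration rather than the released training code: the candidate count, sampling settings, and the `build_pairs` helper are assumptions, and the `llm_blender` calls follow the public PairRM model card.

```python
# Unofficial sketch of one programmatic-alignment iteration: sample candidate
# responses from the current model, rank them with PairRM, and keep the
# best/worst candidates as a (chosen, rejected) pair for DPO training.
# num_candidates, the sampling settings, and build_pairs() are illustrative
# assumptions, not values taken from this repository.
from transformers import AutoModelForCausalLM, AutoTokenizer
import llm_blender  # pip install llm-blender

BASE = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")

blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")  # the reward model used as the ranker


def build_pairs(prompts, num_candidates=8):
    rows = []
    for prompt in prompts:
        chat = tokenizer.apply_chat_template(
            [{"role": "user", "content": prompt}],
            tokenize=False,
            add_generation_prompt=True,
        )
        inputs = tokenizer(chat, return_tensors="pt").to(model.device)
        outputs = model.generate(
            **inputs,
            do_sample=True,
            temperature=0.8,
            top_p=0.95,
            max_new_tokens=512,
            num_return_sequences=num_candidates,
        )
        prompt_len = inputs["input_ids"].shape[1]
        candidates = [
            tokenizer.decode(out[prompt_len:], skip_special_tokens=True)
            for out in outputs
        ]
        # PairRM returns a rank per candidate; rank 1 is assumed to be the best.
        ranks = list(blender.rank([prompt], [candidates])[0])
        chosen = candidates[ranks.index(min(ranks))]
        rejected = candidates[ranks.index(max(ranks))]
        rows.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return rows  # feed these pairs to a DPO trainer (e.g., TRL / Alignment Handbook)
```

Pairs built this way are what the DPO step consumes; using the newly trained model as the generator for the next round gives the iterations described in the methodology.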
@@ -38,10 +35,9 @@ We use the [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7
  With this demonstration, we focus on the general approach of programmatic alignment.

  For interest in building your **specialized internal reward models
- that reflect your enterprises' needs**, please contact the Snorkel team or consider attending our
+ that reflect your enterprises' needs**, please contact the Snorkel AI team or consider attending our
  [**Enterprise LLM Summit: Building GenAI with Your Data on January 25, 2024**](https://snorkel.ai/event/enterprise-llm-summit/)
- to learn more about "Programmatically scale human preferences and alignment in GenAI".
-
+ to learn more about "Programmatically scaling human preferences and alignment in GenAI".

  #### Result:
  On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
@@ -49,7 +45,7 @@ On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
  After applying the above methodology:
  - This model scored **30.2** - ranked 3rd and the highest for an open-source base model at the time of publication.
  - Utilizing the model with PairRM, which involved generating 16 responses and submitting the highest-scoring response by PairRM, we scored **34.86** - ranked 2nd.
- The best model on the leaderboard is "gpt-4-turbo".
+ The best model on the leaderboard is "gpt-4-turbo", which is also the judge of optimal responses.

  We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the full range of capabilities and performances of LLMs.
  However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark.
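The **34.86** entry above uses PairRM only at inference time: sample a batch of responses and submit the one PairRM ranks highest. Below is a minimal sketch of that best-of-n selection, assuming a hypothetical `generate_fn(prompt, n)` helper that returns n sampled responses; the `llm_blender` calls again follow the public PairRM model card.

```python
# Unofficial sketch of best-of-n decoding with PairRM re-ranking.
# generate_fn is a hypothetical helper: (prompt, n) -> list of n sampled responses.
import llm_blender  # pip install llm-blender

blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")


def best_of_n(prompt, generate_fn, n=16):
    candidates = generate_fn(prompt, n)
    # PairRM ranks the candidates for this prompt; rank 1 is assumed to be the best.
    ranks = list(blender.rank([prompt], [candidates])[0])
    return candidates[ranks.index(min(ranks))]
```

This trades extra inference compute (16 generations per prompt here) for the score gain reported above.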
@@ -57,14 +53,19 @@ Moving forward, we anticipate further contributions from the community regarding

  ## Limitations:
  The model is a quick demonstration that the LLMs can be programmatically aligned using smaller specialized reward models.
- It does not have any moderation mechanisms. We're looking forward to engaging with the community on ways to
- make the model finely respect guardrails, allowing for deployment in environments requiring moderated outputs.
+ It does not have any moderation mechanisms.
+ We look forward to continuing to engage with the research community and our customers exploring optimal methods for getting models to respect guardrails,
+ allowing for deployment in environments requiring moderated outputs.

- ## Acknowledgments
+ ## Acknowledgments:
  - The Mistral AI Team for developing and releasing the advanced Mistral-7B-Instruct-v0.2 model.
  - The author of the [Direct Preference Optimization paper](https://arxiv.org/abs/2305.18290) for the innovative approach
  - The author of the [Pairwise Reward Model for LLMs paper](https://arxiv.org/abs/2306.02561) for the powerful general-purpose reward model
  - The HuggingFace team for the DPO implementation under [The Alignment Handbook](https://github.com/huggingface/alignment-handbook)
+ - We would also like to acknowledge contemporary work published on arXiv a few days ago by Meta & NYU (Yuan et al.) in a paper called [Self-Rewarding Language Models](https://arxiv.org/abs/2401.10020),
+ which proposes a similar approach for creating alignment pairs from a larger set of candidate responses, but using the LLM as the reward model.
+ While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for most
+ enterprise applications of LLMs for specific use cases, which is why we focus on the use of external reward models.

  ## The Snorkel AI Team
- Hoang Tran, Chris Glaze, Braden Hancock, Alex Ratner
+ Hoang Tran, Chris Glaze, Braden Hancock
 