viethoangtranduong committed on
Commit 112d5dd
1 Parent(s): 7b7b07b

Update README.md

Files changed (1): README.md (+12 -13)
README.md CHANGED
@@ -1,15 +1,17 @@
 ---
 license: apache-2.0
 datasets:
-- openbmb/UltraFeedback
+- snorkelai/Snorkel-Mistral-Self-Improvement
 ---

 Original post: [Snorkel link]

-#### Dataset:
-ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses used**.
+### Dataset:
+Training dataset: [snorkelai/Snorkel-Mistral-Self-Improvement](link)

-#### Methodology:
+We utilize ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized); **no external LLM responses used**.
+
+### Methodology:
 1. Generate five response variations for each prompt from a subset of 20,000 using the LLM - to start, we used [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2).
 2. Apply [PairRM](https://huggingface.co/llm-blender/PairRM) for response reranking.
 3. Update the LLM by applying Direct Preference Optimization (DPO) on the top (chosen) and bottom (rejected) responses.
@@ -18,15 +20,12 @@ ONLY the prompts from [UltraFeedback](https://huggingface.co/datasets/HuggingFac
 This overview provides a high-level summary of our approach.
 We plan to release more detailed results and findings in the coming weeks on the [Snorkel blog](https://snorkel.ai/blog/).

-#### Key Premises:
+### Key Premises:
 - **Specialization Requirement**: For most enterprise use cases, using LLMs "off-the-shelf" falls short of production quality, necessitating additional fine-tuning and alignment.
 - **Ease of Model Building**: Creating ranking/scoring/classification models is simpler than developing high-quality, manually annotated datasets for long-form responses.
 - **Programmatic Alignment**: Using smaller but specialized teacher models (reward models) can incrementally align LLMs towards specific axes. We call this **Programmatic Alignment** - capturing domain knowledge in programmatic forms that can be used to guide LLM improvement.

-#### Contemporary Work and Acknowledgements:
-
-
-#### Applications:
+### Applications:
 Unlike our customers, who have very specific use cases to align LLMs to,
 the AlpacaEval 2.0 leaderboard measures the ability of LLMs to follow general user instructions.
 Thus, for this demonstration, we use a general-purpose reward model - the performant [PairRM model](https://huggingface.co/llm-blender/PairRM).
@@ -39,7 +38,7 @@ that reflect your enterprises' needs**, please contact the Snorkel AI team or co
 [**Enterprise LLM Summit: Building GenAI with Your Data on January 25, 2024**](https://snorkel.ai/event/enterprise-llm-summit/)
 to learn more about "Programmatically scaling human preferences and alignment in GenAI".

-#### Result:
+### Result:
 On [**Alpaca-Eval 2.0**](https://tatsu-lab.github.io/alpaca_eval/):
 - The base model: [Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) scored **14.72**.
 After applying the above methodology:
@@ -51,13 +50,13 @@ We recognize that the Alpaca-Eval 2.0 benchmark does not entirely capture the fu
 However, in our current work, where the goal is to align with general "human preferences," Alpaca-Eval 2.0 serves as a suitable and representative benchmark.
 Moving forward, we anticipate further contributions from the community regarding new alignment axes, and conduct evaluations using other appropriate benchmarks.

-## Limitations:
+### Limitations:
 The model is a quick demonstration that LLMs can be programmatically aligned using smaller specialized reward models.
 It does not have any moderation mechanisms.
 We look forward to continuing to engage with the research community and our customers exploring optimal methods for getting models to respect guardrails,
 allowing for deployment in environments requiring moderated outputs.

-## Acknowledgments:
+### Contemporary Work and Acknowledgements:
 - The Mistral AI Team for developing and releasing the advanced Mistral-7B-Instruct-v0.2 model.
 - The authors of the [Direct Preference Optimization paper](https://arxiv.org/abs/2305.18290) for the innovative approach
 - The authors of the [Pairwise Reward Model for LLMs paper](https://arxiv.org/abs/2306.02561) for the powerful general-purpose reward model
@@ -67,5 +66,5 @@ which proposes a similar general approach for creating alignment pairs from a la
 While this may work for general-purpose models, our experience has shown that task-specific reward models guided by SMEs are necessary for most
 enterprise applications of LLMs for specific use cases, which is why we focus on the use of external reward models.

-## The Snorkel AI Team
+### The Snorkel AI Team
 Hoang Tran, Chris Glaze, Braden Hancock
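
For readers who want to see what one round of the three Methodology steps in the updated README could look like in code, below is a minimal, illustrative Python sketch: sample five responses per prompt with Mistral-7B-Instruct-v0.2, rank them with PairRM via the llm-blender package, and build chosen/rejected pairs for DPO training with trl. Only the model, reward-model, and dataset names come from the README; the sampling settings, the tiny eight-prompt slice, the helper function, and the trainer arguments are assumptions for illustration, not Snorkel's actual training pipeline.

```python
# Hypothetical sketch of one self-improvement round from the Methodology section:
# sample candidates, rank with PairRM, build DPO pairs, run DPO.
import llm_blender
import torch
from datasets import Dataset, load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL, torch_dtype=torch.bfloat16, device_map="auto"
)

# Step 1: generate five sampled response variations per prompt.
def generate_candidates(prompt: str, n: int = 5) -> list[str]:
    chat = tokenizer.apply_chat_template(
        [{"role": "user", "content": prompt}], tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(chat, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs, do_sample=True, temperature=0.8, top_p=0.95,
        num_return_sequences=n, max_new_tokens=512,
    )
    gen_only = outputs[:, inputs["input_ids"].shape[1]:]  # strip the prompt tokens
    return tokenizer.batch_decode(gen_only, skip_special_tokens=True)

# Step 2: rerank the candidates with PairRM (llm-blender).
blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")

# Tiny slice for illustration only; the README describes a 20,000-prompt subset.
prompts = [
    ex["prompt"]
    for ex in load_dataset("HuggingFaceH4/ultrafeedback_binarized", split="train_prefs").select(range(8))
]

pairs = []
for prompt in prompts:
    candidates = generate_candidates(prompt)
    ranks = blender.rank([prompt], [candidates], return_scores=False)[0]  # rank 1 = best
    ordered = [c for _, c in sorted(zip(ranks, candidates))]
    pairs.append({"prompt": prompt, "chosen": ordered[0], "rejected": ordered[-1]})

dpo_dataset = Dataset.from_list(pairs)

# Step 3: DPO on the top (chosen) vs. bottom (rejected) responses.
# Exact trainer arguments depend on the installed trl version; recent releases
# take a DPOConfig, older ones take TrainingArguments plus a `beta` kwarg.
from trl import DPOConfig, DPOTrainer

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="mistral-7b-pairrm-dpo", beta=0.1),
    train_dataset=dpo_dataset,
    processing_class=tokenizer,
)
trainer.train()
```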