---
license: cc-by-nc-4.0
datasets:
- berkeley-nest/Nectar
language:
- en
tags:
- RLHF
- RLAIF
---

# Starling-RM-7B-alpha
<!-- Provide a quick summary of what the model is/does. -->

- **Developed by:** Banghua Zhu\*, Evan Frick\*, Tianhao Wu\*, Hanlin Zhu, and Jiantao Jiao.
- **Model type:** Reward Model for RLHF
- **License:** Non-commercial license
- **Finetuned from model:** [Llama2-7B-Chat](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)

We introduce Starling-7B, an open large language model (LLM) trained by Reinforcement Learning from AI Feedback (RLAIF). The model harnesses the power of our new GPT-4-labeled ranking dataset, Nectar, and our new reward-training and policy-tuning pipeline. Starling-7B-alpha scores 8.09 on MT-Bench with GPT-4 as a judge, outperforming every model to date on MT-Bench except OpenAI's GPT-4 and GPT-4 Turbo. We release the ranking dataset [Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar), the reward model [Starling-RM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-RM-7B-alpha), and the language model [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha) on HuggingFace, along with an online demo in the LMSYS [Chatbot Arena](https://chat.lmsys.org). Stay tuned for our forthcoming code and paper, which will provide more details on the whole process.

Starling-LM-7B-alpha is a language model trained from [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5) with this reward model, using the C-RLFT + APA tuning method listed in the table below.

| Model               | Tuning Method | MT-Bench | AlpacaEval | MMLU |
|---------------------|---------------|----------|------------|------|
| GPT-4-Turbo         | ?             | 9.32     | 97.70      |      |
| GPT-4               | SFT + PPO     | 8.99     | 95.28      | 86.4 |
| Starling-7B         | C-RLFT + APA  | 8.09     | 91.99      | 63.9 |
| Claude-2            | ?             | 8.06     | 91.36      | 78.5 |
| GPT-3.5-Turbo       | ?             | 7.94     | 89.37      | 70   |
| Claude-1            | ?             | 7.9      | 88.39      | 77   |
| Tulu-2-dpo-70b      | SFT + DPO     | 7.89     | 95.1       |      |
| Openchat-3.5        | C-RLFT        | 7.81     | 88.51      | 64.3 |
| Zephyr-7B-beta      | SFT + DPO     | 7.34     | 90.60      | 61.4 |
| Llama-2-70b-chat-hf | SFT + PPO     | 6.86     | 92.66      | 63   |
| Neural-chat-7b-v3-1 | SFT + DPO     | 6.84     | 84.53      | 62.4 |
| Tulu-2-dpo-7b       | SFT + DPO     | 6.29     | 85.1       |      |

Following the reward-model training method of [the InstructGPT paper](https://arxiv.org/abs/2203.02155), we remove the last layer of Llama2-7B-Chat and concatenate a linear layer that outputs a scalar for any pair of input prompt and response. We train the reward model on the preference dataset [berkeley-nest/Nectar](https://huggingface.co/datasets/berkeley-nest/Nectar) with the K-wise maximum likelihood estimator proposed in [this paper](https://arxiv.org/abs/2301.11270). The reward model outputs a scalar for any given prompt and response; a response that is more helpful and less harmful will receive a higher reward score. Note that since the preference dataset is based on GPT-4's preferences, the reward model is likely to be biased towards GPT-4's own preferences, including a preference for longer responses and certain response formats.
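
To make this recipe concrete, here is a minimal sketch (in PyTorch/Transformers) of the two ideas above: a Llama trunk whose language-model head is replaced by a scalar-output linear layer, and the K-wise maximum-likelihood (Plackett-Luce) objective. This is an illustrative reconstruction rather than the exact training code we will release; the names `RewardModel` and `kwise_mle_loss` are hypothetical, and the sketch assumes right-padded batches.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class RewardModel(nn.Module):
    """Hypothetical sketch: Llama trunk + scalar reward head."""

    def __init__(self, base_model_name: str):
        super().__init__()
        # AutoModel loads the transformer trunk without the LM head,
        # i.e. it "removes the last layer" of the chat model.
        self.backbone = AutoModel.from_pretrained(base_model_name)
        self.score_head = nn.Linear(self.backbone.config.hidden_size, 1, bias=False)

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids=input_ids,
                               attention_mask=attention_mask).last_hidden_state
        # Score each (prompt, response) sequence by the hidden state of its
        # last non-pad token (assumes right padding).
        last_idx = attention_mask.sum(dim=1) - 1
        last_hidden = hidden[torch.arange(hidden.size(0)), last_idx]
        return self.score_head(last_hidden).squeeze(-1)  # one scalar per sequence

def kwise_mle_loss(rewards: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of a K-wise (Plackett-Luce) ranking.

    `rewards` holds the scalar rewards of K responses to one prompt,
    ordered from best to worst. Step k treats response k as the winner
    among responses k..K-1, mirroring the K-wise MLE of arXiv:2301.11270.
    """
    K = rewards.size(0)
    loss = rewards.new_zeros(())
    for k in range(K - 1):
        loss = loss - (rewards[k] - torch.logsumexp(rewards[k:], dim=0))
    return loss / (K - 1)  # averaged over the K-1 ranking steps
```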

For more detailed discussions, please check out our [blog post](https://starling.cs.berkeley.edu), and stay tuned for our upcoming code and paper!

<!-- Provide the basic links for the model. -->
- **Blog:** https://starling.cs.berkeley.edu/
- **Paper:** Coming soon!
- **Code:** Coming soon!

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details.
In addition, our model is hosted on the LMSYS [Chatbot Arena](https://chat.lmsys.org) for free testing.
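
As a usage illustration only, the snippet below formats one prompt/response pair with the Openchat 3.5-style single-turn template and scores it with the hypothetical `RewardModel` wrapper sketched earlier; please verify the template against the Openchat 3.5 model card, and note that loading the released checkpoint may require repository-specific code rather than this generic wrapper.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("berkeley-nest/Starling-RM-7B-alpha")
reward_model = RewardModel("berkeley-nest/Starling-RM-7B-alpha")  # sketch from above
reward_model.eval()

prompt = "How do I brew a good cup of tea?"
response = "Use freshly boiled water and steep the leaves for three to five minutes."

# Openchat 3.5-style single-turn chat format: the user turn, an end-of-turn
# token, then the assistant turn whose quality we want to score.
text = (f"GPT4 Correct User: {prompt}<|end_of_turn|>"
        f"GPT4 Correct Assistant: {response}<|end_of_turn|>")

inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    score = reward_model(inputs["input_ids"], inputs["attention_mask"])
print(f"reward score: {score.item():.3f}")  # higher = more helpful / less harmful
```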
## License
The dataset, model, and online demo are a research preview intended for non-commercial use only, subject to the data distillation [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, the [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and the [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violations.
## Acknowledgment
We would like to thank Wei-Lin Chiang from Berkeley for detailed feedback on the blog and the project. We would like to thank the [LMSYS Organization](https://lmsys.org/) for their support with the [lmsys-chat-1M](https://huggingface.co/datasets/lmsys/lmsys-chat-1m) dataset, evaluation, and online demo. We would like to thank the open source community for their efforts in providing the datasets and base models we used to develop this project, including but not limited to Anthropic, Llama, Mistral, Hugging Face H4, LMSYS, OpenChat, OpenBMB, Flan, and ShareGPT.
## Citation
```
@misc{starling2023,
    title = {Starling-7B: Improving LLM Helpfulness & Harmlessness with RLAIF},
    url = {},
    author = {Zhu, Banghua and Frick, Evan and Wu, Tianhao and Zhu, Hanlin and Jiao, Jiantao},
    month = {November},
    year = {2023}
}
```