---
license: other
---
# StableVicuna-13B

This is an HF-format, unquantised model of [CarperAI's StableVicuna 13B](https://huggingface.co/CarperAI/stable-vicuna-13b-delta).

It is the result of merging the deltas from the above repository with the original LLaMA 13B weights.

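For reference, the merge amounts to adding each delta tensor to the corresponding base LLaMA-13B tensor. The sketch below is illustrative only: it assumes both fp16 checkpoints fit in CPU RAM, that parameter names match, and that `path/to/llama-13b-hf` points at HF-format LLaMA-13B weights. The delta-application script provided alongside the delta repository should be preferred.

```python
# Illustrative sketch of applying the StableVicuna delta to base LLaMA-13B.
# Not the official procedure; prefer the apply-delta script shipped with the
# CarperAI delta repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "path/to/llama-13b-hf", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
delta = AutoModelForCausalLM.from_pretrained(
    "CarperAI/stable-vicuna-13b-delta", torch_dtype=torch.float16, low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("CarperAI/stable-vicuna-13b-delta")

# Vicuna-style deltas may use an extended vocabulary; resize the base
# embeddings so tensor shapes line up before adding (assumption).
base.resize_token_embeddings(len(tokenizer))

delta_sd = delta.state_dict()
with torch.no_grad():
    for name, param in base.named_parameters():
        param += delta_sd[name]  # merged = base + delta, elementwise

base.save_pretrained("./stable-vicuna-13b-merged")
tokenizer.save_pretrained("./stable-vicuna-13b-merged")
```
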
# Original StableVicuna-13B model card

## Model Description

StableVicuna-13B is a [Vicuna-13B v0](https://huggingface.co/lmsys/vicuna-13b-delta-v0) model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.

## Model Details

* **Trained by**: [Duy Phung](https://github.com/PhungVanDuy) of [CarperAI](https://carper.ai)
* **Model type**: **StableVicuna-13B** is an auto-regressive language model based on the LLaMA transformer architecture.
* **Language(s)**: English
* **Library**: [trlX](https://github.com/CarperAI/trlx)
* **License for delta weights**: [CC-BY-NC-SA-4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
  * *Note*: License for the base LLaMA model's weights is Meta's [non-commercial bespoke license](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md).
* **Contact**: For questions and comments about the model, visit the [CarperAI](https://discord.com/invite/KgfkCVYHdu) and [StableFoundation](https://discord.gg/stablediffusion) Discord servers.

| Hyperparameter            | Value |
|---------------------------|-------|
| \\(n_\text{parameters}\\) | 13B   |
| \\(d_\text{model}\\)      | 5120  |
| \\(n_\text{layers}\\)     | 40    |
| \\(n_\text{heads}\\)      | 40    |

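These dimensions match a standard LLaMA-13B configuration. As a hedged illustration, they correspond to the following `transformers` `LlamaConfig` fields; the feed-forward and vocabulary sizes shown are assumed LLaMA-13B defaults, not values stated in this card.

```python
# Hedged mapping of the table above onto transformers' LlamaConfig.
# intermediate_size and vocab_size are assumed LLaMA-13B defaults, not values
# stated in this model card.
from transformers import LlamaConfig

config = LlamaConfig(
    hidden_size=5120,         # d_model
    num_hidden_layers=40,     # n_layers
    num_attention_heads=40,   # n_heads
    intermediate_size=13824,  # assumed LLaMA-13B feed-forward width
    vocab_size=32000,         # assumed base LLaMA vocabulary size
)
```
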
## Training

### Training Dataset

StableVicuna-13B is fine-tuned on a mix of three datasets: [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees, in 35 different languages; [GPT4All Prompt Generations](https://huggingface.co/datasets/nomic-ai/gpt4all_prompt_generations), a dataset of 400k prompts and responses generated by GPT-3.5-Turbo; and [Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca), a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.

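All three fine-tuning datasets are hosted on the Hugging Face Hub; a minimal sketch of pulling them with the `datasets` library follows. How they were formatted and mixed for fine-tuning is not described here, so no mixing is shown, and the split names are assumptions.

```python
# Minimal sketch: download the three fine-tuning datasets named above.
# Split names assume the default "train" split on the Hub.
from datasets import load_dataset

oasst1 = load_dataset("OpenAssistant/oasst1", split="train")
gpt4all = load_dataset("nomic-ai/gpt4all_prompt_generations", split="train")
alpaca = load_dataset("tatsu-lab/alpaca", split="train")

print(len(oasst1), len(gpt4all), len(alpaca))
```
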
The reward model used during RLHF was also trained on [OpenAssistant Conversations Dataset (OASST1)](https://huggingface.co/datasets/OpenAssistant/oasst1), along with two other datasets: [Anthropic HH-RLHF](https://huggingface.co/datasets/Anthropic/hh-rlhf), a dataset of preferences about AI assistant helpfulness and harmlessness; and [Stanford Human Preferences Dataset](https://huggingface.co/datasets/stanfordnlp/SHP), a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.

### Training Procedure

`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in [`trlX`](https://github.com/CarperAI/trlx/blob/main/trlx/trainer/accelerate_ppo_trainer.py) with the following configuration:

| Hyperparameter    | Value |
|-------------------|-------|
| num_rollouts      | 128   |
| chunk_size        | 16    |
| ppo_epochs        | 4     |
| init_kl_coef      | 0.1   |
| target            | 6     |
| horizon           | 10000 |
| gamma             | 1     |
| lam               | 0.95  |
| cliprange         | 0.2   |
| cliprange_value   | 0.2   |
| vf_coef           | 1.0   |
| scale_reward      | None  |
| cliprange_reward  | 10    |
| generation_kwargs |       |
| max_length        | 512   |
| min_length        | 48    |
| top_k             | 0.0   |
| top_p             | 1.0   |
| do_sample         | True  |
| temperature       | 1.0   |

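For readers who want to express this configuration in code, the table maps onto trlX's `PPOConfig`. The sketch below is an assumption-laden illustration: the import path and the `ref_mean`/`ref_std` values follow recent trlX releases and may need adjusting, and the surrounding `TRLConfig` (train, model, tokenizer, optimizer, scheduler sections) is omitted.

```python
# Hedged sketch: the PPO hyperparameters above expressed as a trlX PPOConfig.
# Import path and ref_mean/ref_std are assumptions; the full TRLConfig that
# wraps this method section is omitted.
from trlx.models.modeling_ppo import PPOConfig

method = PPOConfig(
    name="PPOConfig",
    num_rollouts=128,
    chunk_size=16,
    ppo_epochs=4,
    init_kl_coef=0.1,
    target=6,
    horizon=10000,
    gamma=1,
    lam=0.95,
    cliprange=0.2,
    cliprange_value=0.2,
    vf_coef=1.0,
    scale_reward=None,
    ref_mean=None,
    ref_std=None,
    cliprange_reward=10,
    gen_kwargs=dict(
        max_length=512,
        min_length=48,
        top_k=0,
        top_p=1.0,
        do_sample=True,
        temperature=1.0,
    ),
)
```
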
## Use and Limitations

### Intended Use

This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial [license](https://creativecommons.org/licenses/by-nc/4.0/).

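As a quick-start illustration, the snippet below loads the merged weights with `transformers` and generates a reply. The `### Human:` / `### Assistant:` prompt template and the sampling settings are assumptions, not part of the original model card; adjust them as needed.

```python
# Illustrative usage sketch; the prompt template and generation settings are
# assumptions, not prescribed by this model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/stable-vicuna-13B-HF"  # this repository, or a local path
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### Human: Write a short poem about the sea.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
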
### Limitations and bias

The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior. See Section 5.1 of the LLaMA [paper](https://arxiv.org/abs/2302.13971). We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.

## Acknowledgements

This work would not have been possible without the support of [Stability AI](https://stability.ai/).

## Citations

```bibtex
@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

```bibtex
@misc{vicuna2023,
  title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
  url = {https://vicuna.lmsys.org},
  author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
  month = {March},
  year = {2023}
}
```

```bibtex
@misc{gpt4all,
  author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
  title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
```

```bibtex
@misc{alpaca,
  author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto},
  title = {Stanford Alpaca: An Instruction-following LLaMA model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
```

```bibtex
@software{leandro_von_werra_2023_7790115,
  author = {Leandro von Werra and
            Alex Havrilla and
            Max reciprocated and
            Jonathan Tow and
            Aman cat-state and
            Duy V. Phung and
            Louis Castricato and
            Shahbuland Matiana and
            Alan and
            Ayush Thakur and
            Alexey Bukhtiyarov and
            aaronrmm and
            Fabrizio Milo and
            Daniel and
            Daniel King and
            Dong Shin and
            Ethan Kim and
            Justin Wei and
            Manuel Romero and
            Nicky Pochinkov and
            Omar Sanseviero and
            Reshinth Adithyan and
            Sherman Siu and
            Thomas Simonini and
            Vladimir Blagojevic and
            Xu Song and
            Zack Witten and
            alexandremuzio and
            crumb},
  title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark Util, T5 ILQL, Tests}},
  month = mar,
  year = 2023,
  publisher = {Zenodo},
  version = {v0.6.0},
  doi = {10.5281/zenodo.7790115},
  url = {https://doi.org/10.5281/zenodo.7790115}
}
```