---
inference: false
language:
- en
license: llama2
model_creator: Stability AI
model_link: https://huggingface.co/stabilityai/StableBeluga2
model_name: StableBeluga2
model_type: llama
pipeline_tag: text-generation
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# StableBeluga2 - GGML
- Model creator: [Stability AI](https://huggingface.co/stabilityai)
- Original model: [StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2)

## Description

This repo contains GGML format model files for [Stability AI's StableBeluga2](https://huggingface.co/stabilityai/StableBeluga2).

### Important note regarding GGML files

The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.

Please use the GGUF models instead.

### About GGML

GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
* [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), version 1.37 and later. A powerful GGML web UI, especially good for storytelling.
* [LM Studio](https://lmstudio.ai/), a fully featured local GUI with GPU acceleration for both Windows and macOS. Use 0.1.11 or later for macOS GPU acceleration with 70B models.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), version 0.1.77 and later. A Python library with LangChain support and an OpenAI-compatible API server; a minimal loading sketch follows this list.
* [ctransformers](https://github.com/marella/ctransformers), version 0.2.15 and later. A Python library with LangChain support and an OpenAI-compatible API server.
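
As an illustration, here is a minimal sketch of loading one of these files with llama-cpp-python. It is not part of this card: the keyword arguments are assumptions based on that library's 0.1.77-era API, and the file name and generation settings are borrowed from the examples elsewhere in this README.

```python
# A minimal sketch, assuming llama-cpp-python >= 0.1.77 is installed and one
# of the .bin files from this repo has been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="stablebeluga2-70b.ggmlv3.q4_K_M.bin",
    n_ctx=4096,       # context length, matching -c 4096 in the CLI example below
    n_gqa=8,          # grouped-query attention factor, required for Llama 2 70B
    n_gpu_layers=40,  # layers to offload to GPU; set to 0 for CPU-only inference
)

prompt = (
    "### System:\nYou are a story writing assistant.\n\n"
    "### User:\nWrite a story about llamas\n\n### Assistant:\n"
)
result = llm(prompt, max_tokens=512, temperature=0.7, repeat_penalty=1.1)
print(result["choices"][0]["text"])
```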

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/StableBeluga2-70B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/StableBeluga2-70B-GGUF)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML)
* [Stability AI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/stabilityai/StableBeluga2)

## Prompt template: Orca-Hashes

```
### System:
{system_message}

### User:
{prompt}

### Assistant:

```
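
For clarity, here is a small sketch of how that template is typically filled in before being passed to a client. The helper function is purely illustrative and not part of any of the libraries mentioned in this card; `{system_message}` and `{prompt}` are the only slots.

```python
# Illustrative helper: assemble an Orca-Hashes prompt string.
def format_orca_hashes(system_message: str, prompt: str) -> str:
    return (
        f"### System:\n{system_message}\n\n"
        f"### User:\n{prompt}\n\n"
        "### Assistant:\n"
    )

print(format_orca_hashes("You are a helpful assistant.", "Write a story about llamas"))
```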

<!-- compatibility_ggml start -->
## Compatibility

### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023

Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).

For compatibility with latest llama.cpp, please use GGUF files instead.

Or one of the other tools and libraries listed above.

Refer to the Provided Files table below to see what files use which methods, and how.

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [stablebeluga2-70b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
| [stablebeluga2-70b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors. |
| [stablebeluga2-70b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [stablebeluga2-70b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K. |
| [stablebeluga2-70b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
| [stablebeluga2-70b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors. |
| [stablebeluga2-70b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K. |
| [stablebeluga2-70b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, has quicker inference than q5 models. |
| [stablebeluga2-70b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
| [stablebeluga2-70b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors. |
| [stablebeluga2-70b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/StableBeluga2-70B-GGML/blob/main/stablebeluga2-70b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K. |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
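
If you only want one of the files above rather than cloning the whole multi-GB repo, something like the following should work. The `huggingface_hub` library is not mentioned elsewhere in this card, so treat this as a tooling suggestion; the repo id and filename come from the table above.

```python
# A minimal sketch, assuming `pip install huggingface_hub`; downloads a single
# quant file from this repo and prints its local cache path.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="TheBloke/StableBeluga2-70B-GGML",
    filename="stablebeluga2-70b.ggmlv3.q4_K_M.bin",
)
print(local_path)
```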

## How to run in `llama.cpp`

Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.

For compatibility with latest llama.cpp, please use GGUF files instead.

I use the following command line; adjust for your tastes and needs:

```
./main -t 10 -ngl 40 -gqa 8 -m stablebeluga2-70b.ggmlv3.q4_K_M.bin --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### System:\nYou are a story writing assistant.\n\n### User:\nWrite a story about llamas\n\n### Assistant:"
```

Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If you are fully offloading the model to GPU, use `-t 1`.

Change `-ngl 40` to the number of GPU layers you have VRAM for. Use `-ngl 100` to offload all layers to VRAM, if you have a 48GB card, or 2 x 24GB, or similar. Otherwise you can partially offload as many as you have VRAM for, on one or more GPUs.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.

Remember the `-gqa 8` argument, required for Llama 70B models.

Change `-c 4096` to the desired sequence length for this model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md).
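
The RoPE scaling values above follow a simple ratio; this tiny illustrative helper (not part of llama.cpp) makes the arithmetic explicit:

```python
# Illustrative: derive --rope-freq-scale for a target context length,
# given this model's native 4096-token context.
def rope_freq_scale(target_ctx: int, native_ctx: int = 4096) -> float:
    return native_ctx / target_ctx

print(rope_freq_scale(8192))   # 0.5  -> doubled context
print(rope_freq_scale(16384))  # 0.25 -> 4x context
```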

## How to run in `text-generation-webui`

Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Stability AI's StableBeluga2

# Stable Beluga 2

Use [Stable Chat (Research Preview)](https://chat.stability.ai/chat) to test Stability AI's best language models for free.

## Model Description

`Stable Beluga 2` is a Llama2 70B model finetuned on an Orca-style dataset.

Stable Beluga 2 should be used with this prompt format:

```
### System:
This is a system prompt, please behave and help the user.

### User:
Your prompt here

### Assistant:
The output of Stable Beluga 2
```
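
As an illustration, here is a minimal sketch of running the original fp16 model with the `transformers` library using the prompt format above. The model id comes from this card; the generation settings and system prompt are assumptions, and a multi-GPU machine with enough VRAM for a 70B model is assumed.

```python
# A minimal sketch: load the fp16 model across available GPUs and generate
# from an Orca-Hashes formatted prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/StableBeluga2")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/StableBeluga2",
    torch_dtype=torch.float16,
    device_map="auto",
)

system = "This is a system prompt, please behave and help the user."
prompt = f"### System:\n{system}\n\n### User:\nWrite me a poem please\n\n### Assistant:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```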

## Other Beluga Models

[StableBeluga 1 - Delta](https://huggingface.co/stabilityai/StableBeluga1-Delta)
[StableBeluga 13B](https://huggingface.co/stabilityai/StableBeluga-13B)
[StableBeluga 7B](https://huggingface.co/stabilityai/StableBeluga-7B)

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)

Models are learned via supervised fine-tuning on the aforementioned datasets, trained in mixed-precision (BF16), and optimized with AdamW.

Beluga is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Beluga's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Beluga, developers should perform safety testing and tuning tailored to their specific applications of the model.

## How to cite

```bibtex
@misc{StableBelugaModels,
  url={https://huggingface.co/stabilityai/StableBeluga2},
  title={Stable Beluga models},
  author={Mahan, Dakota and Carlow, Ryan and Castricato, Louis and Cooper, Nathan and Laforte, Christian}
}
```

## Citations

```bibtex