Update README.md
README.md CHANGED
@@ -9,7 +9,7 @@ pipeline_tag: text-generation
prompt_template: '<s>[INST] {prompt} [/INST]

'
-quantized_by:
+quantized_by: TheBlock
tags:
- finetuned
---
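The `prompt_template` in the metadata above is the standard Mistral Instruct format: the user message is wrapped in `[INST] ... [/INST]` after the `<s>` BOS token. A minimal sketch of filling in the `{prompt}` placeholder from the shell (the example prompt text is made up):

```shell
# Hypothetical example: substitute a user prompt into the template shown above
PROMPT="Write a short poem about llamas"
printf '<s>[INST] %s [/INST]\n' "$PROMPT"
```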
@@ -18,17 +18,17 @@ tags:
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
-<img src="https://i.imgur.com/EBdldam.jpg" alt="
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlockAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/TheBlockai">Chat & support: TheBlock's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlockAI">Want to contribute? TheBlock's Patreon page</a></p>
</div>
</div>
-<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBlock's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

@@ -66,9 +66,7 @@ Here is an incomplete list of clients and libraries that are known to support GG
<!-- repositories-available start -->
## Repositories available

-* [
-* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ)
-* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF)
+* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF)
* [Mistral AI_'s original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2)
<!-- repositories-available end -->

@@ -112,18 +110,18 @@ Refer to the Provided Files table below to see what files use which methods, and

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
-| [mistral-7b-instruct-v0.2.Q2_K.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q3_K_S.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q3_K_M.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q3_K_L.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q4_0.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q4_K_S.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q4_K_M.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q5_0.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q5_K_S.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q5_K_M.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q6_K.gguf](https://huggingface.co/
-| [mistral-7b-instruct-v0.2.Q8_0.gguf](https://huggingface.co/
+| [mistral-7b-instruct-v0.2.Q2_K.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
+| [mistral-7b-instruct-v0.2.Q3_K_S.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
+| [mistral-7b-instruct-v0.2.Q3_K_M.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
+| [mistral-7b-instruct-v0.2.Q3_K_L.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
+| [mistral-7b-instruct-v0.2.Q4_0.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [mistral-7b-instruct-v0.2.Q4_K_S.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
+| [mistral-7b-instruct-v0.2.Q4_K_M.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
+| [mistral-7b-instruct-v0.2.Q5_0.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [mistral-7b-instruct-v0.2.Q5_K_S.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
+| [mistral-7b-instruct-v0.2.Q5_K_M.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
+| [mistral-7b-instruct-v0.2.Q6_K.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
+| [mistral-7b-instruct-v0.2.Q8_0.gguf](https://huggingface.co/TheBlock/Mistral-7B-Instruct-v0.2-GGUF/blob/main/mistral-7b-instruct-v0.2.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

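To make the offloading trade-off concrete, here is a minimal, hypothetical llama.cpp invocation; the binary name `./main`, the layer count in `-ngl 35`, the context size and the prompt are all placeholder assumptions rather than values taken from this README:

```shell
# Hypothetical llama.cpp run: -ngl offloads 35 layers to the GPU,
# shifting that part of the model from system RAM into VRAM
./main -m mistral-7b-instruct-v0.2.Q4_K_M.gguf -ngl 35 -c 4096 \
  -p "<s>[INST] Write a story about llamas [/INST]"
```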
@@ -144,7 +142,7 @@ The following clients/libraries will automatically download models for you, prov

### In `text-generation-webui`

-Under Download Model, you can enter the model repo:
+Under Download Model, you can enter the model repo: TheBlock/Mistral-7B-Instruct-v0.2-GGUF and below it, a specific filename to download, such as: mistral-7b-instruct-v0.2.Q4_K_M.gguf.

Then click Download.

@@ -159,7 +157,7 @@ pip3 install huggingface-hub
Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
-huggingface-cli download
+huggingface-cli download TheBlock/Mistral-7B-Instruct-v0.2-GGUF mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

<details>
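Note that `--local-dir` does not have to be the current directory; a hypothetical variant of the command above that keeps the file in its own folder:

```shell
# Hypothetical variant: place the file in a dedicated folder instead of the current directory
mkdir -p mistral-7b-instruct-v0.2-GGUF
huggingface-cli download TheBlock/Mistral-7B-Instruct-v0.2-GGUF mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir mistral-7b-instruct-v0.2-GGUF --local-dir-use-symlinks False
```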
@@ -168,7 +166,7 @@ huggingface-cli download TheBloke/Mistral-7B-Instruct-v0.2-GGUF mistral-7b-instr
You can also download multiple files at once with a pattern:

```shell
-huggingface-cli download
+huggingface-cli download TheBlock/Mistral-7B-Instruct-v0.2-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
@@ -182,7 +180,7 @@ pip3 install hf_transfer
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
-HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download
+HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBlock/Mistral-7B-Instruct-v0.2-GGUF mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
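Following the Windows note above, a hypothetical Command Prompt session would run the same download as two separate commands:

```shell
REM Hypothetical Windows Command Prompt version of the one-liner above
set HF_HUB_ENABLE_HF_TRANSFER=1
huggingface-cli download TheBlock/Mistral-7B-Instruct-v0.2-GGUF mistral-7b-instruct-v0.2.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```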
@@ -291,7 +289,7 @@ Here are guides on using llama-cpp-python and ctransformers with LangChain:

For further support, and discussions on these models and AI in general, join us at:

-[
+[TheBlock AI's Discord server](https://discord.gg/TheBlokeai)

## Thanks, and how to contribute
