Text Generation
Transformers
English
llama

TheBloke committed
Commit 535435c
1 Parent(s): 1e89bca

Upload README.md

Files changed (1):
  1. README.md +64 -47

README.md CHANGED
@@ -16,17 +16,20 @@ quantized_by: TheBloke
 ---
 
 <!-- header start -->
- <div style="width: 100%;">
- <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
- <p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
- <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
 <!-- header end -->
 
 # Orca Mini v3 70B - GGML
@@ -37,6 +40,14 @@ quantized_by: TheBloke
 
 This repo contains GGML format model files for [Pankaj Mathur's Orca Mini v3 70B](https://huggingface.co/psmathur/orca_mini_v3_70b).
 
 GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
 * [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
@@ -48,7 +59,8 @@ GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NV
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML)
 * [Pankaj Mathur's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v3_70b)
 
 ## Prompt template: Orca-Hashes
@@ -61,12 +73,17 @@ GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NV
 {prompt}
 
 ### Assistant:
 ```
 
 <!-- compatibility_ggml start -->
 ## Compatibility
 
- ### Requires llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) or later.
 
 Or one of the other tools and libraries listed above.
@@ -95,52 +112,24 @@ Refer to the Provided Files table below to see what files use which methods, and
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | [orca_mini_v3_70b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
- | [orca_mini_v3_70b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
- | [orca_mini_v3_70b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | [orca_mini_v3_70b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
 | [orca_mini_v3_70b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
- | [orca_mini_v3_70b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
- | [orca_mini_v3_70b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
 | [orca_mini_v3_70b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
 | [orca_mini_v3_70b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
- | [orca_mini_v3_70b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
 | [orca_mini_v3_70b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
- | orca_mini_v3_70b.ggmlv3.q5_1.bin | q5_1 | 5 | 51.76 GB | 54.26 GB | Original quant method, 5-bit. Higher accuracy, slower inference than q5_0. |
- | orca_mini_v3_70b.ggmlv3.q6_K.bin | q6_K | 6 | 56.59 GB | 59.09 GB | New k-quant method. Uses GGML_TYPE_Q8_K - 6-bit quantization - for all tensors |
- | orca_mini_v3_70b.ggmlv3.q8_0.bin | q8_0 | 8 | 73.23 GB | 75.73 GB | Original llama.cpp quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users. |
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
- ### q5_1, q6_K and q8_0 files require expansion from archive
 
- **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the q5_1, q6_K and q8_0 files as multi-part ZIP files. They are not compressed; they are just for storing a .bin file in two parts.
 
- <details>
- <summary>Click for instructions regarding q5_1, q6_K and q8_0 files</summary>
-
- ### q5_1
- Please download:
- * `orca_mini_v3_70b.ggmlv3.q5_1.zip`
- * `orca_mini_v3_70b.ggmlv3.q5_1.z01`
-
- ### q6_K
- Please download:
- * `orca_mini_v3_70b.ggmlv3.q6_K.zip`
- * `orca_mini_v3_70b.ggmlv3.q6_K.z01`
-
- ### q8_0
- Please download:
- * `orca_mini_v3_70b.ggmlv3.q8_0.zip`
- * `orca_mini_v3_70b.ggmlv3.q8_0.z01`
-
- Then extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
- ```
- sudo apt update -y && sudo apt install 7zip
- 7zz x orca_mini_v3_70b.ggmlv3.q6_K.zip
- ```
- </details>
-
- ## How to run in `llama.cpp`
 
 I use the following command line; adjust for your tastes and needs:
@@ -164,6 +153,7 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
 <!-- footer start -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -185,11 +175,13 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
 **Special thanks to**: Aemon Algiz.
 
- **Patreon special mentions**: Ajan Kanaga, David Ziegler, Raymond Fosdick, SuperWojo, Sam, webtim, Steven Wood, knownsqashed, Tony Hughes, Junyu Yang, J, Olakabola, Dan Guido, Stephen Murray, John Villwock, vamX, William Sang, Sean Connelly, LangChain4j, Olusegun Samson, Fen Risland, Derek Yates, Karl Bernard, transmissions 11, Trenton Dambrowitz, Pieter, Preetika Verma, Swaroop Kallakuri, Andrey, Slarti, Jonathan Leane, Michael Levine, Kalila, Joseph William Delisle, Rishabh Srivastava, Deo Leter, Luke Pendergrass, Spencer Kim, Geoffrey Montalvo, Thomas Belote, Jeffrey Morgan, Mandus, ya boyyy, Matthew Berman, Magnesian, Ai Maven, senxiiz, Alps Aficionado, Luke @flexchar, Raven Klaugh, Imad Khwaja, Gabriel Puliatti, Johann-Peter Hartmann, usrbinkat, Spiking Neurons AB, Artur Olbinski, chris gileta, danny, Willem Michiel, WelcomeToTheClub, Deep Realms, alfie_i, Dave, Leonard Tan, NimbleBox.ai, Randy H, Daniel P. Andersen, Pyrater, Will Dee, Elle, Space Cruiser, Gabriel Tamborski, Asp the Wyvern, Illia Dulskyi, Nikolai Manek, Sid, Brandon Frisco, Nathan LeClaire, Edmond Seymore, Enrico Ros, Pedro Madruga, Eugene Pentland, John Detwiler, Mano Prime, Stanislav Ovsiannikov, Alex, Vitor Caleffi, K, biorpg, Michael Davis, Lone Striker, Pierre Kircher, theTransient, Fred von Graf, Sebastain Graf, Vadim, Iucharbius, Clay Pascal, Chadd, Mesiah Bishop, terasurfer, Rainer Wilmers, Alexandros Triantafyllidis, Stefan Sabev, Talal Aujan, Cory Kujawski, Viktor Bowallius, subjectnull, ReadyPlayerEmma, zynix
 
 Thank you to all my generous patrons and donaters!
 
 <!-- footer end -->
 
 # Original model card: Pankaj Mathur's Orca Mini v3 70B
@@ -199,9 +191,33 @@ Thank you to all my generous patrons and donaters!
 
 A Llama2-70b model trained on Orca Style datasets.
 
- #### legal disclaimer:
 
- This model is bound by the usage restrictions of the original Llama-2 model. It comes with no warranty or guarantees of any kind.
 
 ## Evaluation
@@ -219,7 +235,7 @@ Here are the results on metrics used by [HuggingFaceH4 Open LLM Leaderboard](htt
 |**Total Average**|-|**0.722175**||
 
- **P.S. I am actively seeking sponsorship and partnership opportunities. If you're interested, please connect with me at www.linkedin.com/in/pankajam.**
 
 ## Example Usage
@@ -262,6 +278,7 @@ print(tokenizer.decode(output[0], skip_special_tokens=True))
 ```
 
 #### Limitations & Biases:
@@ -271,7 +288,7 @@ Despite diligent efforts in refining the pretraining data, there remains a possi
 Exercise caution and cross-check information when necessary.
 
-
 
 ### Citation:
 ---
 
 <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+ <div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+ <hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Orca Mini v3 70B - GGML
 
 This repo contains GGML format model files for [Pankaj Mathur's Orca Mini v3 70B](https://huggingface.co/psmathur/orca_mini_v3_70b).
 
+ ### Important note regarding GGML files
+ 
+ The GGML format has now been superseded by GGUF. As of August 21st 2023, [llama.cpp](https://github.com/ggerganov/llama.cpp) no longer supports GGML models. Third-party clients and libraries are expected to still support it for a time, but many may also drop support.
+ 
+ Please use the GGUF models instead.
+ 
+ ### About GGML
+ 
 GPU acceleration is now available for Llama 2 70B GGML files, with both CUDA (NVidia) and Metal (macOS). The following clients/libraries are known to work with these files, including with GPU acceleration:
 * [llama.cpp](https://github.com/ggerganov/llama.cpp), commit `e76d630` and later.
 * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI.
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML)
 * [Pankaj Mathur's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/psmathur/orca_mini_v3_70b)
 
 ## Prompt template: Orca-Hashes
 
 {prompt}
 
 ### Assistant:
+ 
 ```
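
For illustration, a prompt string for this template can be assembled in Python. This is a minimal sketch, not part of the model card: the `### System:` and `### User:` headers are assumed from the standard Orca-Hashes layout, since only the tail of the template is visible in this hunk.

```python
# Hypothetical helper: build an Orca-Hashes prompt string.
# The "### System:"/"### User:" headers are assumed (standard Orca-Hashes
# layout); only "{prompt}" and "### Assistant:" appear in the hunk above.
def build_prompt(prompt: str, system: str = "You are a helpful assistant.") -> str:
    return (
        f"### System:\n{system}\n\n"
        f"### User:\n{prompt}\n\n"
        "### Assistant:\n"
    )

print(build_prompt("Tell me about orcas."))
```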
 
 <!-- compatibility_ggml start -->
 ## Compatibility
 
+ ### Works with llama.cpp [commit `e76d630`](https://github.com/ggerganov/llama.cpp/commit/e76d630df17e235e6b9ef416c45996765d2e36fb) until August 21st, 2023
+ 
+ Will not work with `llama.cpp` after commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa).
+ 
+ For compatibility with the latest llama.cpp, please use GGUF files instead.
 
 Or one of the other tools and libraries listed above.
 
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
 | [orca_mini_v3_70b.ggmlv3.q2_K.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q2_K.bin) | q2_K | 2 | 28.59 GB | 31.09 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors. |
 | [orca_mini_v3_70b.ggmlv3.q3_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q3_K_S.bin) | q3_K_S | 3 | 29.75 GB | 32.25 GB | New k-quant method. Uses GGML_TYPE_Q3_K for all tensors |
+ | [orca_mini_v3_70b.ggmlv3.q3_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q3_K_M.bin) | q3_K_M | 3 | 33.04 GB | 35.54 GB | New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
+ | [orca_mini_v3_70b.ggmlv3.q3_K_L.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q3_K_L.bin) | q3_K_L | 3 | 36.15 GB | 38.65 GB | New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K |
 | [orca_mini_v3_70b.ggmlv3.q4_0.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_0.bin) | q4_0 | 4 | 38.87 GB | 41.37 GB | Original quant method, 4-bit. |
 | [orca_mini_v3_70b.ggmlv3.q4_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_K_S.bin) | q4_K_S | 4 | 38.87 GB | 41.37 GB | New k-quant method. Uses GGML_TYPE_Q4_K for all tensors |
+ | [orca_mini_v3_70b.ggmlv3.q4_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_K_M.bin) | q4_K_M | 4 | 41.38 GB | 43.88 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K |
+ | [orca_mini_v3_70b.ggmlv3.q4_1.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q4_1.bin) | q4_1 | 4 | 43.17 GB | 45.67 GB | Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However has quicker inference than q5 models. |
 | [orca_mini_v3_70b.ggmlv3.q5_0.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q5_0.bin) | q5_0 | 5 | 47.46 GB | 49.96 GB | Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference. |
 | [orca_mini_v3_70b.ggmlv3.q5_K_S.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q5_K_S.bin) | q5_K_S | 5 | 47.46 GB | 49.96 GB | New k-quant method. Uses GGML_TYPE_Q5_K for all tensors |
+ | [orca_mini_v3_70b.ggmlv3.q5_K_M.bin](https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML/blob/main/orca_mini_v3_70b.ggmlv3.q5_K_M.bin) | q5_K_M | 5 | 48.75 GB | 51.25 GB | New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K |
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
 
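In the table, each "Max RAM required" figure is the file size plus roughly 2.5 GB of overhead. A back-of-envelope sketch of how offloading shifts that budget follows; the proportional split across Llama 2 70B's 80 layers is an assumption for illustration, not a figure from this card.

```python
# Rough sketch (not from the model card): assumes ~2.5 GB RAM overhead
# (the table's "Max RAM required" minus file size) and that weights move
# to VRAM in proportion to the number of offloaded layers.
def estimate_memory(file_size_gb: float, n_gpu_layers: int, total_layers: int = 80):
    overhead_gb = 2.5
    gpu_fraction = n_gpu_layers / total_layers
    vram_gb = file_size_gb * gpu_fraction               # offloaded weights
    ram_gb = file_size_gb * (1 - gpu_fraction) + overhead_gb
    return ram_gb, vram_gb

# e.g. q3_K_S (29.75 GB) with 40 of 80 layers offloaded:
ram, vram = estimate_memory(29.75, 40)
print(f"~{ram:.1f} GB RAM, ~{vram:.1f} GB VRAM")
```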
+ ## How to run in `llama.cpp`
+ 
+ Make sure you are using `llama.cpp` from commit [dadbed99e65252d79f81101a392d0d6497b86caa](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa) or earlier.
+ 
+ For compatibility with the latest llama.cpp, please use GGUF files instead.
 
 I use the following command line; adjust for your tastes and needs:
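The exact flags are not shown in this hunk. As an illustrative alternative to the raw CLI, a GGML file can also be loaded from Python via llama-cpp-python. This is a hedged sketch assuming a GGML-era release (pre-GGUF, from before late August 2023) that still accepts `n_gqa`, which Llama 2 70B GGML models require:

```python
# Hypothetical sketch using a GGML-era llama-cpp-python release.
# n_gqa=8 is required for Llama 2 70B GGML models; adjust n_gpu_layers
# to your VRAM (0 = CPU only).
from llama_cpp import Llama

llm = Llama(
    model_path="orca_mini_v3_70b.ggmlv3.q4_K_S.bin",  # any file from the table
    n_ctx=4096,
    n_gqa=8,
    n_gpu_layers=40,
)

output = llm(
    "### System:\nYou are a helpful assistant.\n\n"
    "### User:\nTell me about orcas.\n\n"
    "### Assistant:\n",
    max_tokens=256,
    temperature=0.7,
    stop=["### User:"],
)
print(output["choices"][0]["text"])
```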
 
 
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
 <!-- footer start -->
+ <!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
 
 
 **Special thanks to**: Aemon Algiz.
 
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
 
 Thank you to all my generous patrons and donaters!
 
+ And thank you again to a16z for their generous grant.
+ 
 <!-- footer end -->
 
 # Original model card: Pankaj Mathur's Orca Mini v3 70B
 
 
 A Llama2-70b model trained on Orca Style datasets.
 
+ <br>
+ 
+ ![orca-mini](https://huggingface.co/psmathur/orca_mini_v3_70b/resolve/main/orca_minis_small.jpeg)
+ 
+ <br>
+ 
+ **P.S. If you're interested in collaborating, please connect with me at www.linkedin.com/in/pankajam.**
+ 
+ <br>
+ 
+ ### quantized versions
+ 
+ Big thanks to [@TheBloke](https://huggingface.co/TheBloke)
+ 
+ 1) https://huggingface.co/TheBloke/orca_mini_v3_70B-GGML
+ 
+ 2) https://huggingface.co/TheBloke/orca_mini_v3_70B-GPTQ
+ 
+ <br>
+ 
+ #### license disclaimer:
+ 
+ This model is bound by the license & usage restrictions of the original Llama-2 model. It comes with no warranty or guarantees of any kind.
+ 
+ <br>
 
 ## Evaluation
 
 |**Total Average**|-|**0.722175**||
 
+ <br>
 
 ## Example Usage
 
 
 
 ```
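The transformers example above is truncated by the diff; only its final `print` line is visible in the hunk header. As a stand-in, here is a minimal hedged sketch of loading the original fp16 model with transformers. It is not the author's full example, and the generation parameters are illustrative assumptions:

```python
# Minimal illustrative sketch (not the author's full example) of running
# psmathur/orca_mini_v3_70b with transformers. A 70B fp16 model needs
# multiple large GPUs; device_map="auto" spreads the weights across them.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("psmathur/orca_mini_v3_70b")
model = AutoModelForCausalLM.from_pretrained(
    "psmathur/orca_mini_v3_70b",
    torch_dtype=torch.float16,
    device_map="auto",
)

prompt = (
    "### System:\nYou are a helpful assistant.\n\n"
    "### User:\nTell me about orcas.\n\n"
    "### Assistant:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```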
 
 
+ <br>
 
 #### Limitations & Biases:
 
 
 
 Exercise caution and cross-check information when necessary.
 
+ <br>
 
 ### Citation: