Update for Transformers GPTQ support

Files changed:
- README.md (+25 -18)
- config.json (+31 -21)
- chronos-33b-GPTQ-4bit--1g.act.order.safetensors → model.safetensors (renamed, +2 -2)
- quantize_config.json (+7 -6)
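Together these changes embed the GPTQ parameters in config.json and give the 4-bit weights the default model.safetensors name, so the repo should load directly through Transformers' native GPTQ integration (roughly transformers 4.32.0 or later, with optimum and auto-gptq available). A minimal loading sketch; the package requirements, prompt, and sampling settings are assumptions rather than part of this commit:

```python
# Sketch only: load this repo through Transformers' built-in GPTQ support.
# Assumes transformers>=4.32.0, optimum, auto-gptq and accelerate are installed,
# and a CUDA GPU with enough VRAM for the ~17 GB of 4-bit weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/chronos-33b-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
# The quantization_config block added to config.json in this commit tells
# Transformers the checkpoint is 4-bit GPTQ, so no extra arguments are needed here.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# See the model card for the expected instruction format.
prompt = "Tell me about alpacas."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```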
README.md
CHANGED
@@ -10,17 +10,20 @@ tags:
 ---
 
 <!-- header start -->
-
-
+<!-- 200823 -->
+<div style="width: auto; margin-left: auto; margin-right: auto">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
 </div>
 <div style="display: flex; justify-content: space-between; width: 100%;">
 <div style="display: flex; flex-direction: column; align-items: flex-start;">
-<p><a href="https://discord.gg/theblokeai">Chat & support:
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
 </div>
 <div style="display: flex; flex-direction: column; align-items: flex-end;">
-<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
+<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
 </div>
 </div>
+<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
+<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
 <!-- header end -->
 
 # Elinas' Chronos 33B GPTQ
@@ -171,6 +174,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
+<!-- 200823 -->
 ## Discord
 
 For further support, and discussions on these models and AI in general, join us at:
@@ -190,12 +194,15 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 * Patreon: https://patreon.com/TheBlokeAI
 * Ko-Fi: https://ko-fi.com/TheBlokeAI
 
-**Special thanks to**:
+**Special thanks to**: Aemon Algiz.
+
+**Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
 
-**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
 
 Thank you to all my generous patrons and donaters!
 
+And thank you again to a16z for their generous grant.
+
 <!-- footer end -->
 
 # Original model card: Elinas' Chronos 33B
@@ -221,7 +228,7 @@ Your instruction or question here.
 [4bit GPTQ Version provided by @TheBloke](https://huggingface.co/TheBloke/chronos-33b-GPTQ)
 
 <!--**Support My Development of New Models**
-<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
+<a href='https://ko-fi.com/Q5Q6MB734' target='_blank'><img height='36' style='border:0px;height:36px;'
 src='https://storage.ko-fi.com/cdn/kofi1.png?v=3' border='0' alt='Support Development' /></a>-->
 
 --
@@ -304,11 +311,11 @@ Hyperparameters for the model architecture
 </tr>
 <tr>
 <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learn rate</th><th>Batch size</th><th>n tokens</th>
-</tr>
+</tr>
 </thead>
-<tbody>
+<tbody>
 <tr>
-<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
+<th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T
 </tr>
 <tr>
 <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T
@@ -318,13 +325,13 @@ Hyperparameters for the model architecture
 </tr>
 <tr>
 <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5.E-04</th><th>4M</th><th>1.4T
-</tr>
+</tr>
 </tbody>
 </table>
 
 *Table 1 - Summary of LLama Model Hyperparameters*
 
-We present our results on eight standard common sense reasoning benchmarks in the table below.
+We present our results on eight standard common sense reasoning benchmarks in the table below.
 <table>
 <thead>
 <tr>
@@ -332,23 +339,23 @@ We present our results on eight standard common sense reasoning benchmarks in th
 </tr>
 <tr>
 <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th>
-</tr>
+</tr>
 </thead>
-<tbody>
-<tr>
+<tbody>
+<tr>
 <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93
-</th>
+</th>
 <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94
 </th>
 <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92
 </th>
-<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
+<tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr>
 </tbody>
 </table>
 *Table 2 - Summary of LLama Model Performance on Reasoning tasks*
 
 
-We present our results on bias in the table below. Note that lower value is better indicating lower bias.
+We present our results on bias in the table below. Note that lower value is better indicating lower bias.
 
 
 | No | Category | FAIR LLM |
config.json
CHANGED
@@ -1,23 +1,33 @@
 {
+  "_name_or_path": "elinas/llama-30b-hf-transformers-4.29",
+  "architectures": [
+    "LlamaForCausalLM"
+  ],
+  "bos_token_id": 1,
+  "eos_token_id": 2,
+  "hidden_act": "silu",
+  "hidden_size": 6656,
+  "initializer_range": 0.02,
+  "intermediate_size": 17920,
+  "max_position_embeddings": 2048,
+  "model_type": "llama",
+  "num_attention_heads": 52,
+  "num_hidden_layers": 60,
+  "pad_token_id": 0,
+  "rms_norm_eps": 1e-06,
+  "tie_word_embeddings": false,
+  "torch_dtype": "float16",
+  "transformers_version": "4.28.1",
+  "use_cache": true,
+  "vocab_size": 32000,
+  "quantization_config": {
+    "bits": 4,
+    "group_size": -1,
+    "damp_percent": 0.01,
+    "desc_act": true,
+    "sym": true,
+    "true_sequential": true,
+    "model_file_base_name": "model",
+    "quant_method": "gptq"
+  }
 }
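The quantization_config block is the key addition for Transformers support: 4-bit GPTQ with group_size -1 (no grouping), desc_act true (act-order), and model_file_base_name "model" pointing at the renamed weights. A small sketch for inspecting those values straight from the Hub; the huggingface_hub call is an assumption, not something this commit requires:

```python
# Sketch: fetch config.json and inspect the GPTQ parameters added in this commit.
import json

from huggingface_hub import hf_hub_download  # assumed to be installed

config_path = hf_hub_download("TheBloke/chronos-33b-GPTQ", "config.json")
with open(config_path) as f:
    quant = json.load(f)["quantization_config"]

print(quant["quant_method"])  # "gptq"
print(quant["bits"])          # 4
print(quant["group_size"])    # -1 -> no grouping
print(quant["desc_act"])      # True -> act-order quantization
```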
chronos-33b-GPTQ-4bit--1g.act.order.safetensors → model.safetensors
RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:762600649e5e909640a6cbbb4e8ce1796977b3ae9dc8788cbeddbd3b93178c47
+size 16940128464
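The rename keeps the same 4-bit weights but gives them the default filename that model_file_base_name now points to, and the updated LFS pointer records the expected SHA-256 and byte size. A sketch for checking a local download against that pointer (the local path is an assumption):

```python
# Sketch: verify a locally downloaded model.safetensors against the LFS pointer above.
import hashlib

EXPECTED_SHA256 = "762600649e5e909640a6cbbb4e8ce1796977b3ae9dc8788cbeddbd3b93178c47"
EXPECTED_SIZE = 16_940_128_464  # bytes, from the pointer file

def matches_pointer(path: str) -> bool:
    digest = hashlib.sha256()
    size = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
            size += len(chunk)
    return size == EXPECTED_SIZE and digest.hexdigest() == EXPECTED_SHA256

print(matches_pointer("model.safetensors"))  # assumed local path
```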
quantize_config.json
CHANGED
@@ -1,8 +1,9 @@
 {
+  "bits": 4,
+  "group_size": -1,
+  "damp_percent": 0.01,
+  "desc_act": true,
+  "sym": true,
+  "true_sequential": true,
+  "model_file_base_name": "model"
 }
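quantize_config.json mirrors the parameters embedded in config.json, with model_file_base_name "model" matching the renamed weights so AutoGPTQ-style loaders can find them. A sketch of that loading path for anyone not using the Transformers integration; the auto_gptq API usage and device string are assumptions, not part of this commit:

```python
# Sketch: load the same checkpoint with AutoGPTQ instead of plain Transformers.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM  # assumes auto-gptq is installed

model_id = "TheBloke/chronos-33b-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
# model_basename="model" matches model_file_base_name and the renamed
# model.safetensors; with this commit it should also be picked up automatically.
model = AutoGPTQForCausalLM.from_quantized(
    model_id,
    model_basename="model",
    use_safetensors=True,
    device="cuda:0",
)

inputs = tokenizer("Tell me about alpacas.", return_tensors="pt").to("cuda:0")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```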