Update README.md
README.md
---
license: apache-2.0
language:
- en
tags:
- 32 bit upscale
- full 32 bit precision
- master files
---
[uploads in progress]

<h3> Master Files for Ultra High Quality Remasters of "Psyonic-Cetacean" 20B </h3>

May "Space Whale" swim in the oceans of the universe forever!

This repo contains the full precision (32 bit) master files for 32 bit upscales of:

https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF-imatrix

And

https://huggingface.co/DavidAU/Psyonic-Cetacean-Ultra-Quality-20b-GGUF

Please view either repo for details on the remaster's results, and other important information.

IMPORTANT NOTES:

These are "final" result files of the full precision rebuild, minus the
GGUF and Imatrix level upscaling / adjustments that occur during "GGUFing" processes.

If you use these to create your own GGUFs, please use "outfile" at F32 for best results.

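As a concrete sketch, converting these master files to an F32 GGUF with llama.cpp looks roughly like the following; the script name varies by llama.cpp version, and the local folder and output filename here are only placeholders:

```bash
# Convert the full precision master files to a 32 bit (F32) GGUF.
# "./Psyonic-Cetacean-Ultra-Quality-20b" is a placeholder for the local
# directory holding these master files.
python convert_hf_to_gguf.py ./Psyonic-Cetacean-Ultra-Quality-20b \
  --outtype f32 \
  --outfile psyonic-cetacean-20b-f32.gguf
```

Keeping the "outfile" at F32 means any later quantization step starts from the full precision master rather than a lower precision intermediate.
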
Imatrix processes should use a stable dataset (or datasets) of at least 500 "chunks".
Using smaller datasets may corrupt or reduce the quality of the Imatrix builds.

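A rough sketch of that Imatrix flow with llama.cpp is shown below; the tool names come from current llama.cpp builds (older builds call them "imatrix" and "quantize"), exact flags may differ slightly between versions, and "calibration.txt" is a placeholder for whatever stable dataset you actually use:

```bash
# Build an importance matrix from the F32 GGUF; the tool reports how many
# "chunks" of the calibration text it processes - aim for 500 or more.
./llama-imatrix -m psyonic-cetacean-20b-f32.gguf \
  -f calibration.txt \
  -o psyonic-cetacean-20b.imatrix

# Quantize from the F32 master, applying that importance matrix.
./llama-quantize --imatrix psyonic-cetacean-20b.imatrix \
  psyonic-cetacean-20b-f32.gguf \
  psyonic-cetacean-20b-Q4_K_M.gguf Q4_K_M
```

Any quant type can stand in for Q4_K_M here; as noted below, the gaps between neighbouring quants are more noticeable than usual with these remasters.
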
Due to the precision remaster there will be "greater" distance between each quant - both
non-imatrix and imatrix.

I.e.: the jump in quality, instruction following, "ai brainpower", nuance and output
between Q4 and Q5, and likewise between Q5 and Q6, will be larger than normal.

The same applies to the "Imatrix" quants.

Following the recommendations above will ensure the quality of the upscale is maximized in the GGUFs.

Happy GGUFing, EXL2ing, GPTQing, AWQing, HQQing.