Lewdiculous committed
Commit a182a7e
1 Parent(s): 15251fe

Create README.md

Files changed (1): README.md (+72 -0, new file)
---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- roleplay
# - rp
# - uncensored
pipeline_tag: text-generation
inference: false
# language:
# - en
# FILL THE INFORMATION:
# Reference: ChaoticNeutrals/Bepis_9B
# Author: ChaoticNeutrals
# Model: Bepis_9B
# Llama.cpp version: b2329
---

## GGUF-Imatrix quantizations for [ChaoticNeutrals/Bepis_9B](https://huggingface.co/ChaoticNeutrals/Bepis_9B/).

All credits belong to the author.

If you liked these, check out [FantasiaFoundry's GGUF-IQ-Imatrix-Quantization-Script](https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script).

## What does "Imatrix" mean?

It stands for **Importance Matrix**, a technique used to improve the quality of quantized models.
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006/) <br>
The **Imatrix** is calculated from calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can reduce the loss of model performance and lead to better quality preservation, especially when the calibration data is diverse.
[[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384/)

For the `--imatrix` data, the included `imatrix.dat` file was used.
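
If you want to generate an importance matrix yourself, llama.cpp provides an `imatrix` tool for this. A minimal sketch, assuming an F16 GGUF conversion and a plain-text calibration file (both file names below are placeholders, not the files used for this upload):

```sh
# Compute an importance matrix from calibration text
# using the llama.cpp b2329 binaries.
# model-f16.gguf and calibration.txt are placeholder names.
./imatrix -m model-f16.gguf -f calibration.txt -o imatrix.dat
```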

Using [llama.cpp-b2329](https://github.com/ggerganov/llama.cpp/releases/tag/b2329/):

```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
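
As a rough sketch of that pipeline using llama.cpp's own tools (paths and file names are illustrative, not the exact commands used for this upload):

```sh
# 1. Convert the base HF model to GGUF at F16 (input path is a placeholder).
python convert.py ./Bepis_9B --outtype f16 --outfile model-f16.gguf

# 2. Quantize, passing the importance matrix to the quantizer.
./quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_S.gguf IQ3_S
```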

The new **IQ3_S** quant-option has been shown to be better than the old Q3_K_S, so I added that instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher; `1.60` is just around the corner with support for **IQ4_XS**.
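
For example, loading one of the **IQ3_S** files in a supported KoboldCpp version might look like this (the file name and context size are illustrative):

```sh
# Load an IQ3_S quant in koboldcpp-1.59.1 or newer.
# The .gguf file name below is a placeholder.
python koboldcpp.py --model Bepis_9B-IQ3_S-imatrix.gguf --contextsize 8192
```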

If you want any specific quantization to be added, feel free to ask.

<!-- ## Model image: -->

## Original model information:

# Bepis

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/H0_oJhrIEGBIwogB77p5w.jpeg)

A new 9B model from jeiku. This one is smart, proficient at markdown, knows when to stop talking, and is quite soulful. The merge was an equal 3-way split between https://huggingface.co/ChaoticNeutrals/Prodigy_7B, https://huggingface.co/Test157t/Prima-LelantaclesV6-7b, and https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.1

If there's any 7B to 11B merge or finetune you'd like to see, feel free to leave a message.
58
+
59
+ The following YAML configuration was used to produce this model:
60
+
61
+ ```yaml
62
+ slices:
63
+ - sources:
64
+ - model: primathespis
65
+ layer_range: [0, 20]
66
+ - sources:
67
+ - model: prodigalthespis
68
+ layer_range: [12, 32]
69
+ merge_method: passthrough
70
+ dtype: float16
71
+
72
+ ```
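
If you'd like to reproduce a merge like this, the config above is in [mergekit](https://github.com/arcee-ai/mergekit) format. A minimal sketch of running it; note that `primathespis` and `prodigalthespis` look like local intermediate-merge names, so they would need to point at real checkpoints on disk or on the Hub:

```sh
# Run the passthrough merge config with mergekit.
# bepis-config.yaml and the output directory are placeholder names.
mergekit-yaml bepis-config.yaml ./Bepis_9B-merged --cuda
```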