---
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
- roleplay
# - rp
# - uncensored
pipeline_tag: text-generation
inference: false
# language:
# - en
# Reference: ChaoticNeutrals/Layris_9B
# Author: ChaoticNeutrals
# Model: Layris_9B
# Llama.cpp version: b2350
---
> [!TIP]
> **Support:** <br>
> My upload speeds have been cooked and unstable lately. <br>
> Realistically I'd need to move to get a better provider. <br>
> If you **want** and you are able to... <br>
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous) <br>
> I apologize for disrupting your experience.
```python
# Quantization types provided in this repository:
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M",
    "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
]
```
## GGUF-Imatrix quantizations for [ChaoticNeutrals/Layris_9B](https://huggingface.co/ChaoticNeutrals/Layris_9B/).
All credits belong to the author.
If you found these useful, check out the work done with [FantasiaFoundry's GGUF-IQ-Imatrix-Quantization-Script](https://huggingface.co/FantasiaFoundry/GGUF-Quantization-Script).
**Personal note:**
This model should give you fewer refusals, given that it's merged with the unhinged **Layla-V4**.
## What does "Imatrix" mean?
It stands for **Importance Matrix**, a technique used to improve the quality of quantized models. <br>
[[1]](https://github.com/ggerganov/llama.cpp/discussions/5006/) <br>
The **Imatrix** is calculated based on calibration data, and it helps determine the importance of different model activations during the quantization process. The idea is to preserve the most important information during quantization, which can help reduce the loss of model performance and lead to better quality preservation, especially when the calibration data is diverse. <br>
[[2]](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384/)
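As a rough illustration of that process, below is a minimal sketch of generating an importance matrix with llama.cpp's `imatrix` tool; the model and calibration file names are placeholders, not the exact inputs used for this repository:
```python
import subprocess

# Hypothetical paths -- adjust to your llama.cpp build and local files.
LLAMA_CPP_DIR = "./llama.cpp"         # built at tag b2350
BASE_GGUF = "Layris_9B-F16.gguf"      # full-precision GGUF conversion
CALIBRATION_FILE = "calibration.txt"  # diverse calibration text

# Run the imatrix tool: it measures activation importance on the
# calibration data and writes the result to imatrix.dat.
subprocess.run(
    [f"{LLAMA_CPP_DIR}/imatrix",
     "-m", BASE_GGUF,
     "-f", CALIBRATION_FILE,
     "-o", "imatrix.dat"],
    check=True,
)
```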
For the `--imatrix` calibration data, the included `imatrix.dat` file was used.
Using [llama.cpp-b2350](https://github.com/ggerganov/llama.cpp/releases/tag/b2350/):
```
Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
```
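As a sketch of the final step in that pipeline, each entry in `quantization_options` can be produced by passing the importance matrix to llama.cpp's `quantize` binary (renamed `llama-quantize` in newer releases); the file paths here are illustrative:
```python
import subprocess

# Same list as at the top of this card.
quantization_options = [
    "Q4_K_M", "Q4_K_S", "IQ4_XS", "Q5_K_M",
    "Q5_K_S", "Q6_K", "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XS", "IQ3_XXS"
]

for quant in quantization_options:
    subprocess.run(
        ["./llama.cpp/quantize",
         "--imatrix", "imatrix.dat",      # importance matrix from the step above
         "Layris_9B-F16.gguf",            # F16 base GGUF (hypothetical path)
         f"Layris_9B-{quant}-imat.gguf",  # output file (hypothetical name)
         quant],
        check=True,
    )
```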
The new **IQ3_S** quant-option has been shown to perform better than the old Q3_K_S, so I added it instead of the latter. It is only supported in `koboldcpp-1.59.1` or higher.
If you want any specific quantization to be added, feel free to ask.
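For a quick way to try one of these files locally, here is a minimal sketch using the `llama-cpp-python` bindings; the file name is illustrative, so substitute whichever quant you downloaded:
```python
from llama_cpp import Llama

# Load a downloaded quant; the path below is a placeholder.
llm = Llama(
    model_path="Layris_9B-Q4_K_M-imat.gguf",
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

output = llm("Write a short greeting.", max_tokens=64)
print(output["choices"][0]["text"])
```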
## Model card:
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/aK81BYLc8LzspT5h68hET.jpeg)
## Original model information:
# Layris
![image/jpeg](https://i.imgur.com/yRGzsoO.jpeg)
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
* [ChaoticNeutrals/Eris_Remix_7B](https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B)
* [l3utterfly/mistral-7b-v0.1-layla-v4](https://huggingface.co/l3utterfly/mistral-7b-v0.1-layla-v4)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: ChaoticNeutrals/Eris_Remix_7B
layer_range: [0, 20]
- sources:
- model: l3utterfly/mistral-7b-v0.1-layla-v4
layer_range: [12, 32]
merge_method: passthrough
dtype: float16
```
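The passthrough method simply stacks the listed layer slices: 20 layers from **Eris_Remix_7B** followed by 20 layers from **Layla-V4** give a 40-layer model (a standard Mistral 7B has 32), which is where the roughly 9B parameter count comes from. To reproduce a merge like this, the YAML above is passed to mergekit's `mergekit-yaml` entry point; a minimal sketch, assuming mergekit is installed and the config is saved as `layris.yml`:
```python
import subprocess

# Assumes `pip install mergekit` and the YAML above saved as layris.yml.
subprocess.run(
    ["mergekit-yaml", "layris.yml", "./Layris_9B"],
    check=True,
)
```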