Lewdiculous committed
Commit 0e8937b
1 Parent(s): cbbdce0

Update README.md

Files changed (1)
  1. README.md +16 -25
README.md CHANGED
@@ -1,72 +1,63 @@
 ---
 library_name: transformers
 license: other
+datasets:
+- athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED
+- ResplendentAI/Synthetic_Soul_1k
 language:
 - en
 tags:
-- gguf
-- quantized
-- roleplay
-- imatrix
 - mistral
 - merge
 inference: false
-base_model:
-- ResplendentAI/Datura_7B
-- ChaoticNeutrals/Eris_Floramix_DPO_7B
+
+
+
 ---
 
-This repository hosts GGUF-Imatrix quantizations for [Test157t/Eris-Daturamix-7b-v2](https://huggingface.co/Test157t/Eris-Daturamix-7b-v2).
+This repository hosts GGUF-Imatrix quantizations for [Test157t/Eris-Daturamix-7b](https://huggingface.co/Test157t/Eris-Daturamix-7b).
 ```
 Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)
 ```
-To be uploaded:
+
 ```python
 quantization_options = [
-    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q5_K_S", "Q6_K",
+    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q6_K",
     "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS"
 ]
 ```
 
-**This is experimental.**
+The goal is to measure the (hopefully positive) impact of this data for consistent formatting in roleplay chatting scenarios.
 
-For imatrix data generation, kalomaze's `groups_merged.txt` with added roleplay chats was used, you can find it [here](https://huggingface.co/Lewdiculous/Datura_7B-GGUF-Imatrix/blob/main/imatrix-with-rp-format-data.txt).
 
-The goal is to measure the (hopefully positive) impact of this data for consistent formatting in roleplay chatting scenarios.
 
-**Alt-img:**
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/reCtUSmNO6S1mSjLms3kT.png)
 
 **Original model information:**
 
-So this should have been v1 but i done goofed, so heres "v2".
+![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/FtEEuyGni5M-cxkYYBBHw.jpeg)
+
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/O8bRFd7O9TDgecUy-mIWb.png)
 
 The following models were included in the merge:
 * [ResplendentAI/Datura_7B](https://huggingface.co/ResplendentAI/Datura_7B)
-* [ChaoticNeutrals/Eris_Floramix_DPO_7B](https://huggingface.co/ChaoticNeutrals/Eris_Floramix_DPO_7B)
+* [Test157t/Eris-Floramix-7b](https://huggingface.co/Test157t/Eris-Floramix-7b)
 
 ### Configuration
 
-The following YAML configuration was used to produce this model:
-
 ```yaml
 slices:
 - sources:
-  - model: ChaoticNeutrals/Eris_Floramix_DPO_7B
+  - model: Test157t/Eris-Floramix-7b
     layer_range: [0, 32]
   - model: ResplendentAI/Datura_7B
     layer_range: [0, 32]
 merge_method: slerp
-base_model: ChaoticNeutrals/Eris_Floramix_DPO_7B
+base_model: Test157t/Eris-Floramix-7b
 parameters:
   t:
   - filter: self_attn
-    value: [0, 0.5, 0.3, 0.7, 1]
-  - filter: mlp
     value: [1, 0.5, 0.7, 0.3, 0]
   - value: 0.5
 dtype: bfloat16
-```
+```
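
The `Base⇢ GGUF(F16)⇢ Imatrix-Data(F16)⇢ GGUF(Imatrix-Quants)` pipeline in the card maps onto llama.cpp's tooling: convert the checkpoint to an F16 GGUF, compute an importance matrix over calibration text, then produce one quantized GGUF per target in `quantization_options`. A minimal sketch, assuming a local llama.cpp checkout; the binary names, paths, and file names here are illustrative, not taken from this repository:

```python
# Sketch of the Base -> GGUF(F16) -> Imatrix-Data -> Imatrix-Quants pipeline.
# Assumes a local llama.cpp checkout; binary names, paths, and file names
# are illustrative assumptions, not taken from this repository.
quantization_options = [
    "Q4_K_M", "IQ4_XS", "Q5_K_M", "Q6_K",
    "Q8_0", "IQ3_M", "IQ3_S", "IQ3_XXS",
]

def build_commands(model_dir: str, imatrix_txt: str) -> list[str]:
    """Return one shell command per step of the quantization pipeline."""
    f16 = f"{model_dir}/model-F16.gguf"
    cmds = [
        # 1. Convert the HF checkpoint to a full-precision (F16) GGUF.
        f"python llama.cpp/convert.py {model_dir} --outtype f16 --outfile {f16}",
        # 2. Compute the importance matrix over the calibration text.
        f"llama.cpp/imatrix -m {f16} -f {imatrix_txt} -o imatrix.dat",
    ]
    # 3. Emit one imatrix-guided quantization per target format.
    for quant in quantization_options:
        cmds.append(
            f"llama.cpp/quantize --imatrix imatrix.dat "
            f"{f16} {model_dir}/model-{quant}.gguf {quant}"
        )
    return cmds

commands = build_commands("Eris-Daturamix-7b", "imatrix-calibration.txt")
print(len(commands))  # 10: convert + imatrix + 8 quantizations
```

The quantized sizes dropped in this commit (e.g. `Q5_K_S`) simply disappear from the list; everything else in the loop is unchanged.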
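
In the mergekit configuration, `merge_method: slerp` interpolates each pair of weight tensors along the arc between them rather than along a straight line, and the `t` schedule (e.g. `[1, 0.5, 0.7, 0.3, 0]`) sets the blend ratio across layer bands. A minimal sketch of the interpolation itself, on plain Python lists rather than mergekit's tensors (mergekit additionally handles per-filter schedules and degenerate cases):

```python
import math

def slerp(t: float, v0: list[float], v1: list[float]) -> list[float]:
    """Spherical linear interpolation between two weight vectors.

    Illustrative sketch of the slerp idea only; not mergekit's actual
    implementation.
    """
    dot = sum(a * b for a, b in zip(v0, v1))
    norm = math.sqrt(sum(a * a for a in v0)) * math.sqrt(sum(b * b for b in v1))
    omega = math.acos(max(-1.0, min(1.0, dot / norm)))  # angle between vectors
    if omega < 1e-8:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * omega) / math.sin(omega)
    s1 = math.sin(t * omega) / math.sin(omega)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors: both components ≈ 0.7071,
# so the result keeps unit norm, unlike plain linear interpolation.
merged = slerp(0.5, [1.0, 0.0], [0.0, 1.0])
```

With `t=0` the merge returns the base model's weights and with `t=1` the other model's, which is why the config pins `base_model` to one of the two sources.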