---
base_model:
- Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
- Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
- tannedbum/L3-Nymeria-Maid-8B
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- tannedbum/L3-Nymeria-8B
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
- migtissera/Llama-3-8B-Synthia-v3.5
- Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
- v000000/L3-8B-Poppy-Sunspice
- Magpie-Align/Llama-3-8B-WizardLM-196K
- Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
- invisietch/EtherealRainbow-v0.3-8B
- crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
- aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
- Casual-Autopsy/Umbral-Mind-6
- ResplendentAI/Nymph_8B
library_name: transformers
tags:
- mergekit
- merge
---

![](https://cdn.discordapp.com/attachments/791342238541152306/1264099835221381251/image.png?ex=669ca436&is=669b52b6&hm=129f56187c31e1ed22cbd1bcdbc677a2baeea5090761d2f1a458c8b1ec7cca4b&)

# QuantFactory/L3-Umbral-Mind-RP-v3.0-8B-GGUF

This is a quantized version of [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B) created using llama.cpp.
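
A minimal sketch of one way to run these quants locally with `huggingface_hub` and `llama-cpp-python`. The GGUF filename and the example prompt below are assumptions for illustration; check the repository's file list for the exact quant name you want.

```python
# Minimal sketch: download one GGUF quant and chat with it locally.
# Assumes `pip install llama-cpp-python huggingface_hub`; the filename below is
# a guess at the repo's naming scheme -- check the file list for the exact name.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/L3-Umbral-Mind-RP-v3.0-8B-GGUF",
    filename="L3-Umbral-Mind-RP-v3.0-8B.Q4_K_M.gguf",  # assumed filename
)

# Load the quant; n_ctx is a conservative context size, adjust to taste.
llm = Llama(model_path=model_path, n_ctx=8192)

# The model card recommends asterisks/quotes RP formatting (example prompt only).
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Mira, a melancholic roleplay partner."},
        {"role": "user", "content": '*I sit down beside you.* "Rough day?"'},
    ],
    max_tokens=256,
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```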

# Original Model Card

<img src="https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3-8B/resolve/main/63073798_p0_master1200.jpg" style="display: block; margin: auto;">
Image by ろ47

# Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
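
If you want to reproduce or tweak a stage of the recipe, here's a minimal sketch of feeding one of the YAML configs from the Secret Sauce section below to mergekit's `mergekit-yaml` command. It assumes mergekit is installed (`pip install mergekit`); the file and output names are placeholders, not part of the original recipe.

```python
# Minimal sketch: run one of the merge configs below with mergekit's CLI.
# Assumes mergekit is installed and the chosen config has been saved to disk;
# the path names here are placeholders for illustration.
import subprocess
from pathlib import Path

config_path = Path("umbral-mind-1-pt1.yaml")    # e.g. the "Umbral-Mind-1-pt.1" YAML below
output_dir = Path("./umbral-mind-1-pt1-merge")  # where the merged weights will be written

# mergekit-yaml takes the config path and an output directory as positional args.
subprocess.run(
    ["mergekit-yaml", str(config_path), str(output_dir)],
    check=True,
)
print(f"Merged model written to {output_dir.resolve()}")
```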

## Merge Details

The goal of this merge was to make an RP model better suited for role-plays with heavy themes such as, but not limited to:
- Mental illness
- Self-harm
- Trauma
- Suicide

I hated how RP models tended to be overly positive and hopeful in role-plays involving such themes, but thanks to [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) this problem has been lessened considerably.

If you're an enjoyer of savior/reverse-savior type role-plays like myself, then this model is for you.

### Usage Info

This model is meant to be used with asterisks/quotes RP formats; any other format is likely to cause issues.

### Quants

* Weighted GGUFs by [mradermacher](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v3.0-8B-i1-GGUF)
* Static GGUFs by [mradermacher](https://huggingface.co/mradermacher/L3-Umbral-Mind-RP-v3.0-8B-GGUF)

### Models Merged

The following models were included in the merge:
* [Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B)
* [Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B)
* [tannedbum/L3-Nymeria-Maid-8B](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B)
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [tannedbum/L3-Nymeria-8B](https://huggingface.co/tannedbum/L3-Nymeria-8B)
* [Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5)
* [Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B](https://huggingface.co/Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B)
* [v000000/L3-8B-Poppy-Sunspice](https://huggingface.co/v000000/L3-8B-Poppy-Sunspice)
* [Magpie-Align/Llama-3-8B-WizardLM-196K](https://huggingface.co/Magpie-Align/Llama-3-8B-WizardLM-196K)
* [Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B](https://huggingface.co/Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B)
* [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)
* [crestf411/L3-8B-sunfall-v0.4-stheno-v3.2](https://huggingface.co/crestf411/L3-8B-sunfall-v0.4-stheno-v3.2)
* [aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [Nitral-AI/Hathor_Tahsin-L3-8B-v0.85](https://huggingface.co/Nitral-AI/Hathor_Tahsin-L3-8B-v0.85)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)

## Secret Sauce

The following YAML configurations were used to produce this model:

### Umbral-Mind-1-pt.1

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-1-pt.2

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: tannedbum/L3-Nymeria-Maid-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: tannedbum/L3-Nymeria-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v2.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-1

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-1-pt.1
  - model: Casual-Autopsy/Umbral-Mind-1-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Umbral-Mind-2-pt.1

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: v000000/L3-8B-Poppy-Sunspice
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-2-pt.2

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-Mopeyfied-Psychology-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v1.0-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-2

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-2-pt.1
  - model: Casual-Autopsy/Umbral-Mind-2-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-2-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Umbral-Mind-3-pt.1

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-3-pt.2

```yaml
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
  - model: Cas-Warehouse/Llama-3-SOVL-MopeyMule-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: crestf411/L3-8B-sunfall-v0.4-stheno-v3.2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Cas-Warehouse/Llama-3-MopeyMule-Blackroot-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v0.3-8B
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Umbral-Mind-3

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-3-pt.1
  - model: Casual-Autopsy/Umbral-Mind-3-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-3-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Umbral-Mind-4

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-1
  - model: Casual-Autopsy/Umbral-Mind-3
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16
```

### Umbral-Mind-5

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-4
  - model: Casual-Autopsy/Umbral-Mind-2
merge_method: slerp
base_model: Casual-Autopsy/Umbral-Mind-4
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
  embed_slerp: true
dtype: bfloat16
```

### Umbral-Mind-6

```yaml
models:
  - model: mergekit-community/Umbral-Mind-5
  - model: Casual-Autopsy/Mopey-Omelette
merge_method: slerp
base_model: mergekit-community/Umbral-Mind-5
parameters:
  t:
    - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
  embed_slerp: true
dtype: bfloat16
```

### Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B

```yaml
models:
  - model: Casual-Autopsy/Umbral-Mind-6
  - model: aifeifei798/llama3-8B-DarkIdol-2.1-Uncensored-32K
    parameters:
      weight: [0.02, -0.01, -0.01, 0.02]
  - model: ResplendentAI/Nymph_8B
    parameters:
      weight: [-0.01, 0.02, 0.02, -0.01]
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      weight: [-0.01, 0.02, 0.02, -0.01]
  - model: Nitral-AI/Hathor_Tahsin-L3-8B-v0.85
    parameters:
      weight: [0.02, -0.01, -0.01, 0.02]
merge_method: task_arithmetic
base_model: Casual-Autopsy/Umbral-Mind-6
parameters:
  normalize: false
dtype: bfloat16
```