Triangle104 committed on
Commit de1b20a
1 Parent(s): c717006

Update README.md

Files changed (1)
  1. README.md +50 -0
README.md CHANGED
@@ -12,6 +12,56 @@ tags:
  This model was converted to GGUF format from [`Silvelter/Yomiel-22B`](https://huggingface.co/Silvelter/Yomiel-22B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/Silvelter/Yomiel-22B) for more details on the model.

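The GGUF-my-repo space wraps llama.cpp's own conversion and quantization tooling. For reference, a minimal local equivalent might look like the sketch below; the output filenames and the Q4_K_M quantization type are assumptions for illustration, not necessarily what the space produced.

```bash
# Grab llama.cpp for its conversion script and build the quantize tool
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp && cmake -B build && cmake --build build --target llama-quantize

# Convert the original HF checkpoint to a full-precision GGUF file
# (the conversion script's Python requirements must be installed first)
python convert_hf_to_gguf.py /path/to/Yomiel-22B \
  --outfile yomiel-22b-f16.gguf --outtype f16

# Quantize to a smaller type (Q4_K_M chosen here purely as an example)
./build/bin/llama-quantize yomiel-22b-f16.gguf yomiel-22b-q4_k_m.gguf Q4_K_M
```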
+ Merge Method
+ -
+ This model was merged using the della_linear merge method, with ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1 as the base.
+
+ Models Merged
+ -
+ The following models were included in the merge:
+
+ nbeerbower/Mistral-Small-Drummer-22B
+ gghfez/SeminalRP-22b
+ TheDrummer/Cydonia-22B-v1.1
+ anthracite-org/magnum-v4-22b
+
+ Configuration
+ -
+ The following YAML configuration was used to produce this model:
+
+ base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+ parameters:
+   epsilon: 0.04
+   lambda: 1.05
+   int8_mask: true
+   rescale: true
+   normalize: false
+ dtype: bfloat16
+ tokenizer_source: base
+ merge_method: della_linear
+ models:
+   - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
+     parameters:
+       weight: [0.2, 0.3, 0.2, 0.3, 0.2]
+       density: [0.45, 0.55, 0.45, 0.55, 0.45]
+   - model: gghfez/SeminalRP-22b
+     parameters:
+       weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
+       density: [0.6, 0.4, 0.5, 0.4, 0.6]
+   - model: anthracite-org/magnum-v4-22b
+     parameters:
+       weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+       density: [0.7]
+   - model: TheDrummer/Cydonia-22B-v1.1
+     parameters:
+       weight: [0.208, 0.139, 0.139, 0.139, 0.208]
+       density: [0.7]
+   - model: nbeerbower/Mistral-Small-Drummer-22B
+     parameters:
+       weight: [0.33]
+       density: [0.45, 0.55, 0.45, 0.55, 0.45]
+
+ ---
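For context, a configuration like the one above is what mergekit consumes. A minimal sketch of that step, assuming mergekit is installed and the YAML above is saved as `config.yml` (both assumptions on my part, not taken from this commit):

```bash
# Install mergekit from the upstream repository
pip install git+https://github.com/arcee-ai/mergekit.git

# Run the della_linear merge described by config.yml; --cuda is optional and needs a GPU
mergekit-yaml config.yml ./Yomiel-22B --cuda
```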
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
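The diff's context ends here; in the rendered card, this heading is followed by the standard GGUF-my-repo usage instructions. They boil down to something like the sketch below, where the repo and file names are placeholders I have assumed rather than values taken from this commit:

```bash
brew install llama.cpp

# CLI: run a prompt straight from the hub-hosted GGUF file
llama-cli --hf-repo <your-username>/Yomiel-22B-GGUF \
  --hf-file yomiel-22b-q4_k_m.gguf \
  -p "The meaning to life and the universe is"

# Server: expose an OpenAI-compatible endpoint (defaults to port 8080)
llama-server --hf-repo <your-username>/Yomiel-22B-GGUF \
  --hf-file yomiel-22b-q4_k_m.gguf \
  -c 2048
```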