# Model Card for ohno-8x7B-GGUF
- Model creator: [rAIfle](https://huggingface.co/rAIfle)
- Original model: [ohno-8x7B-fp16](https://huggingface.co/rAIfle/ohno-8x7B-fp16)

<!-- Provide a quick summary of what the model is/does. -->

ohno-8x7B quantized with love.

Starting out with Q5_K_M; taking requests for any other quants.
**All quantizations are based on the original fp16 model.**
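
For anyone who wants to try the quant locally, below is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the GGUF filename, prompt format, and settings are assumptions, so adjust them to the file you actually download.

```python
# Minimal local-inference sketch. Assumed: the GGUF filename and the
# Mixtral-style [INST] prompt format; adjust both to your download.
from llama_cpp import Llama

llm = Llama(
    model_path="ohno-8x7b.Q5_K_M.gguf",  # assumed filename
    n_ctx=4096,                          # context window to allocate
    n_gpu_layers=-1,                     # offload all layers if built with GPU support
)

out = llm("[INST] Write one sentence about quantization. [/INST]", max_tokens=64)
print(out["choices"][0]["text"])
```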

Any feedback is greatly appreciated!

---

# Original Model Card

# ohno-8x7b
This... will either be my magnum opus... or terrible. No in-betweens!

Post-test verdict: it's mostly brain-damaged. Might be my settings or something, idk.
The `./output` mentioned below is my own merge, using an identical recipe to [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B).

# output_merge2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) as the base.
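
For intuition, here is a minimal single-tensor sketch of the method: DARE randomly drops task-vector entries at each model's `density` and rescales the survivors by `1/density`, then a TIES-style sign election keeps only contributions that agree with the dominant sign. This is an illustration under simplifying assumptions, not mergekit's actual implementation.

```python
import torch

def dare_ties_merge(base, finetuned, densities, weights):
    """Toy single-tensor DARE-TIES merge (illustration only)."""
    deltas = []
    for ft, d, w in zip(finetuned, densities, weights):
        delta = ft - base                    # task vector vs. the base model
        mask = (torch.rand_like(delta) < d)  # DARE: randomly keep ~density entries
        deltas.append(w * delta * mask / d)  # rescale survivors by 1/density
    stacked = torch.stack(deltas)
    # TIES-style sign election: keep only contributions that agree with the
    # dominant sign of the summed task vectors.
    elected_sign = torch.sign(stacked.sum(dim=0))
    agree = torch.sign(stacked) == elected_sign
    return base + (stacked * agree).sum(dim=0)
```

With the values from the config below, this would be called per-tensor as `dare_ties_merge(base_t, [t1, t2, t3, t4], [0.66, 0.1, 0.66, 0.15], [1.0, 0.25, 0.5, 0.3])`.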

### Models Merged

The following models were included in the merge (entries of the form `base + adapter` are LoRA adapters applied on top of a base model; see the sketch after this list):
* ./output/ + /ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
* [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [Envoid/Mixtral-Instruct-ITR-8x7B](https://huggingface.co/Envoid/Mixtral-Instruct-ITR-8x7B) + [retrieval-bar/Mixtral-8x7B-v0.1_case-briefs](https://huggingface.co/retrieval-bar/Mixtral-8x7B-v0.1_case-briefs)
* [NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss](https://huggingface.co/NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss)
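
The `base + adapter` entries above are handled by mergekit itself via its `model+lora` path syntax (as in the config below). A rough equivalent with the `peft` library, shown only for illustration (the output path is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base, attach the QLoRA adapter, then fold the adapter weights
# into the base so the result behaves like an ordinary checkpoint.
base = AutoModelForCausalLM.from_pretrained(
    "Envoid/Mixtral-Instruct-ITR-8x7B", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora")
merged = model.merge_and_unload()            # bakes the LoRA deltas into the base weights
merged.save_pretrained("./itr-plus-limarp")  # assumed output path
```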

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./output/+/ai/LLM/tmp/pefts/daybreak-peft/mixtral-8x7b
    parameters:
      density: 0.66
      weight: 1.0
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+retrieval-bar/Mixtral-8x7B-v0.1_case-briefs
    parameters:
      density: 0.1
      weight: 0.25
  - model: Envoid/Mixtral-Instruct-ITR-8x7B+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.66
      weight: 0.5
  - model: NeverSleep/Noromaid-v0.4-Mixtral-Instruct-8x7b-Zloss
    parameters:
      density: 0.15
      weight: 0.3
merge_method: dare_ties
base_model: Envoid/Mixtral-Instruct-ITR-8x7B
dtype: float16
```
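
To reproduce the merge, a config like this can be fed to mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./output_merge2` (the config filename and output directory here are assumptions). Note that the first entry references a local `./output/` merge and a local PEFT path, so the recipe is not reproducible from the Hub alone.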