aashish1904 committed
Commit b747356
1 Parent(s): abc98e6

Upload README.md with huggingface_hub

Files changed (1): README.md (+188, -0)
---
library_name: transformers
tags:
- mergekit
- merge
base_model:
- nbeerbower/Gemma2-Gutenberg-Doppel-9B
- ifable/gemma-2-Ifable-9B
- unsloth/gemma-2-9b-it
- wzhouad/gemma-2-9b-it-WPO-HB
model-index:
- name: Gemma-2-Ataraxy-v3i-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 42.03
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 38.24
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 0.15
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 10.4
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 1.76
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 35.18
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=lemon07r/Gemma-2-Ataraxy-v3i-9B
      name: Open LLM Leaderboard
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Gemma-2-Ataraxy-v3i-9B-GGUF
This is a quantized version of [lemon07r/Gemma-2-Ataraxy-v3i-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3i-9B), created using llama.cpp.

# Original Model Card

# Gemma-2-Ataraxy-v3i-9B

Another experimental model, in the vein of Advanced 2.1, but here we replace the SimPO model used in the original recipe with a different SimPO model, ifable, which was fine-tuned with writing in mind. We also use another writing model, trained on Gutenberg, at a higher density, because SPPO is, on paper, the superior training method to SimPO, and, frankly, ifable is finicky to work with and can end up a little too strong, or heavy, in merges. It is a very strong writer, but it introduced quite a bit of slop in v2.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## GGUF

https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v3i-9B-Q8_0-GGUF

## Merge Details
### Merge Method

This model was merged using the della merge method using [unsloth/gemma-2-9b-it](https://huggingface.co/unsloth/gemma-2-9b-it) as a base.

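The core idea of a della-style merge can be sketched in a few lines: for each fine-tuned model, take its delta from the base, keep only a `density` fraction of the delta's entries, rescale the survivors, and add the weight-normalized sum of pruned deltas back onto the base. This is a simplified NumPy sketch on a single tensor, not mergekit's implementation (real DELLA uses magnitude-adaptive stochastic pruning, which `epsilon` controls; here the drop is uniformly random):

```python
import numpy as np

def della_like_merge(base, finetuned, densities, weights, seed=0):
    """Toy sketch of a della-style merge on one weight tensor.

    For each fine-tuned model: take its delta from the base, randomly keep
    roughly `density` of the delta's entries, rescale survivors by 1/density
    to preserve expected magnitude, then add the weight-normalized sum of
    pruned deltas back onto the base (mirroring `normalize: 1.0`).
    """
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base, dtype=float)
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base
        keep = rng.random(delta.shape) < density      # keep ~density fraction
        merged_delta += weight * np.where(keep, delta / density, 0.0)
    return base + merged_delta / sum(weights)         # normalize by weight sum

# Hypothetical toy tensors standing in for one layer of the real models.
base = np.zeros((4, 4))
models = [base + 1.0, base + 2.0, base + 0.5]
merged = della_like_merge(
    base, models, densities=[0.55, 0.35, 0.25], weights=[0.6, 0.6, 0.4]
)
print(merged.shape)  # (4, 4)
```

The densities and weights above mirror the configuration further down; in the real merge this is applied per-parameter across all 42 layers.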
### Models Merged

The following models were included in the merge:
* [nbeerbower/Gemma2-Gutenberg-Doppel-9B](https://huggingface.co/nbeerbower/Gemma2-Gutenberg-Doppel-9B)
* [ifable/gemma-2-Ifable-9B](https://huggingface.co/ifable/gemma-2-Ifable-9B)
* [wzhouad/gemma-2-9b-it-WPO-HB](https://huggingface.co/wzhouad/gemma-2-9b-it-WPO-HB)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: unsloth/gemma-2-9b-it
dtype: bfloat16
merge_method: della
parameters:
  epsilon: 0.1
  int8_mask: 1.0
  lambda: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 42]
    model: unsloth/gemma-2-9b-it
  - layer_range: [0, 42]
    model: wzhouad/gemma-2-9b-it-WPO-HB
    parameters:
      density: 0.55
      weight: 0.6
  - layer_range: [0, 42]
    model: nbeerbower/Gemma2-Gutenberg-Doppel-9B
    parameters:
      density: 0.35
      weight: 0.6
  - layer_range: [0, 42]
    model: ifable/gemma-2-Ifable-9B
    parameters:
      density: 0.25
      weight: 0.4
```
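Assuming `normalize: 1.0` rescales the combined delta by the sum of the per-model weights (mergekit's usual normalization behavior), the raw weights of 0.6 / 0.6 / 0.4 act as relative, not absolute, contributions. A quick check of the effective normalized weights:

```python
# Relative contribution of each merged model once weights are normalized.
weights = {
    "wzhouad/gemma-2-9b-it-WPO-HB": 0.6,
    "nbeerbower/Gemma2-Gutenberg-Doppel-9B": 0.6,
    "ifable/gemma-2-Ifable-9B": 0.4,
}
total = sum(weights.values())
effective = {name: round(w / total, 3) for name, w in weights.items()}
print(effective)
# {'wzhouad/gemma-2-9b-it-WPO-HB': 0.375,
#  'nbeerbower/Gemma2-Gutenberg-Doppel-9B': 0.375,
#  'ifable/gemma-2-Ifable-9B': 0.25}
```

So ifable ends up contributing about a quarter of the merged delta, consistent with the intent of keeping it from dominating the merge.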

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_lemon07r__Gemma-2-Ataraxy-v3i-9B)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.29 |
| IFEval (0-Shot)     | 42.03 |
| BBH (3-Shot)        | 38.24 |
| MATH Lvl 5 (4-Shot) |  0.15 |
| GPQA (0-shot)       | 10.40 |
| MuSR (0-shot)       |  1.76 |
| MMLU-PRO (5-shot)   | 35.18 |
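The reported average is simply the unweighted mean of the six benchmark scores, which is easy to verify:

```python
# Open LLM Leaderboard scores from the table above.
scores = {
    "IFEval (0-Shot)": 42.03,
    "BBH (3-Shot)": 38.24,
    "MATH Lvl 5 (4-Shot)": 0.15,
    "GPQA (0-shot)": 10.40,
    "MuSR (0-shot)": 1.76,
    "MMLU-PRO (5-shot)": 35.18,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 21.29
```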