aashish1904 committed
Commit 7aa4f5b
Parent(s): 5ec09cc

Upload README.md with huggingface_hub

---
base_model:
- v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
- qwen2
- qwen2.5
- dpo
license: apache-2.0
datasets:
- jondurbin/gutenberg-dpo-v0.1
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno-GGUF

This is a quantized version of [v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno](https://huggingface.co/v000000/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno) created using llama.cpp.
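A minimal sketch of running one of these quants locally with llama.cpp's CLI. The exact quant filename below is an assumption; check the repository's file list on the Hub for the quants that are actually published:

```shell
# Download a single quant file from the repo (filename is an assumption;
# browse the repo on the Hub to see which quant levels exist).
huggingface-cli download QuantFactory/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno-GGUF \
  Qwen2.5-14B-Gutenberg-Instruct-Slerpeno.Q4_K_M.gguf --local-dir .

# Run an interactive generation with llama.cpp's bundled CLI.
llama-cli -m Qwen2.5-14B-Gutenberg-Instruct-Slerpeno.Q4_K_M.gguf \
  -p "Write a short scene in the style of a 19th-century novel." -n 256
```

Any GGUF-compatible runtime (llama.cpp, LM Studio, ollama with a Modelfile) should load these files the same way.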
# Original Model Card

# Qwen2.5-14B-Gutenberg-Instruct-Slerpeno

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/PgoZ5eutiHDfBmuoBuDO9.png)

--------------------------------------------------------------------------

## GGUF from mradermacher!

* [GGUF static](https://huggingface.co/mradermacher/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno-GGUF)
* [GGUF Imatrix](https://huggingface.co/mradermacher/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno-i1-GGUF)

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the SLERP merge method (*sophosympatheia gradient*).

### Models Merged

The following models were included in the merge:
* [v000000/Qwen2.5-14B-Gutenberg-1e-Delta](https://huggingface.co/v000000/Qwen2.5-14B-Gutenberg-1e-Delta)
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Qwen/Qwen2.5-14B-Instruct
merge_method: slerp
base_model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
parameters:
  t:
    - value: [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
dtype: bfloat16
```

*The idea here is that the Gutenberg DPO weights stay at 100% in the input and output layers, while merging smoothly with the base instruct model in the deeper layers to heal loss and increase intelligence.*
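The gradient above can be sketched numerically. This is a minimal illustration of SLERP with a depth-dependent `t`, not mergekit's actual implementation; the layer count (48, Qwen2.5-14B's assumed transformer depth) and the linear interpolation of the anchor list across depth are assumptions for illustration:

```python
import numpy as np

def slerp(a, b, t, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors."""
    a_unit = a / (np.linalg.norm(a) + eps)
    b_unit = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    omega = np.arccos(dot)        # angle between the two weight directions
    if omega < eps:               # nearly parallel: fall back to linear blend
        return (1.0 - t) * a + t * b
    s = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / s) * a + (np.sin(t * omega) / s) * b

# Anchor values from the YAML config; t = 0 keeps the base model
# (Gutenberg-1e-Delta) untouched at the first and last layers.
anchors = [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
num_layers = 48  # assumption: Qwen2.5-14B's transformer depth
per_layer_t = np.interp(np.linspace(0.0, 1.0, num_layers),
                        np.linspace(0.0, 1.0, len(anchors)), anchors)
```

At `t = 0` the merged layer is exactly the base model's layer, which is how the Gutenberg DPO weights survive unchanged at the input/output ends of the stack, while the middle layers blend up to 60% toward the instruct model.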