Initial release
- .gitattributes +1 -0
- Magnolia-v1-12B.Q4_K_M.gguf +3 -0
- Magnolia-v1-12B.Q5_K_M.gguf +3 -0
- Magnolia-v1-12B.Q6_K.gguf +3 -0
- Magnolia-v1-12B.Q8_0.gguf +3 -0
- README.md +55 -0
.gitattributes
CHANGED
@@ -4,6 +4,7 @@
 *.bz2 filter=lfs diff=lfs merge=lfs -text
 *.ckpt filter=lfs diff=lfs merge=lfs -text
 *.ftz filter=lfs diff=lfs merge=lfs -text
+*.gguf filter=lfs diff=lfs merge=lfs -text
 *.gz filter=lfs diff=lfs merge=lfs -text
 *.h5 filter=lfs diff=lfs merge=lfs -text
 *.joblib filter=lfs diff=lfs merge=lfs -text
Magnolia-v1-12B.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:76c310b5326107d86216dc48ab206f2e2fdde055907c518b15e74c634b097c5e
size 7477204064
Magnolia-v1-12B.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2095e101b7f3fa992a6cfc33403360e66368a13bbe7d99d8fb435146f56c94b4
size 8727630944
Magnolia-v1-12B.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d79850a5f3b5d5acf4196b6f967feaab9cab97712ace3430a1a6e92aa4f8582
size 10056209504
Magnolia-v1-12B.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:507d73c81da3f3f2316d6251ac49b3a374be7b7492f5b8d10f3011f05e583cda
size 13022368864
README.md
ADDED
@@ -0,0 +1,55 @@
---
base_model:
- grimjim/magnum-consolidatum-v1-12b
- grimjim/mistralai-Mistral-Nemo-Instruct-2407
library_name: transformers
tags:
- mergekit
- merge
pipeline_tag: text-generation
license: apache-2.0
quanted_by: grimjim
---
# Magnolia-v1-12B-GGUF

This repo contains GGUF quants of a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
The base is a merge of two models trained for variety in text generation.
Instruct was added in at low weight in order to increase the steerability of the model;
safety has consequently been reinforced.
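Individual quant files can be fetched with huggingface_hub; a minimal sketch is below. The repo id is assumed from the model name and may need adjusting, and any of the quant filenames listed in this release will work:

```python
from huggingface_hub import hf_hub_download

# Assumed repo id; substitute the actual repository if it differs.
gguf_path = hf_hub_download(
    repo_id="grimjim/Magnolia-v1-12B-GGUF",
    filename="Magnolia-v1-12B.Q4_K_M.gguf",
)
print(gguf_path)  # local path to the downloaded quant
```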

Tested at temperature 0.7 and minP 0.01, with ChatML prompting.
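For reference, those settings map onto a llama-cpp-python call roughly as follows. This is a minimal sketch, not part of the release: the file name, context size, and prompts are placeholders, and parameter names follow llama-cpp-python's API, which may vary by version.

```python
from llama_cpp import Llama

# Load one of the quants from this repo (path and context size are illustrative).
llm = Llama(
    model_path="Magnolia-v1-12B.Q4_K_M.gguf",
    n_ctx=8192,
    chat_format="chatml",  # ChatML prompting, as tested above
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": "Open a scene in a rain-soaked harbor town."},
    ],
    temperature=0.7,
    min_p=0.01,
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```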

Mistral Nemo models tend to have repetition issues in general. For this model at least, various issues can be mitigated somewhat with additional sysprompting, e.g.:
```
No passage shall exceed 10 lines of text, with turns limited to a maximum of 5 lines per speaker to ensure snappy and engaging dialog and action.
Ensure that all punctuation rules are adhered to without the introduction of spurious intervening spaces.
Avoid redundant phrasing and maintain forward narrative progression by utilizing varied sentence structure, alternative word choices, and active voice.
Employ descriptive details judiciously, ensuring they serve a purpose in advancing the story or revealing character or touching upon setting.
```

## Merge Details
### Merge Method

This model was merged using the SLERP merge method.
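SLERP is spherical linear interpolation: parameters of the two models are blended along an arc rather than a straight line, weighted by the factor t. As an illustration only (not mergekit's actual implementation), a per-tensor sketch might look like this; with t = 0.1, as in the configuration below, the result stays close to the base model.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, flattened to vectors."""
    a = v0.flatten().float()
    b = v1.flatten().float()
    # Angle between the tensors, computed on normalized copies.
    dot = torch.clamp(torch.dot(a / (a.norm() + eps), b / (b.norm() + eps)), -1.0, 1.0)
    theta = torch.acos(dot)
    if theta < 1e-4:
        # Nearly colinear tensors: fall back to ordinary linear interpolation.
        merged = (1.0 - t) * a + t * b
    else:
        sin_theta = torch.sin(theta)
        merged = (torch.sin((1.0 - t) * theta) / sin_theta) * a \
               + (torch.sin(t * theta) / sin_theta) * b
    return merged.reshape(v0.shape).to(v0.dtype)
```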

### Models Merged

The following models were included in the merge:
* [grimjim/magnum-consolidatum-v1-12b](https://huggingface.co/grimjim/magnum-consolidatum-v1-12b)
* [grimjim/mistralai-Mistral-Nemo-Instruct-2407](https://huggingface.co/grimjim/mistralai-Mistral-Nemo-Instruct-2407)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
  - model: grimjim/magnum-consolidatum-v1-12b
merge_method: slerp
base_model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
parameters:
  t:
    - value: 0.1
dtype: bfloat16
```