Triangle104 committed on
Commit 5be3149
1 Parent(s): 3ff1a19

Update README.md

Files changed (1): README.md +26 -1
README.md CHANGED
@@ -6,12 +6,37 @@ tags:
 - merge
 - llama-cpp
 - gguf-my-repo
+license: apache-2.0
 ---
 
 # Triangle104/Mahou-1.5-mistral-nemo-12B-lorablated-Q5_K_S-GGUF
 This model was converted to GGUF format from [`nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated`](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
 Refer to the [original model card](https://huggingface.co/nbeerbower/Mahou-1.5-mistral-nemo-12B-lorablated) for more details on the model.
 
+---
+Model details:
+-
+This model was merged using the task arithmetic merge method, with flammenai/Mahou-1.5-mistral-nemo-12B + nbeerbower/Mistral-Nemo-12B-abliterated-LORA as a base.
+Models Merged
+
+The following models were included in the merge:
+Configuration
+
+The following YAML configuration was used to produce this model:
+
+base_model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
+dtype: bfloat16
+merge_method: task_arithmetic
+parameters:
+  normalize: false
+slices:
+- sources:
+  - layer_range: [0, 32]
+    model: flammenai/Mahou-1.5-mistral-nemo-12B+nbeerbower/Mistral-Nemo-12B-abliterated-LORA
+    parameters:
+      weight: 1.0
+
+---
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
 
@@ -50,4 +75,4 @@ Step 3: Run inference through the main binary.
 or
 ```
 ./llama-server --hf-repo Triangle104/Mahou-1.5-mistral-nemo-12B-lorablated-Q5_K_S-GGUF --hf-file mahou-1.5-mistral-nemo-12b-lorablated-q5_k_s.gguf -c 2048
-```
+```
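
The `task_arithmetic` merge method named in the configuration can be sketched as follows. This is a minimal toy illustration of the technique (fine-tuned weights minus base weights form a "task vector", which is scaled and added back onto the base), not mergekit's implementation; the parameter names and values below are hypothetical.

```python
# Toy sketch of a task-arithmetic merge. Real merges operate on model
# tensors; plain Python lists stand in for them here.

def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge fine-tuned models into a base via scaled task vectors."""
    merged = {}
    for name, base_param in base.items():
        # Task vector for each fine-tuned model: its delta from the base,
        # scaled by that model's merge weight.
        deltas = [
            [w * (ft[name][i] - base_param[i]) for i in range(len(base_param))]
            for ft, w in zip(finetuned_models, weights)
        ]
        # Sum the scaled task vectors and add them back onto the base
        # (no normalization, mirroring `normalize: false` in the config).
        total = [sum(d[i] for d in deltas) for i in range(len(base_param))]
        merged[name] = [b + t for b, t in zip(base_param, total)]
    return merged

# One hypothetical parameter, one fine-tuned model, `weight: 1.0` as in
# the YAML above: the merge then reproduces the fine-tuned weights.
base = {"layer.weight": [0.0, 1.0, 2.0]}
ft = {"layer.weight": [0.5, 1.0, 3.0]}
merged = task_arithmetic_merge(base, [ft], weights=[1.0])
print(merged["layer.weight"])  # task vector [0.5, 0.0, 1.0] added to base
```

With several fine-tuned models and fractional weights, the same loop blends multiple task vectors into one set of weights, which is what the single-source configuration above degenerates from.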