mav23 committed on
Commit
07ffa06
1 Parent(s): 61394ce

Upload folder using huggingface_hub

Files changed (3)
  1. .gitattributes +1 -0
  2. README.md +73 -0
  3. aspire1.2-8b-ties.Q4_0.gguf +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ aspire1.2-8b-ties.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
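The added `.gitattributes` rule routes the GGUF file through Git LFS alongside the existing patterns. A minimal sketch of which filenames those patterns capture, approximating gitattributes glob matching with Python's `fnmatch` (real gitattributes matching differs in path-separator and directory handling; the sample filenames are invented):

```python
from fnmatch import fnmatch

# Patterns tracked by LFS in this repo's .gitattributes after the change.
lfs_patterns = ["*.zip", "*.zst", "*tfevents*", "aspire1.2-8b-ties.Q4_0.gguf"]

def is_lfs_tracked(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, pat) for pat in lfs_patterns)

print(is_lfs_tracked("aspire1.2-8b-ties.Q4_0.gguf"))  # True
print(is_lfs_tracked("README.md"))                    # False
```

Matching files are stored as small pointer files in Git, with the actual bytes uploaded to LFS storage.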
README.md ADDED
@@ -0,0 +1,73 @@
+ ---
+ base_model:
+ - cgato/L3-TheSpice-8b-v0.8.3
+ - kloodia/lora-8b-medic
+ - NousResearch/Hermes-3-Llama-3.1-8B
+ - kloodia/lora-8b-physic
+ - ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
+ - Blackroot/Llama-3-8B-Abomination-LORA
+ - Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
+ - kloodia/lora-8b-bio
+ - NousResearch/Meta-Llama-3-8B
+ - DreadPoor/Nothing_to_see_here_-_Move_along
+ - hikikomoriHaven/llama3-8b-hikikomori-v0.4
+ - arcee-ai/Llama-3.1-SuperNova-Lite
+ - Blackroot/Llama3-RP-Lora
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
+ * [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
+ * [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)
+ * [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) + [kloodia/lora-8b-bio](https://huggingface.co/kloodia/lora-8b-bio)
+ * [DreadPoor/Nothing_to_see_here_-_Move_along](https://huggingface.co/DreadPoor/Nothing_to_see_here_-_Move_along) + [hikikomoriHaven/llama3-8b-hikikomori-v0.4](https://huggingface.co/hikikomoriHaven/llama3-8b-hikikomori-v0.4)
+ * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) + [Blackroot/Llama3-RP-Lora](https://huggingface.co/Blackroot/Llama3-RP-Lora)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
+     parameters:
+       weight: 1
+   - model: arcee-ai/Llama-3.1-SuperNova-Lite+Blackroot/Llama3-RP-Lora
+     parameters:
+       weight: 1
+   - model: NousResearch/Hermes-3-Llama-3.1-8B+kloodia/lora-8b-physic
+     parameters:
+       weight: 1
+   - model: cgato/L3-TheSpice-8b-v0.8.3+kloodia/lora-8b-medic
+     parameters:
+       weight: 1
+   - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+Blackroot/Llama-3-8B-Abomination-LORA
+     parameters:
+       weight: 1
+   - model: DreadPoor/Nothing_to_see_here_-_Move_along+hikikomoriHaven/llama3-8b-hikikomori-v0.4
+     parameters:
+       weight: 1
+
+ merge_method: ties
+ base_model: NousResearch/Meta-Llama-3-8B
+ parameters:
+   density: 1
+   normalize: true
+   int8_mask: true
+ dtype: bfloat16
+ ```
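The `merge_method: ties` setting in the config above applies the TIES procedure from the linked paper: trim low-magnitude task-vector entries, elect a per-parameter sign by weighted majority, then average only the deltas that agree with that sign. A toy numpy sketch of that procedure on a single parameter tensor (this is an illustration of the algorithm, not mergekit's actual implementation; the tensor values below are invented, and `density: 1` in the config means the trim step keeps everything):

```python
import numpy as np

def ties_merge(base, finetuned, density=1.0, weights=None):
    """Toy TIES merge of one parameter tensor: trim, elect sign, merge."""
    weights = weights or [1.0] * len(finetuned)
    deltas = [ft - base for ft in finetuned]  # task vectors

    # 1) Trim: keep only the top-`density` fraction of entries by
    #    magnitude (density=1 keeps everything, as in the config above).
    if density < 1.0:
        for i, d in enumerate(deltas):
            k = max(int(round(density * d.size)), 1)
            thresh = np.sort(np.abs(d).ravel())[-k]  # k-th largest magnitude
            deltas[i] = np.where(np.abs(d) >= thresh, d, 0.0)

    # 2) Elect: choose a sign per entry from the weighted sum of deltas.
    sign = np.sign(sum(w * d for w, d in zip(weights, deltas)))

    # 3) Merge: weighted mean over the deltas whose sign matches the
    #    elected sign (the weighted averaging reflects `normalize: true`).
    num = sum(w * np.where(np.sign(d) == sign, d, 0.0)
              for w, d in zip(weights, deltas))
    den = sum(w * (np.sign(d) == sign) for w, d in zip(weights, deltas))
    merged = num / np.where(den == 0, 1.0, den)
    return base + merged

# Example: two task vectors with equal weight.
# ties_merge(np.zeros(3), [np.array([1., -1., 2.]), np.array([1., 1., 0.])])
# -> array([1., 0., 2.])  (conflicting middle entry cancels out)
```

With six equally weighted models, as configured here, the sign election resolves conflicting parameter updates instead of letting them average toward noise.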
aspire1.2-8b-ties.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:21617ab55b82ada5c511d07ceafddc9ed1c9b6b5e37d68f7e1388f229394924b
+ size 4661214624
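The three lines above are the Git LFS pointer that stands in for the GGUF file in the Git history; the actual model bytes live in LFS storage, identified by the SHA-256 oid and size. A small sketch of parsing such a pointer per the git-lfs pointer-file spec (this parser is an illustration, not the official tooling):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # Split "sha256:<hex>" into algorithm and digest for convenience.
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,
        "oid": digest,
        "size_bytes": int(fields["size"]),
    }

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:21617ab55b82ada5c511d07ceafddc9ed1c9b6b5e37d68f7e1388f229394924b
size 4661214624
"""
info = parse_lfs_pointer(pointer)
print(info["size_bytes"] / 1024**3)  # ~4.34 GiB for the Q4_0 quant
```

The oid lets a client verify the downloaded file's integrity by hashing it and comparing against the pointer.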