KatyTheCutie committed
Commit bd97c96
1 Parent(s): feecb83

Upload README.md

Files changed (1)
README.md +20 -56
README.md CHANGED
@@ -1,64 +1,28 @@
  ---
- base_model:
- - cgato/Thespis-CurtainCall-7b-v0.2.2
- - mistralai/Mistral-7B-v0.1
- - tavtav/eros-7b-test
- - cgato/Thespis-7b-v0.5-SFTTest-2Epoch
- - NeverSleep/Noromaid-7B-0.4-DPO
- - NurtureAI/neural-chat-7b-v3-1-16k
  tags:
- - mergekit
- - merge
- license: cc-by-nc-4.0
  ---
- # LemonadeRP-4.5.3
-
- This is a merge of pre-trained language models created by KatyTheCutie.
-
- ## Merge Details
- ### Merge Method
-
- This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) as the base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [cgato/Thespis-CurtainCall-7b-v0.2.2](https://huggingface.co/cgato/Thespis-CurtainCall-7b-v0.2.2)
- * [tavtav/eros-7b-test](https://huggingface.co/tavtav/eros-7b-test)
- * [cgato/Thespis-7b-v0.5-SFTTest-2Epoch](https://huggingface.co/cgato/Thespis-7b-v0.5-SFTTest-2Epoch)
- * [NeverSleep/Noromaid-7B-0.4-DPO](https://huggingface.co/NeverSleep/Noromaid-7B-0.4-DPO)
- * [NurtureAI/neural-chat-7b-v3-1-16k](https://huggingface.co/NurtureAI/neural-chat-7b-v3-1-16k)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- base_model: mistralai/Mistral-7B-v0.1
- dtype: float16
- merge_method: task_arithmetic
- slices:
- - sources:
-   - layer_range: [0, 32]
-     model: mistralai/Mistral-7B-v0.1
-   - layer_range: [0, 32]
-     model: NeverSleep/Noromaid-7B-0.4-DPO
-     parameters:
-       weight: 0.37
-   - layer_range: [0, 32]
-     model: cgato/Thespis-CurtainCall-7b-v0.2.2
-     parameters:
-       weight: 0.32
-   - layer_range: [0, 32]
-     model: NurtureAI/neural-chat-7b-v3-1-16k
-     parameters:
-       weight: 0.15
-   - layer_range: [0, 32]
-     model: cgato/Thespis-7b-v0.5-SFTTest-2Epoch
-     parameters:
-       weight: 0.38
-   - layer_range: [0, 32]
-     model: tavtav/eros-7b-test
-     parameters:
-       weight: 0.18
- ```
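To reproduce a merge from a config like the one above, mergekit can be run either via its `mergekit-yaml` CLI or from Python. The snippet below is a minimal, untested sketch: it assumes mergekit's Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) as shown in the mergekit README, and the config path and output directory are placeholders.

```python
# Minimal sketch: applying a mergekit YAML config like the one above.
# Assumes mergekit is installed (pip install mergekit) and exposes
# MergeConfiguration / run_merge / MergeOptions as in its README.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "lemonade-merge.yml"   # placeholder: save the YAML above here
OUTPUT_DIR = "./LemonadeRP-merged"  # placeholder output directory

# Parse the YAML into a mergekit merge configuration.
with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the task-arithmetic merge and write the merged model to OUTPUT_DIR.
run_merge(
    config,
    OUTPUT_DIR,
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
    ),
)
```

The rough CLI equivalent, assuming the standard invocation, would be `mergekit-yaml lemonade-merge.yml ./LemonadeRP-merged`.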
 
  ---
+ license: apache-2.0
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
  tags:
+ - roleplay
  ---
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/653a2392341143f7774424d8/MtdRhFSBULJF4Upqch2gA.png)
+ Lemonade RP 0.1
+
+ 8192 context length.
+
+ A 7B roleplay-focused model; creativity and fewer clichés are the focus of this merge.
+ SillyTavern settings:
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/653a2392341143f7774424d8/tI2lp0Aeveu6KYBeNFilJ.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/653a2392341143f7774424d8/lGhuJiL5jGRwviNr5GcbN.png)
+ Models used in merge:
+ - NeverSleep/Noromaid-7B-0.4-DPO
+ - cgato/Thespis-7b-v0.3-SFTTest-3Epoch 💛
+ - Undi95/Toppy-M-7B
+ - SanjiWatsuki/Kunoichi-7B
+ - Gryphe/MythoMist-7b
+
+ Feedback is always greatly appreciated! <3
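Since the card tags this as a transformers text-generation model with an 8192-token context, a quick loading sketch may help. It is untested, the repo ID is a placeholder for the actual repository name, and the prompt and sampling settings are only illustrative (the SillyTavern screenshots above show the recommended presets).

```python
# Minimal sketch: loading this model for text generation with transformers.
# The repo ID below is a placeholder; substitute the actual model repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "KatyTheCutie/LemonadeRP-4.5.3"  # placeholder repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # load in half precision
    device_map="auto",
)

prompt = "You are a friendly barista at a lemonade stand. A customer walks in.\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```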