SanjiWatsuki committed
Commit 23fb737
1 Parent(s): 8e844ee

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -2,6 +2,8 @@
 license: cc-by-nc-4.0
 tags:
 - merge
+- not-for-all-audiences
+- nsfw
 ---
 
 ![image/png](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B/resolve/main/macaroni-maid.jpg)
@@ -104,6 +106,4 @@ The DARE TIES merger is intentionally overweight and non-normalized at 1.3 total
 
 Putting it all together, ~60% of the model is "base models" like OpenChat/NeuralChat/Loyal-Piano-M7. ~40% of the model is effectively me trying to extract RP information from existing RP models. The only non-RP model is the Marcoroni base which means that almost 80% of this model is intended for RP.
 
-Not that the benchmarks matter, but if this merger works right, it'll be a high benchmarking 7B that is both smart and strong at RP.
-
-
+Not that the benchmarks matter, but if this merger works right, it'll be a high benchmarking 7B that is both smart and strong at RP.