nsfwthrowitaway69 committed d7160cb (parent: c4568e5): Update README.md

README.md:

---
license: llama2
language:
- en
tags:
- not-for-all-audiences
---

# Venus 103b - version 1.2

![image/png](https://cdn-uploads.huggingface.co/production/uploads/655febd724e0d359c1f21096/BSKlxWQSbh-liU8kGz4fF.png)

## Model Details

- A result of interleaving layers of [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B) and [GOAT-AI/GOAT-70B-Storytelling](https://huggingface.co/GOAT-AI/GOAT-70B-Storytelling)
- The resulting model has 120 layers and 103 billion parameters (a rough sanity check of that count is sketched after this list).
- See mergekit-config.yml for details on the merge method used; an illustrative sketch of this style of config also follows the list.
- See the `exl2-*` branches for exllama2 quantizations. The 5.65 bpw quant should fit in 80GB of VRAM, and the 3.35/3.0 bpw quants should fit in 48GB of VRAM (a download sketch follows the list as well).
- Inspired by [Goliath-120b](https://huggingface.co/alpindale/goliath-120b)
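
As a quick, hedged sanity check on the layer and parameter counts above (my own arithmetic, not from the model card): Llama-2-70B has 80 transformer layers, so stacking 120 of them should land near 103B parameters. A minimal sketch, assuming the standard Llama-2-70B shapes:

```python
# Back-of-the-envelope parameter count for a 120-layer Llama-2-70B-style stack.
# Assumed Llama-2-70B shapes: hidden 8192, FFN 28672, vocab 32000,
# GQA with a 1024-wide k/v projection; norm weights ignored (negligible).
hidden, ffn, vocab, kv = 8192, 28672, 32000, 1024

attn = 2 * hidden * hidden + 2 * hidden * kv  # q/o projections + k/v projections
mlp = 3 * hidden * ffn                        # gate, up, and down projections
per_layer = attn + mlp                        # ~0.86B parameters per layer
embed = 2 * vocab * hidden                    # input embeddings + lm head

print(f" 80 layers: {(80 * per_layer + embed) / 1e9:.1f}B")   # ~69.0B ("70B")
print(f"120 layers: {(120 * per_layer + embed) / 1e9:.1f}B")  # ~103.2B
```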
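
For readers unfamiliar with mergekit's passthrough merges, here is an illustrative sketch of what such an interleave config looks like. The slice boundaries below are placeholders, not the values used for this model; the real ones are in mergekit-config.yml in this repo:

```yaml
# Illustrative mergekit passthrough config; layer ranges are placeholders.
# The actual slice boundaries for Venus 103b are in this repo's mergekit-config.yml.
slices:
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B
        layer_range: [0, 20]
  - sources:
      - model: GOAT-AI/GOAT-70B-Storytelling
        layer_range: [10, 30]
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B
        layer_range: [20, 40]
  # ... further alternating slices, chosen so the stack totals 120 layers ...
merge_method: passthrough
dtype: float16
```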
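
And a sketch of pulling one of the quantized branches with huggingface_hub. The repo id and branch name below are my assumptions for illustration; check the repo's branch list for the actual `exl2-*` names:

```python
# Hedged sketch: download one exl2 quant branch with huggingface_hub.
# Both repo_id and revision below are assumptions; verify against the repo.
# Weights alone at 5.65 bpw: ~103e9 * 5.65 / 8 ≈ 73GB, hence the 80GB figure above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nsfwthrowitaway69/Venus-103b-v1.2",  # assumed repo id
    revision="exl2-5.65bpw",                      # assumed exl2-* branch name
)
print(f"downloaded to {local_dir}")
```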

**Warning: This model will produce NSFW content!**

## Results

1. In my limited testing, I've found this model to be the most creative of the 103b merges I've made so far.
2. Seems to tolerate higher temperatures than the previous Venus models.
3. Doesn't seem to suffer from any censorship issues.
4. Does not follow instructions as well as v1.1, but still does a bit better than v1.0.
5. Sometimes has issues with formatting (e.g. not closing asterisks or quotes).

Note that these are obviously just my personal observations; everyone will have their own unique experience depending on their settings and the specific scenarios they're using the model for.