nsfwthrowitaway69 committed e1cd85e (1 parent: 4accbec)

Update README.md

Files changed (1): README.md (+25 -0)
---
license: llama2
language:
- en
tags:
- not-for-all-audiences
---

# Venus 103b - version 1.0

![image/png](https://cdn-uploads.huggingface.co/production/uploads/655febd724e0d359c1f21096/BSKlxWQSbh-liU8kGz4fF.png)

## Overview

A smaller version of Venus-120b that uses the same base models.

## Model Details

- A result of interleaving layers of [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b), and [migtissera/SynthIA-70B-v1.5](https://huggingface.co/migtissera/SynthIA-70B-v1.5) using [mergekit](https://github.com/cg123/mergekit).
- The resulting model has 120 layers and approximately 103 billion parameters.
- See mergekit-config.yml for details on the merge method used.
- See the `exl2-*` branches for exllama2 quantizations. The 5.65 bpw quant should fit in 80 GB of VRAM, and the 3.35 bpw quant should fit in 48 GB of VRAM.
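
For readers unfamiliar with layer interleaving: mergekit expresses it as a `passthrough` merge over overlapping `layer_range` slices of the source models. The sketch below is purely hypothetical (the slice boundaries and ordering are made up for illustration); the actual configuration used for this model is in mergekit-config.yml in this repo.

```yaml
# Hypothetical passthrough-merge sketch -- NOT the actual config.
# See mergekit-config.yml for the real slice boundaries.
slices:
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B
        layer_range: [0, 20]
  - sources:
      - model: NousResearch/Nous-Hermes-Llama2-70b
        layer_range: [10, 30]
  - sources:
      - model: migtissera/SynthIA-70B-v1.5
        layer_range: [20, 40]
  # ... further overlapping slices until the merged stack reaches 120 layers
merge_method: passthrough
dtype: float16
```

A passthrough merge performs no weight averaging; it simply stacks the selected layer ranges, which is how three 80-layer 70b models can yield a 120-layer, ~103b-parameter result.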

**Warning: This model will produce NSFW content!**
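
As a rough sanity check on the VRAM figures in the model details above, the weight footprint of a quant is just parameter count times bits per weight. This sketch is illustrative only: it ignores quantization overhead, the KV cache, and activation memory, so real usage will be somewhat higher.

```python
# Back-of-the-envelope estimate of exllama2 quant weight sizes for a
# ~103B-parameter model. Ignores KV cache and runtime overhead.

def quant_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight footprint in GB (decimal) for a given bpw."""
    return n_params * bits_per_weight / 8 / 1e9

N_PARAMS = 103e9  # approximate parameter count from the model card

for bpw, budget_gb in [(5.65, 80), (3.35, 48)]:
    size = quant_size_gb(N_PARAMS, bpw)
    print(f"{bpw} bpw -> ~{size:.1f} GB of weights (budget: {budget_gb} GB)")
```

The 5.65 bpw quant works out to roughly 73 GB of weights and the 3.35 bpw quant to roughly 43 GB, leaving some headroom under the 80 GB and 48 GB budgets for context.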

## Results

Seems to be a bit more coherent than Venus-120b, likely due to using SynthIA 1.2b instead of SynthIA 1.5.