---
license: llama2
language:
  - en
tags:
  - not-for-all-audiences
---

# Venus 103b - version 1.0

![image/png](https://cdn-uploads.huggingface.co/production/uploads/655febd724e0d359c1f21096/BSKlxWQSbh-liU8kGz4fF.png)

## Overview

A smaller version of Venus-120b that uses the same base models.

## Model Details

- A result of interleaving layers of [Sao10K/Euryale-1.3-L2-70B](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [NousResearch/Nous-Hermes-Llama2-70b](https://huggingface.co/NousResearch/Nous-Hermes-Llama2-70b), and [migtissera/SynthIA-70B-v1.5](https://huggingface.co/migtissera/SynthIA-70B-v1.5) using [mergekit](https://github.com/cg123/mergekit).
- The resulting model has 120 layers and approximately 103 billion parameters.
- See mergekit-config.yml for details on the merge method used.
- See the `exl2-*` branches for ExLlamaV2 quantizations. The 5.65 bpw quant should fit in 80GB VRAM, and the 3.35 bpw quant should fit in 48GB VRAM.
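A layer-interleaving ("frankenmerge") like this is typically expressed as a mergekit passthrough config. The sketch below is hypothetical: the layer ranges shown are illustrative only, and the authoritative slice boundaries are in this repo's mergekit-config.yml.

```yaml
# Hypothetical mergekit passthrough config; layer_range values are
# made up for illustration (the real ones are in mergekit-config.yml).
# Each source is a Llama-2-70B fine-tune with 80 transformer layers;
# the slices below stack 40 + 40 + 40 = 120 layers total.
slices:
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B
        layer_range: [0, 40]
  - sources:
      - model: NousResearch/Nous-Hermes-Llama2-70b
        layer_range: [20, 60]
  - sources:
      - model: migtissera/SynthIA-70B-v1.5
        layer_range: [40, 80]
merge_method: passthrough
dtype: float16
```

With `merge_method: passthrough`, no weights are averaged; the listed slices are simply concatenated into a deeper model, which is how 70B bases yield a ~103B-parameter, 120-layer result.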

**Warning: This model will produce NSFW content!**
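The VRAM figures quoted for the quants can be sanity-checked with a rough weights-only estimate (bits per weight times parameter count). This ignores KV cache, activations, and loader overhead, so real usage will be somewhat higher; `est_vram_gb` is a helper written here for illustration.

```python
def est_vram_gb(params_billions: float, bpw: float) -> float:
    """Rough weights-only VRAM estimate in decimal GB.

    params_billions: parameter count in billions (e.g. 103 for this model)
    bpw: bits per weight of the quantization (e.g. 5.65 or 3.35)
    Ignores KV cache, activations, and runtime overhead.
    """
    return params_billions * 1e9 * bpw / 8 / 1e9  # bits -> bytes -> GB

# ~103B parameters at the two quoted quant sizes:
print(round(est_vram_gb(103, 5.65), 1))  # ~72.7 GB -> fits in 80GB with headroom
print(round(est_vram_gb(103, 3.35), 1))  # ~43.1 GB -> fits in 48GB with headroom
```

The margin between the estimate and the card's stated limits (80GB and 48GB) is what absorbs context/KV-cache memory at inference time.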

## Results

Seems to be a bit more coherent than Venus-120b, likely due to using SynthIA v1.5 instead of SynthIA v1.2b.