---
license: llama2
language:
- en
tags:
- not-for-all-audiences
---
# Venus 120b - version 1.1

## Overview
Version 1.1 of the Venus 120b lineup.
## Model Details
- A result of interleaving layers of Sao10K/Euryale-1.3-L2-70B, Xwin-LM/Xwin-LM-70B-V0.1, and migtissera/SynthIA-70B-v1.5 using mergekit.
- The resulting model has 140 layers and approximately 122 billion parameters.
- See mergekit-config.yml for details on the merge method used (an illustrative sketch of this kind of config appears after this list).
- See the `exl2-*` branches for exllama2 quantizations. The 4.85 bpw quant should fit in 80GB of VRAM, and the 3.0 bpw quant should (just barely) fit in 48GB of VRAM with 4k context.
- Inspired by Goliath-120b
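
For readers unfamiliar with mergekit, an interleaved ("passthrough") merge of this kind is typically described by a config that stacks layer ranges from each source model. The sketch below is illustrative only; the layer ranges are placeholders, not the ones used for this model (see mergekit-config.yml for the actual values).

```yaml
# Illustrative passthrough merge: stack alternating layer ranges from the
# three source models. The layer_range values are placeholders, not the
# ranges used for Venus 120b - see mergekit-config.yml for the real config.
slices:
  - sources:
      - model: Sao10K/Euryale-1.3-L2-70B
        layer_range: [0, 20]
  - sources:
      - model: Xwin-LM/Xwin-LM-70B-V0.1
        layer_range: [10, 30]
  - sources:
      - model: migtissera/SynthIA-70B-v1.5
        layer_range: [20, 40]
  # ...further interleaved slices until the stack reaches 140 layers...
merge_method: passthrough
dtype: float16
```

Passthrough merging copies the listed layer ranges verbatim and concatenates them, which is how the stack grows to 140 layers (roughly 122 billion parameters) without any parameter averaging.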
Warning: This model will produce NSFW content!
## Results
Seems to be more coherent than v1.0, likely due to using SynthIA 1.5 instead of 1.2b.