---
license: llama3
tags:
  - moe
language:
  - en
---



8bpw/h8 exl2 quantization of xxx777xxxASD/L3-ChaoticSoliloquy-v1.5-4x8B using the default exllamav2 calibration dataset.
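A minimal loading/inference sketch with the exllamav2 Python API, assuming a local download of this repo; the model directory and sampler values below are placeholders, not recommendations:

```python
# Minimal inference sketch using the exllamav2 Python API.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./L3-ChaoticSoliloquy-v1.5-4x8B-8bpw-h8-exl2"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # defer allocation until layers are loaded
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.9
settings.top_p = 0.9

print(generator.generate_simple("Once upon a time,", settings, num_tokens=200))
```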


ORIGINAL CARD:


GGUF / Exl2 quants

Experimental RP-oriented MoE. The idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks. I'm not sure, but it should be better than the first version.

Llama 3 ChaoticSoliloquy-v1.5-4x8B

```yaml
base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B
  - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```
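A merge like this should be reproducible with mergekit's MoE script, assuming the config above is saved locally; the config file name and output directory here are placeholders:

```sh
pip install mergekit
mergekit-moe moe-config.yml ./L3-ChaoticSoliloquy-v1.5-4x8B
```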

Models used

Difference

Vision

llama3_mmproj


Prompt format: Llama 3
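That is, the standard Llama 3 Instruct template (system block optional):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user_message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```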