Quantized using 200 samples of 8192 tokens from an RP-oriented PIPPA dataset.
Branches:
main
 -- measurement.json
2.25b6h
 -- 2.25bpw, 6bit lm_head
3.5b6h
 -- 3.5bpw, 6bit lm_head
3.7b6h
 -- 3.7bpw, 6bit lm_head
6b6h
 -- 6bpw, 6bit lm_head
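Each bitrate above lives on its own branch, so you can pull just one revision rather than the whole repo. A minimal sketch, assuming the `huggingface-cli` tool is installed; `USER/REPO` is a placeholder, not the actual upload path:

```
# Download only the 2.25bpw quant (stored as a git revision/branch).
# USER/REPO is a placeholder -- substitute this model's actual repo id.
huggingface-cli download USER/REPO --revision 2.25b6h --local-dir ./CybersurferNyandroidLexicat-2.25bpw
```

The same command with `--revision 3.5b6h`, `3.7b6h`, or `6b6h` fetches the other quants.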
Requires ExLlamaV2 version 0.0.11 or later.
Original model link: Envoid/CybersurferNyandroidLexicat-8x7B
Original model README below.
Warning: This model is experimental and unpredictable.
CybersurferNyandroidLexicat-8x7B (I was in a silly mood when I made this edition) is a linear merge of the following models:
crestf411/daybreak-mixtral-8x7b-v1.0-hf 30%
Experimental unreleased merge 10%
I find its output as an assistant less dry, and it is stable and imaginative in brief roleplay testing. Tested with simple sampling; it requires rerolls, but when it's good, it's good. I can't say how well it will hold up as the context fills, but I was pleasantly surprised.
It definitely has one of the most varied lexicons of any Mixtral Instruct-based model I've tested so far, with excellent attention to detail with respect to context.