Not-For-All-Audiences

Quantized using 200 calibration samples of 8192 tokens each from an RP-oriented PIPPA dataset.

Branches:

  • main -- measurement.json
  • 2.25b6h -- 2.25bpw, 6bit lm_head
  • 3.5b6h -- 3.5bpw, 6bit lm_head
  • 3.7b6h -- 3.7bpw, 6bit lm_head
  • 6b6h -- 6bpw, 6bit lm_head
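Which branch fits your hardware depends mainly on the bpw figure. As a rough rule of thumb (not from the original card), quantized weight size is about parameter count × bpw / 8 bytes; a minimal sketch, assuming Mixtral 8x7B's roughly 46.7B total parameters:

```python
# Rough size estimate for an EXL2 quant: params * bits-per-weight / 8.
# 46.7e9 is Mixtral 8x7B's approximate total parameter count (an assumption,
# not stated in this card); real VRAM usage also needs headroom for the
# KV cache and activations.
MIXTRAL_8X7B_PARAMS = 46.7e9  # assumed total parameter count

def approx_weight_gb(bpw: float, params: float = MIXTRAL_8X7B_PARAMS) -> float:
    """Approximate size of the quantized weights in GB."""
    return params * bpw / 8 / 1e9

for bpw in (2.25, 3.5, 3.7, 6.0):
    print(f"{bpw} bpw ~ {approx_weight_gb(bpw):.1f} GB")
```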

Requires ExllamaV2 version 0.0.11 or later.
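One quick way to check that the installed ExllamaV2 meets this requirement (the comparison helper below is a hypothetical sketch; plain tuple comparison works here because these version strings are purely numeric):

```python
from importlib import metadata

def version_at_least(installed: str, required: str) -> bool:
    """Compare dotted numeric versions like '0.0.11' (hypothetical helper).
    Suffixed versions such as '0.0.11.post1' would need packaging.version
    instead of this simple int split."""
    return tuple(map(int, installed.split("."))) >= tuple(map(int, required.split(".")))

try:
    ver = metadata.version("exllamav2")
    print("exllamav2", ver, "OK" if version_at_least(ver, "0.0.11") else "too old")
except metadata.PackageNotFoundError:
    print("exllamav2 is not installed")
```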

Original model link: Envoid/CybersurferNyandroidLexicat-8x7B

Original model README below.


Warning: This model is experimental and unpredictable

CybersurferNyandroidLexicat-8x7B (I was in a silly mood when I made this edition) is a linear merge of the following models:

  • Verdict-DADA-8x7B (60%)
  • crestf411/daybreak-mixtral-8x7b-v1.0-hf (30%)
  • an experimental unreleased merge (10%)

I find its output as an assistant to be less dry, and it is stable and imaginative in brief roleplay testing. Tested with simple sampling; it requires rerolls, but when it's good, it's good. I can't say how well it will hold up as the context fills, but I was pleasantly surprised.

It definitely has one of the most varied lexicons of any Mixtral-Instruct-based model I've tested so far, with excellent attention to detail with respect to context.

It likes Libra-style prompt formats with [INST] context [/INST] formatting, which can easily be adapted from the format specified in the Libra32B repo by replacing the Alpaca formatting with Mixtral-Instruct formatting.
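A minimal sketch of Mixtral-Instruct-style prompt assembly (the system text below is a placeholder, not the actual Libra prompt, and the helper name is hypothetical):

```python
def mixtral_prompt(system: str, user: str) -> str:
    """Wrap a system prompt and user turn in Mixtral's [INST] ... [/INST] tags.
    Mixtral has no dedicated system role, so the system text is prepended
    inside the first instruction block (a common convention, assumed here)."""
    return f"[INST] {system}\n\n{user} [/INST]"

prompt = mixtral_prompt(
    "You are a roleplay assistant.",  # placeholder, not the Libra prompt
    "Describe the neon skyline.",
)
print(prompt)
```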

As always, testing was done at Q8 (quant not included).
