---
license: cc-by-nc-4.0
tags:
  - not-for-all-audiences
  - mixtral
  - conversational
  - gguf
  - iMat
---

# CybersurferNyandroidLexicat-8x7B-iMat-GGUF

CybersurferNyandroidLexicat quantized from fp16 with love.

Uses the same imat calculation method as the later batch of maid-yuzu-v8-alter-iMat-GGUF.

Legacy quants (e.g. Q5_K_M, Q6_K, etc.) in this repo have all been enhanced with importance matrix calculation. These quants show improved KL-divergence over their static counterparts.

All files are included here for your convenience. No need to clone the entire repo; just pick the quant that's right for you.
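A single quant can be fetched on its own with `huggingface-cli`. This is only a sketch: the repo id and the exact quant filename below are assumptions, so check the repo's file listing for the real names before running it.

```shell
# Install the Hugging Face CLI if needed
pip install -U "huggingface_hub[cli]"

# Download one quant file (repo id and filename are assumptions;
# substitute the actual filename shown in the repo's file listing)
huggingface-cli download \
  InferenceIllusionist/CybersurferNyandroidLexicat-8x7B-iMat-GGUF \
  CybersurferNyandroidLexicat-8x7B-iMat-IQ3_M.gguf \
  --local-dir .
```

`--local-dir .` places the file in the current directory instead of the cache, which keeps the path predictable for loading later.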

For more information on the latest iMatrix quants, see this PR: https://github.com/ggerganov/llama.cpp/pull/5747

Tip: The letter at the end of the quant name indicates its size. Larger sizes offer better quality; smaller sizes run faster.

* IQ3_XS - XS (Extra Small)
* IQ3_S - S (Small)
* IQ3_M - M (Medium)
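Once a quant is downloaded, it can be run with llama.cpp's `main` example. A minimal sketch, assuming llama.cpp is already built and that the filename matches the quant you picked (the filename here is an assumption):

```shell
# Run a short generation against the downloaded quant
# (-m model path, -p prompt, -n max tokens to generate)
./main \
  -m CybersurferNyandroidLexicat-8x7B-iMat-IQ3_M.gguf \
  -p "Hello, who are you?" \
  -n 128 \
  --temp 0.8
```

Smaller quants (IQ3_XS) trade some output quality for lower memory use and faster inference, so the same command works across sizes with only the `-m` path changing.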