
Mythalion 13B - ExLlamaV2

Original model: mythalion-13b

Description

This is my first attempt at quantization. I used only an RP (roleplay) dataset for calibration, which may cause the model to perform worse in other situations. But people who use Mythalion mostly use it for RP anyway.

In any case, it works for RP; I haven't tested its performance in other situations. ExLlamaV2 is great.

The 2.30 bpw quant is designed for 8 GB of VRAM. It is more aggressive and supports at most 2048 tokens of context. If other programs or the system occupy part of your VRAM, lower the allowed context length.
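As a rough sanity check of why 2.30 bpw targets 8 GB cards, here is a back-of-envelope VRAM estimate. The architecture numbers (40 layers, hidden size 5120, fp16 KV cache) are my assumptions for a Llama-style 13B model, not figures from this repo, and the estimate ignores activations and framework overhead:

```python
# Hypothetical VRAM estimate for a 2.30 bpw 13B quant (assumed
# Llama-13B shape: 40 layers, hidden size 5120, fp16 KV cache).

def weight_bytes(n_params: float, bpw: float) -> float:
    """Bytes needed to store the quantized weights."""
    return n_params * bpw / 8


def kv_cache_bytes(n_tokens: int, n_layers: int = 40,
                   hidden: int = 5120, bytes_per_value: int = 2) -> float:
    """fp16 key + value cache: two tensors per layer per token."""
    return 2 * n_layers * hidden * bytes_per_value * n_tokens


GiB = 1024 ** 3
weights = weight_bytes(13e9, 2.30)   # ~3.5 GiB of weights
cache = kv_cache_bytes(2048)         # ~1.6 GiB of KV cache at 2048 context
print(f"weights ≈ {weights / GiB:.2f} GiB, KV cache ≈ {cache / GiB:.2f} GiB")
```

Roughly 3.5 GiB of weights plus 1.6 GiB of cache leaves some headroom on an 8 GB card, which is why the context has to shrink if anything else is using the GPU.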

That said, I wouldn't use it myself, because its quality is much worse than the 4 and 6 bpw quants. It merely works; I'm sharing it in case someone needs it.

