---
license: other
license_name: mrl
language:
- en
tags:
- chat
pipeline_tag: text-generation
library_name: transformers
---
Quantized model => https://huggingface.co/anthracite-org/magnum-v4-123b
**Quantization Details:**
Quantization is done using turboderp's ExLlamaV2 v0.2.3.
I use the default calibration datasets and arguments. The repo also includes a `measurement.json` file, which was used during the quantization process.
For models with bits per weight (BPW) over 6.0, I default to quantizing the `lm_head` layer at 8 bits instead of the standard 6 bits.
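For reference, a conversion along these lines can be scripted as in the sketch below. This is illustrative rather than the exact command used for this repo: the local paths and the 8.0 BPW target are placeholders, and the flag names follow ExLlamaV2's `convert.py` as of the v0.2.x releases, so check the repo for your version.

```python
# Minimal sketch of an ExLlamaV2 EXL2 conversion (paths and the 8.0 bpw target are placeholders).
# Reusing measurement.json lets the conversion skip the measurement pass.
import subprocess

subprocess.run(
    [
        "python", "exllamav2/convert.py",
        "-i", "magnum-v4-123b",           # unquantized source model (local path)
        "-o", "work",                     # scratch directory used during conversion
        "-cf", "magnum-v4-123b-8.0bpw",   # output directory for the finished quant
        "-b", "8.0",                      # target bits per weight
        "-hb", "8",                       # lm_head bits: 8 rather than the default 6, per the note above
        "-m", "measurement.json",         # calibration measurement included in this repo
    ],
    check=True,
)
```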
---
**Who are you? What's with these weird BPWs on [insert model here]?**
I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, specifically tailored for 48GB VRAM setups. My rig runs 2 x RTX 3090s alongside a Ryzen APU; the APU handles desktop output, so none of the 3090s' VRAM is wasted on the display. I use TabbyAPI for inference, targeting context sizes between 32K and 64K.
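If you want to sanity-check that a given quant fits your VRAM at a particular context length without spinning up TabbyAPI, the `exllamav2` Python API can load it directly. This is a rough sketch assuming exllamav2 v0.2.x and a local download of the quant; the placeholder path, the 32K context, and the Q4 KV cache are example choices, not requirements.

```python
# Rough sketch: load an EXL2 quant with exllamav2 directly (assumes exllamav2 v0.2.x installed).
# The model path, 32K context, and Q4 cache are illustrative.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_Q4, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("./magnum-v4-123b-exl2")  # local download of this repo (placeholder path)
config.max_seq_len = 32768                         # 32K context, the low end of the range above

model = ExLlamaV2(config)
cache = ExLlamaV2Cache_Q4(model, lazy=True)        # quantized KV cache keeps long contexts cheaper
model.load_autosplit(cache, progress=True)         # split layers across both GPUs automatically

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Hello!", max_new_tokens=32))
```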
Every model I upload includes a `config.yml` file with my ideal TabbyAPI settings. If you're using my config, don’t forget to set `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync` to save some VRAM.
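The allocator variable has to be present before PyTorch initializes CUDA, so export it in the shell that launches TabbyAPI, or, if you wrap the launch in Python, set it before anything imports `torch`. A minimal illustration:

```python
# The allocator backend is read when CUDA is initialized, so set the variable first.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "backend:cudaMallocAsync"

import torch  # imported only after the variable is set, so the async allocator is picked up

print(torch.cuda.is_available())
```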