Update README.md
README.md
---

**Who are you? What's with these BPWs on [insert model here]?**
I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, tailored specifically for 48GB VRAM setups. My rig runs 2 x 3090s plus a Ryzen APU that handles desktop output, so no GPU VRAM is wasted on the display. I use TabbyAPI for inference, targeting context sizes between 32K and 64K.
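As a rough back-of-the-envelope check (my own sketch, not part of any quantization pipeline), the weight footprint of an EXL2 quant is approximately parameters × BPW / 8 bytes, which is what makes a given BPW fit or not fit in 48GB alongside the KV cache:

```python
def exl2_weight_size_gb(n_params: float, bpw: float) -> float:
    """Approximate VRAM footprint of the quantized weights alone.

    Ignores KV cache, activations, and per-layer overhead, so a
    48GB (2 x 3090) budget needs headroom on top of this number.
    """
    return n_params * bpw / 8 / 1e9  # bits -> bytes -> GB

# A 70B model at 4.5 BPW needs roughly 39.4 GB for weights,
# leaving the remainder of 48 GB for context / KV cache.
print(round(exl2_weight_size_gb(70e9, 4.5), 1))
```

This is why larger models in this range land at lower BPWs: the weights have to leave enough VRAM free for 32K-64K of context.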
Every model I upload includes a `config.yml` file with my ideal TabbyAPI settings. If you're using my config, don’t forget to set `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync` to save some VRAM.
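A minimal sketch of applying that allocator setting before launch (the TabbyAPI entry-point name and `config.yml` path below are assumptions about a typical checkout; adjust to your install):

```shell
# Switch PyTorch to the async CUDA allocator to reduce
# fragmentation overhead and reclaim some VRAM.
export PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync

# Then launch TabbyAPI from its checkout, e.g.
# (entry-point name is an assumption, adjust as needed):
# python main.py --config config.yml

echo "$PYTORCH_CUDA_ALLOC_CONF"
```

The variable must be set in the same environment that starts the server; setting it after PyTorch has initialized CUDA has no effect.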