DBMe committed
Commit 2a14c3a (1 parent: 382f3ac)

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -18,6 +18,6 @@ For models with bits per weight (BPW) over 6.0, I default to quantizing the `lm_
 ---
 
 **Who are you? What's with these weird BPWs on [insert model here]?**
-I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, specifically tailored for 48GB VRAM setups. My rig features 2 x 3090s with a Ryzen APU (used solely for desktop output—no VRAM wasted on the GPUs). I use TabbyAPI for inference, targeting context sizes between 32K and 64K.
+I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, specifically tailored for 48GB VRAM setups. My rig is built using 2 x 3090s with a Ryzen APU (APU used solely for desktop output—no VRAM wasted on the 3090s). I use TabbyAPI for inference, targeting context sizes between 32K and 64K.
 
 Every model I upload includes a `config.yml` file with my ideal TabbyAPI settings. If you're using my config, don’t forget to set `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync` to save some VRAM.
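The `PYTORCH_CUDA_ALLOC_CONF` tip above is an environment variable that must be set before the TabbyAPI process starts. A minimal sketch of how that might look; the launch command itself is an assumption for illustration and will depend on your TabbyAPI install:

```shell
# Switch PyTorch's CUDA caching allocator to cudaMallocAsync
# before launching TabbyAPI (saves some VRAM, per the README).
export PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync

# Hypothetical TabbyAPI launch using the bundled config -- adjust
# the script name/path to match your actual installation.
# python main.py --config config.yml

# Confirm the variable is visible to child processes.
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

Setting the variable inline (`PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync python …`) works too, as long as it is in the environment of the process that initializes CUDA.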