---
license: cc-by-4.0
tags:
- requests
- gguf
- quantized
# > [!WARNING]
# > **Notice:**
# > Requests are paused at the moment due to unforeseen circumstances.
---
> [!TIP]
> I apologize for disrupting your experience.
> My upload speeds have been cooked and unstable lately.
> I'd need to move to get a better provider.
> If you **want** to and are able to...
> [**You can support my various endeavors here (Ko-fi).**](https://ko-fi.com/Lewdiculous)
> In the meantime I'll keep working to make do with the resources at hand.
![requests-banner.png](https://huggingface.co/Lewdiculous/Model-Requests/resolve/main/requests-banner.png)
# Welcome to my GGUF-IQ-Imatrix Model Quantization Requests card!
Please read everything.
This card is meant only for requesting GGUF-IQ-Imatrix quants of models that meet the requirements below.
**Requirements to request GGUF-IQ-Imatrix model quantizations:**
For the model:
- Maximum model parameter size of **11B**.
*At the moment I am unable to accept requests for larger models due to hardware and time limitations.*
*Preferably for Mistral based models in the creative/roleplay niche.*
Important:
- Fill the request template as outlined in the next section.
#### How to request a model quantization:
1. Open a [**New Discussion**](https://huggingface.co/Lewdiculous/Model-Requests/discussions/new) titled "`Request: Model-Author/Model-Name`", for example, "`Request: Nitral-AI/Infinitely-Laydiculous-7B`", without the quotation marks.
2. Include the following template in your post and fill the required information ([example request here](https://huggingface.co/Lewdiculous/Model-Requests/discussions/1)):
```
**[Required] Model name:**
**[Required] Model link:**
**[Required] Brief description:**
**[Required] An image/direct image link to represent the model (square shaped):**
**[Optional] Additional quants (if you want any):**
Default list of quants for reference:
"IQ3_M", "IQ3_XXS",
"Q4_K_M", "Q4_K_S", "IQ4_NL", "IQ4_XS",
"Q5_K_M", "Q5_K_S",
"Q6_K",
"Q8_0"
```