---
license: cc
datasets:
- VMware/open-instruct-v1-oasst-dolly-hhrlhf
language:
- en
pipeline_tag: text-generation
---
|

# SearchUnify-ML/xgen-7b-8k-open-instruct-gptq

These are 4-bit GPTQ model files for [VMware's XGen 7B 8K Open Instruct](https://huggingface.co/VMware/xgen-7b-8k-open-instruct). They are the result of quantising the model to 4 bits using GPTQ-for-LLaMa.

The model is open for COMMERCIAL USE.

## How to use this GPTQ model from Python code

First, make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

```shell
pip install auto-gptq
```