
Mistral 7B v0.1 - GPTQ

The model published in this repo was quantized to 3-bit precision using AutoGPTQ.

Quantization details

All quantization parameters were taken from the GPTQ paper.

The GPTQ calibration data consisted of 128 random 2048-token segments from the C4 dataset.

The group size used for quantization is 64.
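
The exact calibration script is not included in this repo, but a setup like the one described above could be reproduced with AutoGPTQ roughly as follows. This is a minimal sketch, assuming the base model mistralai/Mistral-7B-v0.1 and C4 samples drawn via the datasets library; the sampling details below are illustrative.

import random
from datasets import load_dataset
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "mistralai/Mistral-7B-v0.1"
n_samples, seq_len = 128, 2048

# 3-bit quantization with a group size of 64, as described above
quantize_config = BaseQuantizeConfig(bits=3, group_size=64)

tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

# Draw 128 random 2048-token segments from C4 for calibration
stream = load_dataset("allenai/c4", "en", split="train", streaming=True)
text = "\n\n".join(row["text"] for _, row in zip(range(4 * n_samples), stream))
ids = tokenizer(text, return_tensors="pt").input_ids
calibration_data = []
for _ in range(n_samples):
    start = random.randint(0, ids.shape[1] - seq_len - 1)
    chunk = ids[:, start:start + seq_len]
    calibration_data.append({"input_ids": chunk, "attention_mask": chunk.new_ones(chunk.shape)})

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)
model.quantize(calibration_data)
model.save_quantized("Mistral-7B-v0.1-GPTQ-3bit-g64", use_safetensors=True)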

How to use this GPTQ model from Python code

Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise, if you have problems with the pre-built wheels, you should try building from source:

pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
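
Whichever route you take, a quick sanity check (a small sketch, not part of the original setup steps) can confirm that the installed versions meet the requirements listed above:

from importlib.metadata import version

# Minimum versions listed above
for pkg, minimum in [("transformers", "4.33.0"), ("optimum", "1.12.0"), ("auto-gptq", "0.4.2")]:
    print(f"{pkg} {version(pkg)} (requires >= {minimum})")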

You can then use the following code:


from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM

pretrained_model_dir = "iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64"

# Load the tokenizer shipped with the quantized model
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)

# Load the 3-bit quantized weights onto the first GPU
model = AutoGPTQForCausalLM.from_quantized(pretrained_model_dir, device="cuda:0", model_basename="model")

# Generate text with the standard Transformers pipeline
pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
print(pipeline("auto-gptq is")[0]["generated_text"])
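
Since Transformers 4.33 and Optimum 1.12 support GPTQ checkpoints natively, the model should also load through the plain Transformers API without calling AutoGPTQ directly. A minimal sketch (the generation arguments here are illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "iproskurina/Mistral-7B-v0.1-GPTQ-3bit-g64"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the available GPU(s)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("auto-gptq is", max_new_tokens=50)[0]["generated_text"])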