
MPT-30B-Instruct (4-bit 128g AWQ Quantized)

MPT-30B-Instruct is a model for short-form instruction following.

This is a 4-bit, group size 128 AWQ-quantized version of the model. For more information about AWQ quantization, please refer to the llm-awq repository (https://github.com/mit-han-lab/llm-awq) and the AWQ paper cited below.
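As a rough back-of-the-envelope illustration (an estimate added here, not a figure from the original card), 4-bit weights with one fp16 scale per group of 128 weights come out to roughly 4.1 effective bits per weight, so the quantized weights of a ~30B-parameter model fit in well under 20 GiB:

# Back-of-the-envelope storage estimate; per-group zero points add a small amount on top.
params = 30e9
bits_per_weight = 4 + 16 / 128  # 4-bit weight plus an fp16 scale shared by each group of 128
print(f"~{params * bits_per_weight / 8 / 2**30:.1f} GiB of quantized weights")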

Model Date

July 5, 2023

Model License

Please refer to the original MPT model license (link).

Please refer to the AWQ quantization license (link).

CUDA Version

This model was successfully tested on CUDA driver v530.30.02 and runtime v11.7 with Python v3.10.11. Please note that AWQ requires NVIDIA GPUs with compute capability of 8.0 or higher.

For Docker users, the nvcr.io/nvidia/pytorch:23.06-py3 image (CUDA runtime v12.1, but otherwise the same configuration as above) has also been verified to work.
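A quick way to confirm a GPU meets this requirement (a minimal check added here, not part of the original card) is to query it from PyTorch:

import torch

# AWQ's fused kernels need compute capability 8.0 or higher (Ampere or newer).
major, minor = torch.cuda.get_device_capability()
print(torch.cuda.get_device_name(), f"(compute capability {major}.{minor})")
assert (major, minor) >= (8, 0), "AWQ requires an NVIDIA GPU with compute capability >= 8.0"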

How to Use

git clone https://github.com/mit-han-lab/llm-awq \
&& cd llm-awq \
&& git checkout f084f40bd996f3cf3a0633c1ad7d9d476c318aaa \
&& pip install -e . \
&& cd awq/kernels \
&& python setup.py install
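To confirm the install (an optional sanity check, not part of the original instructions), the quantizer helper used below should import cleanly:

python -c "from awq.quantize.quantizer import real_quantize_model_weight; print('llm-awq OK')"

With the package and CUDA kernels in place, the quantized model can be downloaded, loaded, and run as follows: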
import time
import torch
from awq.quantize.quantizer import real_quantize_model_weight
from transformers import AutoModelForCausalLM, AutoConfig, AutoTokenizer, TextStreamer
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from huggingface_hub import snapshot_download

model_name = "abhinavkulkarni/mosaicml-mpt-30b-instruct-w4-g128-awq"

# Config
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)

# Tokenizer
# The MPT config records a tokenizer_name; fall back to this repo if loading it fails.
try:
    tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name, trust_remote_code=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_special_tokens=True)  # prints tokens as they are generated

# Model
# Quantization settings must match how this checkpoint was produced:
# 4-bit weights, group size 128, with zero points.
w_bit = 4
q_config = {
    "zero_point": True,
    "q_group_size": 128,
}

# Download the quantized checkpoint from the Hugging Face Hub
load_quant = snapshot_download(model_name)

# Build the model skeleton on the meta device (no weights allocated yet),
# swap in the quantized layer structure, then load and dispatch the real weights.
with init_empty_weights():
    model = AutoModelForCausalLM.from_config(config=config,
                                             torch_dtype=torch.float16,
                                             trust_remote_code=True)

real_quantize_model_weight(model, w_bit=w_bit, q_config=q_config, init_only=True)
model.tie_weights()

model = load_checkpoint_and_dispatch(model, load_quant, device_map="balanced")

# Inference
prompt = f'''What is the difference between nuclear fusion and fission?
###Response:'''

input_ids = tokenizer(prompt, return_tensors='pt').input_ids.cuda()
output = model.generate(
    inputs=input_ids, 
    temperature=0.7,
    max_new_tokens=512,
    top_p=0.15,
    top_k=0,
    repetition_penalty=1.1,
    eos_token_id=tokenizer.eos_token_id,
    streamer=streamer)
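The TextStreamer prints tokens to stdout as they are generated; to also get the full decoded string (a small addition, not in the original snippet), decode the returned token IDs:

# Decode the full sequence (prompt plus completion) returned by generate()
print(tokenizer.decode(output[0], skip_special_tokens=True))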

Evaluation

This evaluation was done using LM-Eval.

MPT-30B-Instruct

| Task     | Version | Metric          | Value   | Stderr |
|----------|--------:|-----------------|--------:|--------|
| wikitext |       1 | word_perplexity | 11.3275 |        |
|          |         | byte_perplexity |  1.5744 |        |
|          |         | bits_per_byte   |  0.6548 |        |

MPT-30B-Instruct (4-bit 128-group AWQ)

| Task     | Version | Metric          | Value   | Stderr |
|----------|--------:|-----------------|--------:|--------|
| wikitext |       1 | word_perplexity | 11.6058 |        |
|          |         | byte_perplexity |  1.5816 |        |
|          |         | bits_per_byte   |  0.6614 |        |
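For context, quantization costs about 2.5% in word-level perplexity relative to the fp16 baseline, as a quick calculation from the two tables above shows:

# Relative word-perplexity increase of the 4-bit AWQ model over the fp16 baseline on wikitext
fp16_ppl, awq_ppl = 11.3275, 11.6058
print(f"{(awq_ppl / fp16_ppl - 1) * 100:.2f}% higher word perplexity")  # ~2.46%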

Acknowledgements

The MPT model was originally finetuned by Sam Havens and the MosaicML NLP team. Please cite this model using the following format:

@online{MosaicML2023Introducing,
    author    = {MosaicML NLP Team},
    title     = {Introducing MPT-30B: A New Standard for Open-Source, Commercially Usable LLMs},
    year      = {2023},
    url       = {www.mosaicml.com/blog/mpt-30b},
    note      = {Accessed: 2023-03-28},
    urldate   = {2023-03-28}
}

The model was quantized using the AWQ technique. If you find AWQ useful or relevant to your research, please kindly cite the paper:

@article{lin2023awq,
  title={AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration},
  author={Lin, Ji and Tang, Jiaming and Tang, Haotian and Yang, Shang and Dang, Xingyu and Han, Song},
  journal={arXiv},
  year={2023}
}