---
tags:
  - finetuned
  - quantized
  - 4-bit
  - AWQ
  - transformers
  - pytorch
  - safetensors
  - mistral
  - text-generation
  - conversational
  - arxiv:2310.06825
  - license:apache-2.0
  - autotrain_compatible
  - has_space
  - text-generation-inference
  - region:us
model_name: Mistral-7B-Instruct-v0.1-AWQ
base_model: mistralai/Mistral-7B-Instruct-v0.1
inference: false
model_creator: mistralai
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
---

## Description

MaziyarPanahi/Mistral-7B-Instruct-v0.1-AWQ is a 4-bit AWQ quantization of mistralai/Mistral-7B-Instruct-v0.1.

## How to use

Install the necessary packages:

```sh
pip install --upgrade accelerate autoawq transformers
```

Example Python code:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "MaziyarPanahi/Mistral-7B-Instruct-v0.1-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")

# Mistral-7B-Instruct was trained on the [INST] ... [/INST] chat format;
# apply_chat_template builds that prompt from a list of messages.
messages = [
    {
        "role": "user",
        "content": "Hello, can you provide me with the top-3 cool places to visit in Paris?",
    }
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

out = model.generate(inputs, max_new_tokens=300)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
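As a lighter-weight alternative, the same checkpoint can be driven through the transformers `pipeline` API, which handles tokenization, chat formatting, and decoding in one call. This is a sketch, assuming a recent transformers version with chat-aware pipelines and a CUDA device available; the example prompt is illustrative.

```python
from transformers import pipeline

# Build a text-generation pipeline around the quantized model;
# device_map="auto" places the weights on the available GPU(s).
pipe = pipeline(
    "text-generation",
    model="MaziyarPanahi/Mistral-7B-Instruct-v0.1-AWQ",
    device_map="auto",
)

# Chat-style input: the pipeline applies the model's chat template itself.
messages = [{"role": "user", "content": "Name three landmarks in Paris."}]
result = pipe(messages, max_new_tokens=100)
print(result[0]["generated_text"])
```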