
## Creation

```python
from transformers import AutoProcessor, AutoModelForCausalLM

from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot, wrap_hf_model_class

MODEL_ID = "microsoft/Phi-3.5-vision-instruct"

# Load model. Phi-3.5-vision ships custom modeling code, so the class is
# wrapped for llmcompressor compatibility and loaded with trust_remote_code=True.
model_class = wrap_hf_model_class(AutoModelForCausalLM)
model = model_class.from_pretrained(
    MODEL_ID,
    device_map="auto",
    torch_dtype="auto",
    trust_remote_code=True,
    _attn_implementation="eager",
)
processor = AutoProcessor.from_pretrained(MODEL_ID, trust_remote_code=True)

# Configure the quantization algorithm and scheme.
# In this case, we:
#   * quantize the weights to FP8 with per-channel scales via PTQ
#   * quantize the activations to FP8 with dynamic per-token scales
# The lm_head and the vision tower are ignored and kept at original precision.
recipe = QuantizationModifier(
    targets="Linear",
    scheme="FP8_DYNAMIC",
    ignore=["re:.*lm_head", "re:model.vision_embed_tokens.*"],
)

# Apply quantization and save to disk in compressed-tensors format.
SAVE_DIR = MODEL_ID.split("/")[1] + "-FP8-Dynamic"
oneshot(model=model, recipe=recipe, output_dir=SAVE_DIR)
processor.save_pretrained(SAVE_DIR)
```
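
After the export, the quantization metadata is written into the checkpoint's `config.json`. As a quick sanity check, you can inspect it; this is a minimal sketch, and the exact contents of `quantization_config` depend on your compressed-tensors version:

```python
# Hypothetical sanity check: print the compressed-tensors quantization
# metadata from the exported config. SAVE_DIR is defined in the script above.
import json
from pathlib import Path

cfg = json.loads((Path(SAVE_DIR) / "config.json").read_text())
print(json.dumps(cfg.get("quantization_config", {}), indent=2))
```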
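Checkpoints saved in compressed-tensors format can be served with vLLM, which reads the quantization config and dispatches FP8 kernels. The sketch below is a minimal text-only example under the assumption that your vLLM build supports Phi-3.5-vision and compressed-tensors FP8; the flags shown are illustrative, and image prompting follows the placeholder conventions in the vLLM multimodal docs:

```python
# Minimal serving sketch (assumptions: vLLM version with Phi-3.5-vision and
# compressed-tensors FP8 support; flags may differ across releases).
from vllm import LLM, SamplingParams

llm = LLM(
    model="nm-testing/Phi-3.5-vision-instruct-FP8-dynamic",
    trust_remote_code=True,
    max_model_len=4096,
)
params = SamplingParams(temperature=0.0, max_tokens=64)
outputs = llm.generate("Summarize FP8 dynamic quantization in one sentence.", params)
print(outputs[0].outputs[0].text)
```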
