4-bit GPTQ quantization of the LLaMA part of LLaVA (https://github.com/haotian-liu/LLaVA, https://huggingface.co/liuhaotian/LLaVA-7b-delta-v0)
Quantized with:
CUDA_VISIBLE_DEVICES=0 python llama.py /workspace/LLaVA-7B-v0/ c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors llava-7b-v0-4bit-128g.safetensors
using the CUDA branch of GPTQ-for-LLaMa, https://github.com/oobabooga/GPTQ-for-LLaMa (commit 57a2629).
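The command above writes a single safetensors file containing the GPTQ-packed layers. A minimal sketch for sanity-checking that output, assuming the file is in the current directory and that the layers follow the usual GPTQ-for-LLaMa packing convention (the qweight/qzeros/scales tensor names are an assumption, not guaranteed for every branch):

```python
# Sanity-check sketch: list the packed GPTQ tensors in the quantized checkpoint.
# qweight/qzeros/scales naming is the usual GPTQ-for-LLaMa convention (assumed here).
from safetensors import safe_open

CKPT = "llava-7b-v0-4bit-128g.safetensors"  # file written by llama.py above

with safe_open(CKPT, framework="pt", device="cpu") as f:
    for name in f.keys():
        if name.endswith(("qweight", "qzeros", "scales")):
            t = f.get_tensor(name)
            print(name, tuple(t.shape), t.dtype)
```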
NOT COMPATIBLE WITH TEXT-GENERATION-WEBUI YET
(multimodal inference isn't supported; text-only inference is). Waiting for this PR: https://github.com/oobabooga/text-generation-webui/pull/1741
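Until that PR lands, text-only generation may still be possible outside the webui. A hedged sketch using AutoGPTQ (an alternative loader, not the workflow above): the paths, the hand-built quantize config, and whether AutoGPTQ accepts this particular LLaVA-flavoured config.json are all assumptions.

```python
# Hedged sketch: text-only generation from the 4-bit checkpoint via AutoGPTQ.
# MODEL_DIR must hold the original config.json/tokenizer files; BASENAME is the
# safetensors file name minus its extension. All of this is assumed, not verified.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import AutoTokenizer

MODEL_DIR = "/workspace/LLaVA-7B-v0"    # assumed local path
BASENAME = "llava-7b-v0-4bit-128g"      # quantized checkpoint, minus ".safetensors"

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, use_fast=False)
model = AutoGPTQForCausalLM.from_quantized(
    MODEL_DIR,
    model_basename=BASENAME,
    use_safetensors=True,
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128),  # matches --wbits 4 --groupsize 128
    device="cuda:0",
)

prompt = "Describe what a vision-language model does."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```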
license: other