Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4
Tags: Image-Text-to-Text · Safetensors · English · qwen2_vl · multimodal · conversational · 4-bit precision · gptq
Papers: arXiv:2409.12191 · arXiv:2308.12966
License: tongyi-qianwen
Files and versions — 3 contributors · History: 6 commits
Latest commit 288c348 by 可亲, 2 months ago: "fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm"
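The arithmetic behind the latest commit can be sketched briefly. A minimal illustration, assuming the usual GPTQ group size of 128 and an original `intermediate_size` of 29568 (both assumptions, not stated on this page): each of vLLM's 8 tensor-parallel ranks gets a slice of the MLP, and each slice's width must be a multiple of the quantization group size, so the dimension is rounded up to the next multiple of 8 × 128 = 1024.

```python
# Hypothetical sketch of the pad-zero fix: round intermediate_size up so
# that it splits evenly across tensor-parallel ranks AND each shard is a
# whole number of GPTQ quantization groups. Helper name is illustrative.

def min_padded_size(size: int, tp: int, group_size: int = 128) -> int:
    """Smallest value >= size divisible by tp * group_size."""
    multiple = tp * group_size
    return ((size + multiple - 1) // multiple) * multiple

# Assumed original width of the Qwen2-VL-72B MLP (not shown on this page):
original = 29568
padded = min_padded_size(original, tp=8)   # next multiple of 8 * 128 = 1024
print(padded)  # 29696, matching the commit message
```

Padding with zero rows is behavior-preserving: the extra channels contribute nothing to the MLP output, they only make the shard widths divide evenly.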
| File | Size | Last commit message | Updated |
|---|---|---|---|
| .gitattributes | 1.52 kB | initial commit | 2 months ago |
| LICENSE | 6.96 kB | Create LICENSE | 2 months ago |
| README.md | 18.9 kB | Update README.md | 2 months ago |
| added_tokens.json | 392 Bytes | Upload folder using huggingface_hub | 2 months ago |
| chat_template.json | 1.05 kB | Upload folder using huggingface_hub | 2 months ago |
| config.json | 1.39 kB | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| generation_config.json | 247 Bytes | Upload folder using huggingface_hub | 2 months ago |
| merges.txt | 1.67 MB | Upload folder using huggingface_hub | 2 months ago |
| model-00001-of-00011.safetensors | 3.97 GB (LFS) | Upload folder using huggingface_hub | 2 months ago |
| model-00002-of-00011.safetensors | 3.92 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00003-of-00011.safetensors | 4 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00004-of-00011.safetensors | 4 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00005-of-00011.safetensors | 3.92 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00006-of-00011.safetensors | 4 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00007-of-00011.safetensors | 4 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00008-of-00011.safetensors | 3.92 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00009-of-00011.safetensors | 4 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00010-of-00011.safetensors | 4 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model-00011-of-00011.safetensors | 3.33 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| model.safetensors.index.json | 244 kB | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 2 months ago |
| preprocessor_config.json | 594 Bytes | Upload folder using huggingface_hub | 2 months ago |
| quantize_config.json | 207 Bytes | Upload folder using huggingface_hub | 2 months ago |
| special_tokens_map.json | 613 Bytes | Upload folder using huggingface_hub | 2 months ago |
| tokenizer.json | 7.03 MB | Upload folder using huggingface_hub | 2 months ago |
| tokenizer_config.json | 4.3 kB | Upload folder using huggingface_hub | 2 months ago |
| vocab.json | 2.78 MB | Upload folder using huggingface_hub | 2 months ago |
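Since the commit history pads `intermediate_size` specifically so the GPTQ checkpoint shards cleanly across 8 GPUs in vLLM, a typical way to serve it is vLLM's OpenAI-compatible server with `--tensor-parallel-size 8`. A hedged sketch, not from this page — it assumes a vLLM build with Qwen2-VL support and an 8-GPU host; the flags used are standard vLLM options:

```shell
# Sketch: serve this repository's checkpoint with vLLM across 8 GPUs.
# --tensor-parallel-size 8 is what the pad-zero commit enables;
# --quantization gptq selects the GPTQ kernels (often auto-detected
# from quantize_config.json, so the flag is belt-and-braces).
vllm serve Qwen/Qwen2-VL-72B-Instruct-GPTQ-Int4 \
    --tensor-parallel-size 8 \
    --quantization gptq
```

The server then exposes an OpenAI-style `/v1/chat/completions` endpoint that accepts the model's image-plus-text chat format.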