Qwen/Qwen2-VL-72B-Instruct-AWQ
Likes: 14
Tags: Image-Text-to-Text, Safetensors, English, qwen2_vl, multimodal, conversational, 4-bit precision, awq
License: tongyi-qianwen
Branch: main (2 community discussions)
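The eleven safetensors shards in the file listing below sum to roughly 43 GB. As a rough plausibility check (a sketch, not an official figure: the ~72.7B parameter count and the assumption that AWQ stores weights at about 4 bits each, plus fp16 scales, embeddings, and the vision tower, are mine, not stated on this page), that total is consistent with 4-bit quantization of a 72B-parameter model:

```python
# Shard sizes (GB) as shown in the repository file listing.
shard_sizes_gb = [3.97, 3.91, 3.99, 3.99, 3.91, 3.99,
                  3.99, 3.91, 3.99, 3.99, 3.33]
total_gb = sum(shard_sizes_gb)
print(f"total checkpoint size: {total_gb:.2f} GB")  # ~42.97 GB

# Rough plausibility check (assumed values, not from this page):
# ~72.7e9 parameters at 4 bits/weight, before overhead for fp16
# quantization scales/zeros and non-quantized components.
params = 72.7e9
four_bit_gb = params * 4 / 8 / 1e9  # weights alone, in GB
print(f"4-bit weights alone: {four_bit_gb:.1f} GB")
```

The gap between the ~36 GB of packed 4-bit weights and the ~43 GB on disk would be accounted for by quantization metadata and layers kept in higher precision.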
Qwen2-VL-72B-Instruct-AWQ: 4 contributors, 8 commits.
Latest commit: 712d5a5 by yangapku, "fix(ckpt) fix corrupted ckpt file", 4 days ago.
File | Size | Last commit | Date
.gitattributes | 1.52 kB | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
LICENSE | 6.96 kB | Create LICENSE | 11 days ago
README.md | 18.9 kB | Update README.md | 8 days ago
added_tokens.json | 392 Bytes | Upload folder using huggingface_hub | 12 days ago
chat_template.json | 1.05 kB | Upload folder using huggingface_hub | 12 days ago
config.json | 1.33 kB | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
generation_config.json | 227 Bytes | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
merges.txt | 1.67 MB | Upload folder using huggingface_hub | 12 days ago
model-00001-of-00011.safetensors | 3.97 GB (LFS) | fix(ckpt) fix corrupted ckpt file | 4 days ago
model-00002-of-00011.safetensors | 3.91 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00003-of-00011.safetensors | 3.99 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00004-of-00011.safetensors | 3.99 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00005-of-00011.safetensors | 3.91 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00006-of-00011.safetensors | 3.99 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00007-of-00011.safetensors | 3.99 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00008-of-00011.safetensors | 3.91 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00009-of-00011.safetensors | 3.99 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00010-of-00011.safetensors | 3.99 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model-00011-of-00011.safetensors | 3.33 GB (LFS) | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
model.safetensors.index.json | 209 kB | fix(pad zero) pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm | 5 days ago
preprocessor_config.json | 594 Bytes | Upload folder using huggingface_hub | 12 days ago
special_tokens_map.json | 613 Bytes | Upload folder using huggingface_hub | 12 days ago
tokenizer.json | 7.03 MB | Upload folder using huggingface_hub | 12 days ago
tokenizer_config.json | 4.3 kB | Upload folder using huggingface_hub | 12 days ago
vocab.json | 2.78 MB | Upload folder using huggingface_hub | 12 days ago
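The recurring commit message "pad intermediate_size to 29696 to make sure quantized model can use 8 tensor-parallel in vllm" reflects a divisibility constraint. A minimal sketch of the arithmetic (the unpadded intermediate_size of 29568 and the AWQ group size of 128 are my assumptions, not stated on this page): with tensor parallelism, each of the 8 ranks holds intermediate_size / 8 channels of the MLP, and AWQ stores one set of quantization scales per group of 128 channels, so each rank's slice must be a whole number of groups.

```python
# Assumed values (hedged): the unpadded intermediate_size and the AWQ
# group size below are common choices, not stated on this page.
orig_intermediate = 29568    # assumed pre-padding value
padded_intermediate = 29696  # value from the commit message
tp = 8                       # tensor-parallel ranks (from the commit message)
group = 128                  # assumed AWQ quantization group size

def per_rank_ok(intermediate: int) -> bool:
    """Each rank's slice must be a whole number of quantization groups."""
    if intermediate % tp != 0:
        return False
    return (intermediate // tp) % group == 0

print(per_rank_ok(orig_intermediate))    # False: 29568 / 8 = 3696, not a multiple of 128
print(per_rank_ok(padded_intermediate))  # True:  29696 / 8 = 3712 = 29 * 128
```

Under these assumptions, padding the hidden dimension with zero weights costs a little disk space but lets the AWQ checkpoint shard cleanly across 8 GPUs.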