Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4
Tags: Text Generation · Transformers · Safetensors · English · qwen2 · chat · conversational · text-generation-inference · Inference Endpoints · 4-bit precision · gptq
arXiv: 2407.10671
License: apache-2.0
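The tags describe a 4-bit GPTQ quantization of Qwen2.5-1.5B-Instruct used through the standard transformers chat workflow. A minimal loading sketch, assuming a CUDA device, accelerate for `device_map="auto"`, and a GPTQ backend that transformers can use (e.g. optimum with auto-gptq or gptqmodel) is installed:

```python
# Minimal sketch: load the GPTQ-Int4 checkpoint and run one chat turn.
# Assumes a CUDA GPU, accelerate, and a GPTQ backend are available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # the repo's quantization_config drives the 4-bit GPTQ loading
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# The chat template handles the conversational format advertised in the tags.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens so only the newly generated answer is decoded.
output = tokenizer.decode(generated[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(output)
```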
Community (2)
#2 · Why does this model take up more memory than the 17B one · opened about 1 month ago by hhgz
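For memory questions like the one above, a quick check is to compare the weight footprint of the loaded checkpoints. A sketch using transformers' `get_memory_footprint()` (this counts parameters and buffers only, not KV-cache or activations; the second repo name is the unquantized counterpart, included purely for comparison):

```python
# Sketch: compare the in-memory weight size of two checkpoints.
from transformers import AutoModelForCausalLM

for name in [
    "Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4",
    "Qwen/Qwen2.5-1.5B-Instruct",  # unquantized counterpart, for comparison
]:
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype="auto", device_map="auto")
    gib = model.get_memory_footprint() / 1024**3  # bytes of parameters + buffers
    print(f"{name}: {gib:.2f} GiB")
    del model  # free memory before loading the next checkpoint
```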