TheBloke / LLaMa-65B-GPTQ (5 likes)

Tags: Text Generation, Transformers, Safetensors, llama, text-generation-inference, 4-bit precision, gptq
License: other
Community discussions (1)

#1, opened over 1 year ago by AARon99 (5 likes):
"I was literally just trying to do this and kept running short of VRAM by 1.5 GB, thank you!"
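The VRAM remark above is the point of the 4-bit GPTQ format: the quantized 65B weights fit on hardware where the fp16 checkpoint would not. As a minimal sketch (not taken from this model card), here is one way to load the repo with transformers, assuming GPTQ support is available through the optimum and auto-gptq packages; the prompt, device_map, and generation settings are illustrative only.

```python
# Minimal, illustrative loading sketch for a GPTQ checkpoint with transformers.
# Assumes: pip install transformers accelerate optimum auto-gptq
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/LLaMa-65B-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" lets accelerate spread the 4-bit weights across available GPUs/CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```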