Gemma-2-9B-Instruct-4Bit-GPTQ
Original Model: gemma-2-9b-it
Model Creator: google
Quantization
This model was quantized to 4-bit precision with the AutoGPTQ library; a sketch of a comparable quantization run is shown below.
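The following is a minimal sketch of a 4-bit GPTQ conversion with the auto-gptq package. The group size, activation-ordering flag, and calibration text are assumptions for illustration; the card does not state the exact settings used for this checkpoint.

```python
# Minimal sketch of a 4-bit GPTQ conversion with the auto-gptq package.
# group_size, desc_act, and the calibration sample are assumptions; the
# settings actually used for this repository are not documented here.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

source_model = "google/gemma-2-9b-it"
output_dir = "Gemma-2-9B-Instruct-4Bit-GPTQ"

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights, matching this repository
    group_size=128,  # assumed quantization group size
    desc_act=False,  # assumed activation-ordering setting
)

tokenizer = AutoTokenizer.from_pretrained(source_model)

# GPTQ needs calibration data; real runs use a larger, more varied corpus.
examples = [
    tokenizer("GPTQ calibrates the 4-bit weights on sample text like this sentence.")
]

model = AutoGPTQForCausalLM.from_pretrained(source_model, quantize_config)
model.quantize(examples)

model.save_quantized(output_dir, use_safetensors=True)
tokenizer.save_pretrained(output_dir)
```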
Metrics
| Benchmark | Metric | Gemma 2 GPTQ | Gemma 2 9B it |
|-----------|--------|--------------|---------------|
| PIQA      | 0-shot | 80.52        | 80.79         |
| MMLU      | 5-shot | 52.0         | 50.00         |
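For local use, the quantized weights can be loaded through transformers, assuming the optimum and auto-gptq packages are installed so the GPTQ layers are recognized. A minimal sketch; the prompt and generation settings are illustrative:

```python
# Minimal sketch of local inference with the quantized checkpoint.
# Requires transformers plus optimum and auto-gptq for GPTQ weight loading;
# the prompt and max_new_tokens are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Granther/Gemma-2-9B-Instruct-4Bit-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",          # place layers on the available GPU(s)
    torch_dtype=torch.float16,  # FP16 activations alongside the 4-bit weights
)

# Gemma-2 instruct models ship a chat template; use it to format the prompt.
messages = [{"role": "user", "content": "Explain GPTQ quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```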