xaskasdf/phi-3-mini-4k-instruct-gguf
Tags: GGUF · Inference Endpoints · conversational
1 contributor · History: 4 commits
Latest commit: "Upload extended quantizations" by xaskasdf (c345b8e, verified, 3 months ago)
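
The GGUF weights in this repo can be fetched programmatically rather than through the web UI. A minimal sketch using `huggingface_hub` follows; the repo ID is taken from this page, and the choice of the q4_k file is only an example (any filename from the table below works):

```python
# Minimal sketch: download one of the quantized GGUF files from this repo.
# Assumes the `huggingface_hub` package is installed (pip install huggingface_hub).
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="xaskasdf/phi-3-mini-4k-instruct-gguf",
    filename="phi-3-mini-4k-instruct-q4_k.gguf",  # any file from the table below
)
print(model_path)  # local cache path of the downloaded file
```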
| File | Size | Last commit |
|---|---|---|
| .gitattributes | 2.76 kB | Upload extended quantizations |
| phi-3-mini-4k-instruct-bf16.gguf | 7.64 GB | Upload extended quantizations |
| phi-3-mini-4k-instruct-f16.gguf | 7.64 GB | Upload converted and quantized model weights |
| phi-3-mini-4k-instruct-f32.gguf | 15.1 GB | Upload extended quantizations |
| phi-3-mini-4k-instruct-iq2_m.gguf | 1.45 GB | Upload importance matrix, heavily quantized model weights and q2 |
| phi-3-mini-4k-instruct-iq2_s.gguf | 1.35 GB | Upload importance matrix, heavily quantized model weights and q2 |
| phi-3-mini-4k-instruct-iq2_xs.gguf | 1.28 GB | Upload importance matrix, heavily quantized model weights and q2 |
| phi-3-mini-4k-instruct-iq2_xxs.gguf | 1.17 GB | Upload importance matrix, heavily quantized model weights and q2 |
| phi-3-mini-4k-instruct-iq4_nl.gguf | 2.29 GB | Upload importance matrix, heavily quantized model weights and q2 |
| phi-3-mini-4k-instruct-iq4_xs.gguf | 2.18 GB | Upload importance matrix, heavily quantized model weights and q2 |
| phi-3-mini-4k-instruct-q2_k.gguf | 1.53 GB | Upload importance matrix, heavily quantized model weights and q2 |
| phi-3-mini-4k-instruct-q3_k.gguf | 2.07 GB | Upload converted and quantized model weights |
| phi-3-mini-4k-instruct-q4_0.gguf | 2.29 GB | Upload converted and quantized model weights |
| phi-3-mini-4k-instruct-q4_k.gguf | 2.51 GB | Upload converted and quantized model weights |
| phi-3-mini-4k-instruct-q5_0.gguf | 2.76 GB | Upload converted and quantized model weights |
| phi-3-mini-4k-instruct-q5_k.gguf | 2.93 GB | Upload converted and quantized model weights |
| phi-3-mini-4k-instruct-q6_k.gguf | 3.25 GB | Upload converted and quantized model weights |
| phi-3-mini-4k-instruct-q8_0.gguf | 4.15 GB | Upload converted and quantized model weights |
| phi-3-mini-4k.imatrix | 2.23 MB | Upload importance matrix, heavily quantized model weights and q2 |

All files were last updated 3 months ago and are flagged Safe; every file except .gitattributes is stored via Git LFS.
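
The phi-3-mini-4k.imatrix file is the importance-matrix data referenced in the commit messages; llama.cpp's quantization tooling can consume such a matrix when producing the low-bit iq2_*/iq4_* variants, which is presumably why it is published alongside them here. To run one of the quantized files locally, one common route is `llama-cpp-python`; the sketch below is only an illustration and assumes that package is installed and that the q4_k file was downloaded as shown above (the 4096-token context matches the model's 4k window):

```python
# Minimal sketch: run a chat completion against one of the GGUF quantizations
# using llama-cpp-python (pip install llama-cpp-python). Paths and parameters
# are example assumptions, not values taken from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="phi-3-mini-4k-instruct-q4_k.gguf",  # local path from the download step above
    n_ctx=4096,  # context size matching the model's 4k window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what a GGUF file is in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

Smaller quantizations from the table (for example the iq2_* files) trade answer quality for lower memory use, so the same code applies to any of them by changing `model_path`.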