afrideva/Mixtral-GQA-400m-v2-GGUF
Text Generation · GGUF · English · ggml · quantized · conversational
Quantization variants: q2_k, q3_k_m, q4_k_m, q5_k_m, q6_k, q8_0
License: apache-2.0
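The quantized GGUF files in this repo can be fetched with the huggingface_hub client, the same library the commit messages show was used for the uploads. A minimal download sketch, assuming huggingface_hub is installed; the q4_k_m variant is chosen arbitrarily from the file listing below:

```python
# Minimal sketch: download one quantized GGUF file from this repo.
# Assumes `pip install huggingface_hub`; the filename comes from the file
# listing on this page, and the choice of q4_k_m is illustrative only.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="afrideva/Mixtral-GQA-400m-v2-GGUF",
    filename="mixtral-gqa-400m-v2.q4_k_m.gguf",
)
print(gguf_path)  # local cache path of the downloaded file
```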
Branch: main · 1 contributor · History: 9 commits
Latest commit: b460d08 by afrideva, "Upload README.md with huggingface_hub", 11 months ago
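The file listing below can also be retrieved programmatically; a short sketch using huggingface_hub's HfApi, assuming the same public repo id as above (no token required):

```python
# Sketch: list the files tracked in this repo (mirrors the table below).
# Assumes `pip install huggingface_hub`.
from huggingface_hub import HfApi

api = HfApi()
for name in api.list_repo_files("afrideva/Mixtral-GQA-400m-v2-GGUF"):
    print(name)
```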
File · Size · Last commit · Uploaded
.gitattributes · 1.99 kB · "Upload mixtral-gqa-400m-v2.q8_0.gguf with huggingface_hub" · 11 months ago
README.md · 2.34 kB · "Upload README.md with huggingface_hub" · 11 months ago
mixtral-gqa-400m-v2.fp16.gguf · 4.01 GB · LFS · "Upload mixtral-gqa-400m-v2.fp16.gguf with huggingface_hub" · 11 months ago
mixtral-gqa-400m-v2.q2_k.gguf · 703 MB · LFS · "Upload mixtral-gqa-400m-v2.q2_k.gguf with huggingface_hub" · 11 months ago
mixtral-gqa-400m-v2.q3_k_m.gguf · 900 MB · LFS · "Upload mixtral-gqa-400m-v2.q3_k_m.gguf with huggingface_hub" · 11 months ago
mixtral-gqa-400m-v2.q4_k_m.gguf · 1.15 GB · LFS · "Upload mixtral-gqa-400m-v2.q4_k_m.gguf with huggingface_hub" · 11 months ago
mixtral-gqa-400m-v2.q5_k_m.gguf · 1.39 GB · LFS · "Upload mixtral-gqa-400m-v2.q5_k_m.gguf with huggingface_hub" · 11 months ago
mixtral-gqa-400m-v2.q6_k.gguf · 1.65 GB · LFS · "Upload mixtral-gqa-400m-v2.q6_k.gguf with huggingface_hub" · 11 months ago
mixtral-gqa-400m-v2.q8_0.gguf · 2.13 GB · LFS · "Upload mixtral-gqa-400m-v2.q8_0.gguf with huggingface_hub" · 11 months ago
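Any of these GGUF files can then be loaded with a GGUF-capable runtime. A sketch using llama-cpp-python, which is an assumption here (the repo does not prescribe a runtime), with illustrative generation settings; the prompt format may need adjusting to whatever chat template this model expects:

```python
# Sketch: run the q4_k_m quant locally with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`; the quant choice,
# context size, and sampling settings are illustrative only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="afrideva/Mixtral-GQA-400m-v2-GGUF",
    filename="mixtral-gqa-400m-v2.q4_k_m.gguf",
)
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm(
    "Write one sentence about mixture-of-experts models.",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```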