TheBloke / Mixtral-8x7B-v0.1-GGUF
Transformers · GGUF · 5 languages · mixtral · License: apache-2.0
1 contributor · History: 26 commits
Latest commit 6348bb4 by TheBloke, 11 months ago: GGUF model commit (made with llama.cpp commit 8a7b2fa)
| File | Size | LFS | Last commit | Updated |
|------|------|-----|-------------|---------|
| .gitattributes | 2.49 kB | | Rename mixtral-8x7b-v0.1.Q4_K_M.gguf to mixtral-8x7b-v0.1.Q4_K.gguf | 11 months ago |
| README.md | 16.3 kB | | Update README.md | 11 months ago |
| config.json | 31 Bytes | | GGUF model commit (made with llama.cpp commit 8a7b2fa) | 11 months ago |
| mixtral-8x7b-v0.1.Q2_K.gguf | 15.6 GB | LFS | GGUF model commit (made with llama.cpp commit 8a7b2fa) | 11 months ago |
| mixtral-8x7b-v0.1.Q3_K.gguf | 20.4 GB | LFS | Rename mixtral-8x7b-v0.1.Q3_K_L.gguf to mixtral-8x7b-v0.1.Q3_K.gguf | 11 months ago |
| mixtral-8x7b-v0.1.Q4_0.gguf | 26.4 GB | LFS | GGUF model commit (made with llama.cpp commit 8a7b2fa) | 11 months ago |
| mixtral-8x7b-v0.1.Q4_K.gguf | 26.4 GB | LFS | GGUF model commit (made with llama.cpp commit 8a7b2fa) | 11 months ago |
| mixtral-8x7b-v0.1.Q5_0.gguf | 32.2 GB | LFS | GGUF model commit (made with llama.cpp commit 8a7b2fa) | 11 months ago |
| mixtral-8x7b-v0.1.Q5_K.gguf | 32.2 GB | LFS | Rename mixtral-8x7b-v0.1.Q5_K_M.gguf to mixtral-8x7b-v0.1.Q5_K.gguf | 11 months ago |
| mixtral-8x7b-v0.1.Q6_K.gguf | 38.4 GB | LFS | GGUF model commit (made with llama.cpp commit 8a7b2fa) | 11 months ago |
| mixtral-8x7b-v0.1.Q8_0.gguf | 49.6 GB | LFS | GGUF model commit (made with llama.cpp commit 8a7b2fa) | 11 months ago |
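Each .gguf file above is a complete, standalone quantized model, so only one of them needs to be downloaded; in general the lower-bit quants (Q2_K, Q3_K) trade quality for a smaller memory footprint, while Q6_K and Q8_0 stay closest to the original weights. Below is a minimal sketch of fetching a single quant and running a completion with it. The `huggingface_hub` and `llama-cpp-python` packages, the 4096-token context size, and the CPU-only setting are assumptions not stated on this page; the filename matches the Q4_K entry in the table.

```python
# Minimal sketch: download one quant from this repo and run a text completion.
# Assumes `huggingface_hub` and `llama-cpp-python` are installed and the
# machine has enough memory for the chosen file (~26 GB for Q4_K plus context).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download only the single GGUF file (not the whole repo) into the local HF cache.
model_path = hf_hub_download(
    repo_id="TheBloke/Mixtral-8x7B-v0.1-GGUF",
    filename="mixtral-8x7b-v0.1.Q4_K.gguf",
)

# Load the GGUF file; n_gpu_layers=0 keeps everything on the CPU,
# raise it (or use -1) with a GPU-enabled build to offload layers.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

# Mixtral-8x7B-v0.1 is a base (non-instruct) model, so use plain
# text completion rather than a chat template.
out = llm("The Mixture of Experts architecture works by", max_tokens=128)
print(out["choices"][0]["text"])
```

The same file can instead be run directly with the llama.cpp command-line tools; the Python binding shown here is just one convenient way to exercise it.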