Felladrin/gguf-sharded-vicuna-160m

Tags: GGUF · Inference Endpoints
License: apache-2.0
Branch: main · 1 contributor · 3 commits
Latest commit: "Create README.md" (f11b024, verified, by Felladrin, 7 months ago)
| File | Size | LFS | Commit message | Last modified |
|---|---|---|---|---|
| .gitattributes | 2.62 kB | | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| README.md | 143 Bytes | | Create README.md | 7 months ago |
| vicuna-160m.F16.shard-00001-of-00007.gguf | 49.9 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.F16.shard-00002-of-00007.gguf | 49.2 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.F16.shard-00003-of-00007.gguf | 47.2 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.F16.shard-00004-of-00007.gguf | 47.2 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.F16.shard-00005-of-00007.gguf | 47.2 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.F16.shard-00006-of-00007.gguf | 47.2 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.F16.shard-00007-of-00007.gguf | 37.8 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.Q8_0.shard-00001-of-00007.gguf | 26.8 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.Q8_0.shard-00002-of-00007.gguf | 26.1 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.Q8_0.shard-00003-of-00007.gguf | 25.1 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.Q8_0.shard-00004-of-00007.gguf | 25.1 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.Q8_0.shard-00005-of-00007.gguf | 25.1 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.Q8_0.shard-00006-of-00007.gguf | 25.1 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
| vicuna-160m.Q8_0.shard-00007-of-00007.gguf | 20.1 MB | LFS | Add sharded GGUF version of Q8_0 and F16 quants | 7 months ago |
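The shard filenames above follow the `<base>.<quant>.shard-XXXXX-of-YYYYY.gguf` pattern that llama.cpp's `gguf-split` tool produces (1-based, zero-padded indices). As a minimal sketch, the helper below rebuilds the ordered shard list for a given quantization so all parts can be fetched before loading; the function name is illustrative, not part of any library:

```python
def shard_filenames(base: str, quant: str, total: int) -> list[str]:
    """Build the ordered list of sharded GGUF filenames for one quantization,
    following the shard-XXXXX-of-YYYYY naming convention used in this repo."""
    return [
        f"{base}.{quant}.shard-{i:05d}-of-{total:05d}.gguf"
        for i in range(1, total + 1)
    ]

# Reproduce the F16 shard names listed in this repo.
f16_shards = shard_filenames("vicuna-160m", "F16", 7)
print(f16_shards[0])   # vicuna-160m.F16.shard-00001-of-00007.gguf
print(f16_shards[-1])  # vicuna-160m.F16.shard-00007-of-00007.gguf
```

With recent llama.cpp builds, pointing the loader at the first shard is typically enough once all shards sit in the same directory, and `llama-gguf-split --merge` can reassemble them into a single `.gguf` file; check your build's tooling, as names and flags have shifted across versions.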