GGUF Quant Collection
Quantized models in GGUF format
This repo contains GGUF format model files for SciPhi's Sensei-7B-V2.
| Name | Quant method | Bits | Size | Max RAM required | Use case |
|---|---|---|---|---|---|
| sensei-7b-v2.Q2_K.gguf | Q2_K | 2 | 2.72 GB | 5.22 GB | significant quality loss - not recommended for most purposes |
| sensei-7b-v2.Q3_K_M.gguf | Q3_K_M | 3 | 3.52 GB | 6.02 GB | very small, high quality loss |
| sensei-7b-v2.Q4_K_M.gguf | Q4_K_M | 4 | 4.37 GB | 6.87 GB | medium, balanced quality - recommended |
| sensei-7b-v2.Q5_K_M.gguf | Q5_K_M | 5 | 5.13 GB | 7.63 GB | large, very low quality loss - recommended |
| sensei-7b-v2.Q6_K.gguf | Q6_K | 6 | 5.94 GB | 8.44 GB | very large, extremely low quality loss |
| sensei-7b-v2.Q8_0.gguf | Q8_0 | 8 | 7.70 GB | 10.20 GB | very large, extremely low quality loss - not recommended |
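As a minimal sketch of how these files can be used, the snippet below downloads the recommended Q4_K_M quant and runs it with llama-cpp-python. The repo ID shown is a placeholder for this repository, and the prompt format is only an illustration; check the base model's documentation for its actual prompt template.

```python
# Sketch: fetch one of the GGUF files listed above and run it with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M file (~4.37 GB) from the Hub.
# NOTE: repo_id is a placeholder; replace it with this repository's actual ID.
model_path = hf_hub_download(
    repo_id="your-username/Sensei-7B-V2-GGUF",
    filename="sensei-7b-v2.Q4_K_M.gguf",
)

# Load the model. n_gpu_layers=0 keeps inference on the CPU; raise it to
# offload layers if llama.cpp was built with GPU support.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

# Simple completion call; the prompt here is just an example and may need to
# follow the base model's expected template.
output = llm("Explain what GGUF quantization is in one paragraph.", max_tokens=256)
print(output["choices"][0]["text"])
```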
Base model: SciPhi/Sensei-7B-V2