morriszms committed
Commit
498ab4c
1 parent: ce494ec

Update README.md

Files changed (1)
  1. README.md +20 -12
README.md CHANGED
@@ -1613,8 +1613,16 @@ This repo contains GGUF format model files for [bigscience/bloom-3b](https://hug
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
+
+<div style="text-align: left; margin: 20px 0;">
+<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+Run them on the TensorBlock client using your local machine ↗
+</a>
+</div>
+
 ## Prompt template
 
+
 ```
 
 ```
@@ -1623,18 +1631,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [bloom-3b-Q2_K.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q2_K.gguf) | Q2_K | 1.516 GB | smallest, significant quality loss - not recommended for most purposes |
-| [bloom-3b-Q3_K_S.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q3_K_S.gguf) | Q3_K_S | 1.707 GB | very small, high quality loss |
-| [bloom-3b-Q3_K_M.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q3_K_M.gguf) | Q3_K_M | 1.905 GB | very small, high quality loss |
-| [bloom-3b-Q3_K_L.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q3_K_L.gguf) | Q3_K_L | 2.016 GB | small, substantial quality loss |
-| [bloom-3b-Q4_0.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q4_0.gguf) | Q4_0 | 2.079 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [bloom-3b-Q4_K_S.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q4_K_S.gguf) | Q4_K_S | 2.088 GB | small, greater quality loss |
-| [bloom-3b-Q4_K_M.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q4_K_M.gguf) | Q4_K_M | 2.235 GB | medium, balanced quality - recommended |
-| [bloom-3b-Q5_0.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q5_0.gguf) | Q5_0 | 2.428 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [bloom-3b-Q5_K_S.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q5_K_S.gguf) | Q5_K_S | 2.428 GB | large, low quality loss - recommended |
-| [bloom-3b-Q5_K_M.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q5_K_M.gguf) | Q5_K_M | 2.546 GB | large, very low quality loss - recommended |
-| [bloom-3b-Q6_K.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q6_K.gguf) | Q6_K | 2.799 GB | very large, extremely low quality loss |
-| [bloom-3b-Q8_0.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/tree/main/bloom-3b-Q8_0.gguf) | Q8_0 | 3.621 GB | very large, extremely low quality loss - not recommended |
+| [bloom-3b-Q2_K.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q2_K.gguf) | Q2_K | 1.516 GB | smallest, significant quality loss - not recommended for most purposes |
+| [bloom-3b-Q3_K_S.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q3_K_S.gguf) | Q3_K_S | 1.707 GB | very small, high quality loss |
+| [bloom-3b-Q3_K_M.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q3_K_M.gguf) | Q3_K_M | 1.905 GB | very small, high quality loss |
+| [bloom-3b-Q3_K_L.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q3_K_L.gguf) | Q3_K_L | 2.016 GB | small, substantial quality loss |
+| [bloom-3b-Q4_0.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q4_0.gguf) | Q4_0 | 2.079 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [bloom-3b-Q4_K_S.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q4_K_S.gguf) | Q4_K_S | 2.088 GB | small, greater quality loss |
+| [bloom-3b-Q4_K_M.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q4_K_M.gguf) | Q4_K_M | 2.235 GB | medium, balanced quality - recommended |
+| [bloom-3b-Q5_0.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q5_0.gguf) | Q5_0 | 2.428 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [bloom-3b-Q5_K_S.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q5_K_S.gguf) | Q5_K_S | 2.428 GB | large, low quality loss - recommended |
+| [bloom-3b-Q5_K_M.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q5_K_M.gguf) | Q5_K_M | 2.546 GB | large, very low quality loss - recommended |
+| [bloom-3b-Q6_K.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q6_K.gguf) | Q6_K | 2.799 GB | very large, extremely low quality loss |
+| [bloom-3b-Q8_0.gguf](https://huggingface.co/tensorblock/bloom-3b-GGUF/blob/main/bloom-3b-Q8_0.gguf) | Q8_0 | 3.621 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
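The table in the second hunk lists each quantized file individually, and the repo's own "Downloading instruction" section is cut off in this diff, so as a minimal, non-authoritative sketch, one of the files could be fetched with the huggingface_hub client; the filename and target directory below are illustrative choices, not anything prescribed by the model card.

```python
# Minimal sketch: fetch one quantized file from the repo referenced in the table above.
# Assumes `pip install huggingface_hub`; the filename and local_dir are illustrative.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="tensorblock/bloom-3b-GGUF",   # repo shown in the table links
    filename="bloom-3b-Q4_K_M.gguf",       # "medium, balanced quality - recommended"
    local_dir="./models",                  # illustrative target directory
)
print(path)  # path to the downloaded GGUF file
```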
 
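On the compatibility note in the first hunk (llama.cpp as of commit b4011): the card does not prescribe a runtime, but as a hedged sketch a downloaded file can typically be loaded through the llama-cpp-python bindings, which wrap llama.cpp. Since the prompt-template section of the card is empty, a plain completion-style prompt is assumed rather than a chat template.

```python
# Minimal sketch: run a completion against a downloaded quant via llama-cpp-python.
# Assumes `pip install llama-cpp-python` (built on a llama.cpp recent enough to read
# these files, i.e. commit b4011 or later) and that the Q4_K_M file from the download
# sketch above sits in ./models. Paths and prompt are illustrative.
from llama_cpp import Llama

llm = Llama(model_path="./models/bloom-3b-Q4_K_M.gguf", n_ctx=2048)
out = llm("The capital of France is", max_tokens=16)  # plain completion, no chat template
print(out["choices"][0]["text"])
```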