morriszms committed on
Commit
31c033c
1 Parent(s): 4e57930

Upload folder using huggingface_hub

Mistral-Nemo-Instruct-2407-Q2_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6fd94ea6e8b7474d029708ea9aa860f01b8c3bc0c62379de200b117e955405f2
-size 4791050848
+oid sha256:20b37d43337d2e573f7974a8b280633dd2dbac7d9f5dc4c4350d2b4b3399d494
+size 4791051104
Mistral-Nemo-Instruct-2407-Q3_K_L.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a61e26f70fd0d011a9c2d912d13a93377782f32e1a96c9ff0f1adba5512fff6c
-size 6561505888
+oid sha256:3a012207d0e8439dab761f2d00fe040d45f83af1f57f9a48ecf30c00ad027f89
+size 6561506144
Mistral-Nemo-Instruct-2407-Q3_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5ebb54d86a4d4ed67f53b8ce0bbb7eba7fefec2eb47915914a2699f1e3bfc0eb
-size 6083093088
+oid sha256:bb28513142192ec8140585141741d2d0ab9362002ba76d6648f9d1bf9c93fcda
+size 6083093344
Mistral-Nemo-Instruct-2407-Q3_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c6f54b95ae80671e93a20b89492a16c47fe83a99ea3a53a24462c976de56ff34
-size 5534229088
+oid sha256:5a9b3b8be7cf54489ac7b2bd206d53a2d5a8f3790f38dd2d75455db28789bcdc
+size 5534229344
Mistral-Nemo-Instruct-2407-Q4_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:76b6becad280986b0ce6c8b55db2c33d49f905ad0583ac54b79db30c3f0e6afa
-size 7071703648
+oid sha256:f709d910828ca5e91f6d34ee86080e6dfeb4bac23c7b51bba699db0e650866e2
+size 7071703904
Mistral-Nemo-Instruct-2407-Q4_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:336b07631feebe5adb6c45e96a20d80487986a8611c58c97b62fcb0ba9ea6215
-size 7477207648
+oid sha256:e98a46cabfb42358e3d1d190d28622cb7cd18ce0bbac4ec64beaef7eb21e3bbe
+size 7477207904
Mistral-Nemo-Instruct-2407-Q4_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5f2c6ae6b7e88fb08f5a39776b8e9ec170593a7044f997d4de4a399dd4c5dcc2
-size 7120200288
+oid sha256:3e8ebd01782afdb2d7452665100d743fd893cb6c44b515fe7139e104e3a9897c
+size 7120200544
Mistral-Nemo-Instruct-2407-Q5_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2d7d0e627340909704cccf92fd4e87b0f217a37b3f53832cc1f665b954f1acb4
-size 8518738528
+oid sha256:0dbb9f9ef089f3916c7bca0817ed4f59c803ba41f5759f94e2b490e1f6dcc357
+size 8518738784
Mistral-Nemo-Instruct-2407-Q5_K_M.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e82bfc5bd17ca94c1246d55118081b4912a74632563fb5edb4668dce64b18d81
-size 8727634528
+oid sha256:c31a5de8dbf1e0fe7a9c5c4ae56636d7e401e7e7b3693a3dbd1520f1d2e204bc
+size 8727634784
Mistral-Nemo-Instruct-2407-Q5_K_S.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c17457bea669e05116ee840a0701d33b97e1553d3dcd97a2d6b6ce02bea18d2a
-size 8518738528
+oid sha256:d302b785f45f4dfcd93101c3c1f81431201780ea4bb82ce4e6ab251bd0d4f9c4
+size 8518738784
Mistral-Nemo-Instruct-2407-Q6_K.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:22ca46d4e38e56e394bf09b32e39eab5d3ec6bee10b3b042061c6b4433acb138
-size 10056213088
+oid sha256:90ea830ccb3f1abc21062c135f00e88c6ed7dc874fb9a4539d18f5a15149d484
+size 10056213344
Mistral-Nemo-Instruct-2407-Q8_0.gguf CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6bd7fd6657f73af10c00228102fdf7f67512fffcd2c4ab442f55c6c48a39be50
-size 13022372448
+oid sha256:681f8e0fb182d5990da91af404a672c80ff66c8b08b7740fb3ee5f005b5db651
+size 13022372704
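
Every diff above edits a Git LFS pointer file rather than the model binary itself: each pointer is three lines recording the spec version, the blob's `oid sha256:` digest, and its `size` in bytes, so a changed quantization shows up as a new digest and size. A minimal sketch of parsing such a pointer and verifying a downloaded blob against it (the helper names are ours, not part of git-lfs):

```python
import hashlib
from pathlib import Path

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

def verify_against_pointer(blob_path: Path, pointer_text: str) -> bool:
    """Check a downloaded blob's byte size and sha256 against its LFS pointer."""
    fields = parse_lfs_pointer(pointer_text)
    expected_oid = fields["oid"].removeprefix("sha256:")
    expected_size = int(fields["size"])
    data = blob_path.read_bytes()
    if len(data) != expected_size:
        return False
    return hashlib.sha256(data).hexdigest() == expected_oid
```

The size check runs first because it is cheap; hashing a multi-gigabyte .gguf file is the expensive step.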
README.md CHANGED
@@ -1,15 +1,21 @@
 ---
 language:
 - en
-library_name: transformers
+- fr
+- de
+- es
+- it
+- pt
+- ru
+- zh
+- ja
 license: apache-2.0
+base_model: mistralai/Mistral-Nemo-Instruct-2407
+extra_gated_description: If you want to learn more about how we process your personal
+  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
 tags:
-- mistral
-- unsloth
-- transformers
 - TensorBlock
 - GGUF
-base_model: unsloth/Mistral-Nemo-Instruct-2407
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -23,13 +29,12 @@ base_model: unsloth/Mistral-Nemo-Instruct-2407
 </div>
 </div>
 
-## unsloth/Mistral-Nemo-Instruct-2407 - GGUF
+## mistralai/Mistral-Nemo-Instruct-2407 - GGUF
 
-This repo contains GGUF format model files for [unsloth/Mistral-Nemo-Instruct-2407](https://huggingface.co/unsloth/Mistral-Nemo-Instruct-2407).
+This repo contains GGUF format model files for [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
@@ -38,7 +43,6 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 ## Prompt template
 
-
 ```
 <s>[INST]{system_prompt}
 
@@ -49,18 +53,18 @@ The files were quantized using machines provided by [TensorBlock](https://tensor
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q2_K.gguf) | Q2_K | 4.462 GB | smallest, significant quality loss - not recommended for most purposes |
-| [Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | Q3_K_S | 5.154 GB | very small, high quality loss |
-| [Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | Q3_K_M | 5.665 GB | very small, high quality loss |
-| [Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | Q3_K_L | 6.111 GB | small, substantial quality loss |
-| [Mistral-Nemo-Instruct-2407-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_0.gguf) | Q4_0 | 6.586 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
-| [Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | Q4_K_S | 6.631 GB | small, greater quality loss |
-| [Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | Q4_K_M | 6.964 GB | medium, balanced quality - recommended |
-| [Mistral-Nemo-Instruct-2407-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_0.gguf) | Q5_0 | 7.934 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
-| [Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | Q5_K_S | 7.934 GB | large, low quality loss - recommended |
-| [Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | Q5_K_M | 8.128 GB | large, very low quality loss - recommended |
-| [Mistral-Nemo-Instruct-2407-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q6_K.gguf) | Q6_K | 9.366 GB | very large, extremely low quality loss |
-| [Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q8_0.gguf) | Q8_0 | 12.128 GB | very large, extremely low quality loss - not recommended |
+| [Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q2_K.gguf) | Q2_K | 4.791 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | Q3_K_S | 5.534 GB | very small, high quality loss |
+| [Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | Q3_K_M | 6.083 GB | very small, high quality loss |
+| [Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | Q3_K_L | 6.562 GB | small, substantial quality loss |
+| [Mistral-Nemo-Instruct-2407-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_0.gguf) | Q4_0 | 7.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | Q4_K_S | 7.120 GB | small, greater quality loss |
+| [Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | Q4_K_M | 7.477 GB | medium, balanced quality - recommended |
+| [Mistral-Nemo-Instruct-2407-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_0.gguf) | Q5_0 | 8.519 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | Q5_K_S | 8.519 GB | large, low quality loss - recommended |
+| [Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | Q5_K_M | 8.728 GB | large, very low quality loss - recommended |
+| [Mistral-Nemo-Instruct-2407-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q6_K.gguf) | Q6_K | 10.056 GB | very large, extremely low quality loss |
+| [Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q8_0.gguf) | Q8_0 | 13.022 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
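
The README's downloading section is truncated in this diff chunk. As a hedged sketch only, assuming the `huggingface-cli download <repo_id> <filename> --local-dir <dir>` interface, one way to build the fetch command for any quant in the table above (`gguf_filename` and `download_command` are our own illustrative helpers, not TensorBlock's documented instructions):

```python
import shlex

# Repo id taken from the table links above.
REPO_ID = "tensorblock/Mistral-Nemo-Instruct-2407-GGUF"

def gguf_filename(quant: str) -> str:
    """Filename pattern used by this repo's file listing, e.g. Q4_K_M."""
    return f"Mistral-Nemo-Instruct-2407-{quant}.gguf"

def download_command(quant: str, local_dir: str = ".") -> str:
    """Build a huggingface-cli invocation that fetches one quantized file."""
    return shlex.join([
        "huggingface-cli", "download", REPO_ID,
        gguf_filename(quant),
        "--local-dir", local_dir,
    ])

print(download_command("Q4_K_M"))
```

Running the printed command requires `pip install huggingface_hub`; per the table, the Q4_K_M file is roughly a 7.5 GB download.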