Upload folder using huggingface_hub
- Mistral-Nemo-Instruct-2407-Q2_K.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q3_K_L.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q3_K_M.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q3_K_S.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q4_0.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q4_K_M.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q4_K_S.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q5_0.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q5_K_M.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q5_K_S.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q6_K.gguf +2 -2
- Mistral-Nemo-Instruct-2407-Q8_0.gguf +2 -2
- README.md +25 -21
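
A commit like this is typically produced with the `upload_folder` helper from the `huggingface_hub` Python library, which the commit message references. A minimal sketch, assuming the quantized files sit in a local folder (the local path is illustrative):

```python
from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login`

# Upload the folder's contents as a single commit. Large binaries such as
# *.gguf are stored through Git LFS, which is why each file diff below
# changes only an LFS pointer (an oid digest plus a byte size).
api.upload_folder(
    folder_path="./Mistral-Nemo-Instruct-2407-GGUF",  # illustrative
    repo_id="tensorblock/Mistral-Nemo-Instruct-2407-GGUF",
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```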
Mistral-Nemo-Instruct-2407-Q2_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:20b37d43337d2e573f7974a8b280633dd2dbac7d9f5dc4c4350d2b4b3399d494
+size 4791051104
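
Each `.gguf` entry in this commit is a Git LFS pointer file holding just a `version` line, a SHA-256 `oid`, and the byte `size`; the eleven diffs that follow repeat the same pattern with different digests. A downloaded file can be checked against its pointer. A minimal sketch using the Q2_K values above:

```python
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, expected_oid: str, expected_size: int) -> bool:
    """Return True if the file's size and SHA-256 match its LFS pointer."""
    p = Path(path)
    if p.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_oid

print(matches_lfs_pointer(
    "Mistral-Nemo-Instruct-2407-Q2_K.gguf",
    "20b37d43337d2e573f7974a8b280633dd2dbac7d9f5dc4c4350d2b4b3399d494",
    4791051104,
))
```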
Mistral-Nemo-Instruct-2407-Q3_K_L.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3a012207d0e8439dab761f2d00fe040d45f83af1f57f9a48ecf30c00ad027f89
+size 6561506144
Mistral-Nemo-Instruct-2407-Q3_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:bb28513142192ec8140585141741d2d0ab9362002ba76d6648f9d1bf9c93fcda
+size 6083093344
Mistral-Nemo-Instruct-2407-Q3_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:5a9b3b8be7cf54489ac7b2bd206d53a2d5a8f3790f38dd2d75455db28789bcdc
+size 5534229344
Mistral-Nemo-Instruct-2407-Q4_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:f709d910828ca5e91f6d34ee86080e6dfeb4bac23c7b51bba699db0e650866e2
+size 7071703904
Mistral-Nemo-Instruct-2407-Q4_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:e98a46cabfb42358e3d1d190d28622cb7cd18ce0bbac4ec64beaef7eb21e3bbe
+size 7477207904
Mistral-Nemo-Instruct-2407-Q4_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:3e8ebd01782afdb2d7452665100d743fd893cb6c44b515fe7139e104e3a9897c
+size 7120200544
Mistral-Nemo-Instruct-2407-Q5_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:0dbb9f9ef089f3916c7bca0817ed4f59c803ba41f5759f94e2b490e1f6dcc357
+size 8518738784
Mistral-Nemo-Instruct-2407-Q5_K_M.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c31a5de8dbf1e0fe7a9c5c4ae56636d7e401e7e7b3693a3dbd1520f1d2e204bc
+size 8727634784
Mistral-Nemo-Instruct-2407-Q5_K_S.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:d302b785f45f4dfcd93101c3c1f81431201780ea4bb82ce4e6ab251bd0d4f9c4
+size 8518738784
Mistral-Nemo-Instruct-2407-Q6_K.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:90ea830ccb3f1abc21062c135f00e88c6ed7dc874fb9a4539d18f5a15149d484
+size 10056213344
Mistral-Nemo-Instruct-2407-Q8_0.gguf
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:681f8e0fb182d5990da91af404a672c80ff66c8b08b7740fb3ee5f005b5db651
+size 13022372704
README.md
CHANGED
@@ -1,15 +1,21 @@
 ---
 language:
 - en
-
+- fr
+- de
+- es
+- it
+- pt
+- ru
+- zh
+- ja
 license: apache-2.0
+base_model: mistralai/Mistral-Nemo-Instruct-2407
+extra_gated_description: If you want to learn more about how we process your personal
+  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
 tags:
-- mistral
-- unsloth
-- transformers
 - TensorBlock
 - GGUF
-base_model: unsloth/Mistral-Nemo-Instruct-2407
 ---
 
 <div style="width: auto; margin-left: auto; margin-right: auto">
@@ -23,13 +29,12 @@
 </div>
 </div>
 
-##
+## mistralai/Mistral-Nemo-Instruct-2407 - GGUF
 
-This repo contains GGUF format model files for [
+This repo contains GGUF format model files for [mistralai/Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407).
 
 The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
 
-
 <div style="text-align: left; margin: 20px 0;">
 <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
 Run them on the TensorBlock client using your local machine ↗
@@ -38,7 +43,6 @@
 
 ## Prompt template
 
-
 ```
 <s>[INST]{system_prompt}
 
@@ -49,18 +53,18 @@
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q2_K.gguf) | Q2_K | 4.
-| [Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | Q3_K_S | 5.
-| [Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | Q3_K_M |
-| [Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | Q3_K_L | 6.
-| [Mistral-Nemo-Instruct-2407-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_0.gguf) | Q4_0 |
-| [Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | Q4_K_S |
-| [Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | Q4_K_M |
-| [Mistral-Nemo-Instruct-2407-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_0.gguf) | Q5_0 |
-| [Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | Q5_K_S |
-| [Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | Q5_K_M | 8.
-| [Mistral-Nemo-Instruct-2407-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q6_K.gguf) | Q6_K |
-| [Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q8_0.gguf) | Q8_0 |
+| [Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q2_K.gguf) | Q2_K | 4.791 GB | smallest, significant quality loss - not recommended for most purposes |
+| [Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | Q3_K_S | 5.534 GB | very small, high quality loss |
+| [Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | Q3_K_M | 6.083 GB | very small, high quality loss |
+| [Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | Q3_K_L | 6.562 GB | small, substantial quality loss |
+| [Mistral-Nemo-Instruct-2407-Q4_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_0.gguf) | Q4_0 | 7.072 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| [Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | Q4_K_S | 7.120 GB | small, greater quality loss |
+| [Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | Q4_K_M | 7.477 GB | medium, balanced quality - recommended |
+| [Mistral-Nemo-Instruct-2407-Q5_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_0.gguf) | Q5_0 | 8.519 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+| [Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | Q5_K_S | 8.519 GB | large, low quality loss - recommended |
+| [Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | Q5_K_M | 8.728 GB | large, very low quality loss - recommended |
+| [Mistral-Nemo-Instruct-2407-Q6_K.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q6_K.gguf) | Q6_K | 10.056 GB | very large, extremely low quality loss |
+| [Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/tensorblock/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q8_0.gguf) | Q8_0 | 13.022 GB | very large, extremely low quality loss - not recommended |
 
 
 ## Downloading instruction
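
The visible diff ends at the "Downloading instruction" heading, so that section's text is not shown here. For reference, a single quant can be fetched with `hf_hub_download` from `huggingface_hub`; a minimal sketch (the chosen quant and target directory are illustrative):

```python
from huggingface_hub import hf_hub_download

# Download one quantized file from the repo's main branch into ./models,
# resolving the LFS pointer to the actual weights file.
path = hf_hub_download(
    repo_id="tensorblock/Mistral-Nemo-Instruct-2407-GGUF",
    filename="Mistral-Nemo-Instruct-2407-Q4_K_M.gguf",
    local_dir="./models",  # illustrative target directory
)
print(path)
```

The downloaded file can then be loaded with any llama.cpp build at or after the commit b4011 noted in the README.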