Update README.md

README.md (CHANGED)

---
license: cc-by-nc-2.0
language: en
---

These are GGUF quantized versions of [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf).

The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`.
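
For reference, a matrix like this is typically produced with llama.cpp's `imatrix` tool; the invocation below is only a sketch, and the f16 model filename and output path are assumed placeholders rather than files from this repository.

```sh
# A minimal sketch, assuming llama.cpp's imatrix tool and a hypothetical f16 GGUF
# conversion of the source model; the filenames here are placeholders.
# -c 512 sets the chunk size and --chunks 200 processes 200 chunks, i.e. ~100K tokens.
./imatrix -m lzlv_70b_fp16.gguf -f wiki.train.raw -c 512 --chunks 200 -o imatrix.dat
```

The resulting `imatrix.dat` is then typically passed to `quantize --imatrix` when generating the low-bit IQ quants.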

The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp, version `147b17a` or later. The IQ3_XXS requires version `f4d7e54` or later.
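
As a rough usage sketch, assuming a llama.cpp checkout at a compatible commit and a placeholder filename for whichever quant you download:

```sh
# A minimal sketch: build llama.cpp at a compatible commit and load an IQ2_XS file.
# Commit 147b17a (or later) covers IQ2_XXS/IQ2_XS; use f4d7e54 or later for IQ3_XXS.
# The .gguf path below is a placeholder for the file you actually downloaded.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout 147b17a
make
./main -m /path/to/lzlv_70b-IQ2_XS.gguf -p "Hello" -n 64
```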

Some model files above 50GB are split into smaller files. To concatenate them, use the `cat` command (on Windows, use PowerShell): `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
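
For example, assuming the parts keep the single-file name plus a suffix (the `foo-Q6_K` name above is a placeholder, not a file shipped here):

```sh
# A minimal sketch of joining split parts on Linux/macOS.
ls foo-Q6_K.gguf.*                   # list the parts; the glob expands in sorted order
cat foo-Q6_K.gguf.* > foo-Q6_K.gguf  # concatenate them into a single model file
rm foo-Q6_K.gguf.*                   # optionally delete the parts after a successful join
```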