Update README.md
README.md CHANGED

@@ -17,12 +17,12 @@ This repository contains **Llama-3-Chinese-8B-Instruct-v2-GGUF** (llama.cpp/olla
 
 Further details (performance, usage, etc.) should refer to GitHub project page: https://github.com/ymcui/Chinese-LLaMA-Alpaca-3
 
-*Note: PPL for v2 models are higher than v1, as the v2's base model (Meta-Llama-3-8B-Instruct) also has a larger PPL than v1's (Meta-Llama-3-8B).*
-
 ## Performance
 
 Metric: PPL, lower is better
 
+*Note: PPL for v2 models are higher than v1, as the v2's base model (Meta-Llama-3-8B-Instruct) also has a larger PPL than v1's (Meta-Llama-3-8B).*
+
 | Quant | Size | PPL |
 | :---: | -------: | ------------------: |
 | Q2_K | 2.96 GB | 13.2488 +/- 0.17217 |
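The table reports PPL (perplexity), where lower is better. As a reminder of what that number means, here is a minimal sketch: perplexity is the exponential of the mean per-token negative log-likelihood. The `perplexity` helper and the NLL values below are illustrative only, not taken from the evaluation behind the table.

```python
import math

def perplexity(nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(nlls) / len(nlls))

# Hypothetical per-token NLLs (nats) for a short evaluation text.
nlls = [2.3, 2.6, 2.1, 2.8]
print(perplexity(nlls))  # exp(2.45), roughly 11.6
```

This is why the note above compares base models: a base model with a higher average per-token NLL on the evaluation text will show a higher PPL, and its fine-tuned/quantized variants tend to inherit that offset.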