license: cc-by-nc-sa-4.0
---

# **PlatYi-34B-Llama-Q-v3**
<img src='./PlatYi.png' width=256>

## Model Details

**Output** Models generate text only.

**Model Architecture**
PlatYi-34B-Llama-Q-v3 is an auto-regressive language model based on the Yi-34B transformer architecture.
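
A quick way to verify the architecture family without downloading the 34B weights is to inspect the Hub config. This is a minimal sketch, assuming the repository exposes a standard Llama/Yi-style config:

```python
from transformers import AutoConfig

# Fetch only the config (a few KB), not the model weights.
config = AutoConfig.from_pretrained("kyujinpy/PlatYi-34B-Llama-Q-v3")
print(config.model_type)                              # expected to report the Llama-family architecture
print(config.num_hidden_layers, config.hidden_size)   # standard fields on Llama-style configs
```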

**Blog Link**
Blog: [Coming soon...]

The lora_r value is 64 (a configuration sketch follows the results table below).

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| PlatYi-34B-Llama-Q-v3 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| PlatYi-34B-Llama-Q-v2 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| PlatYi-34B-Llama-Q | 71.13 | 65.70 | 85.22 | 78.78 | 53.64 | 83.03 | 60.42 |
| PlatYi-34B-Llama | 68.37 | 67.83 | 85.35 | 78.26 | 53.46 | 82.87 | 42.46 |

NaN entries mark scores that have not been reported yet.
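
As a point of reference, the lora_r setting noted above could be expressed with the `peft` library roughly as follows. This is a minimal sketch: every hyperparameter except `r=64` is an illustrative assumption, not a value stated in this card.

```python
from peft import LoraConfig

# Only r=64 comes from the card; alpha, dropout, and target modules are guesses for illustration.
lora_config = LoraConfig(
    r=64,                     # LoRA rank stated in the card
    lora_alpha=16,            # assumed scaling factor
    lora_dropout=0.05,        # assumed dropout on the adapter layers
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",    # causal language modeling fine-tuning
)
```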

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/PlatYi-34B-Llama-Q-v3"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    # The original snippet was cut off after return_dict=True; the lines below are an
    # assumed completion following the usual loading pattern for this card's sibling models.
    torch_dtype=torch.float16,  # half precision to fit the 34B weights
    device_map="auto",          # let accelerate place layers across available devices
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)  # matches the AutoTokenizer import above
```
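
For completeness, a minimal generation sketch using the objects loaded above; the prompt text and decoding settings are illustrative assumptions, not part of the original card.

```python
# Illustrative only: prompt and sampling parameters are assumptions.
prompt = "What is a large language model?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

output_ids = OpenOrca.generate(
    **inputs,
    max_new_tokens=128,   # cap on generated tokens
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,      # softens the output distribution
)
print(OpenOrca_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```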